1
Domain-Independent Lifelong Problem Solving Through Distributed ALife Actors. Artificial Life 2024; 30:259-276. [PMID: 38048055] [DOI: 10.1162/artl_a_00418]
Abstract
A domain-independent problem-solving system based on principles of Artificial Life is introduced. In this system, DIAS, the input and output dimensions of the domain are laid out in a spatial medium. A population of actors, each seeing only part of this medium, solves problems collectively in it. The process is independent of the domain and can be implemented through different kinds of actors. Through a set of experiments on various problem domains, DIAS is shown able to solve problems with different dimensionality and complexity, to require no hyperparameter tuning for new problems, and to exhibit lifelong learning, that is, to adapt rapidly to run-time changes in the problem domain, and to do it better than a standard, noncollective approach. DIAS therefore demonstrates a role for ALife in building scalable, general, and adaptive problem-solving systems.
2
A Spatial Artificial Chemistry Implementation of a Gene Regulatory Network Aimed at Generating Protein Concentration Dynamics. Artificial Life 2024:1-26. [PMID: 38421716] [DOI: 10.1162/artl_a_00431]
Abstract
Gene regulatory networks are networks of interactions in organisms responsible for determining the production levels of proteins and peptides. Mathematical and computational models of gene regulatory networks have been proposed, some of them rather abstract and called artificial regulatory networks. In this contribution, a spatial model for gene regulatory networks is proposed that is biologically more realistic and incorporates an artificial chemistry to realize the interaction between regulatory proteins, called transcription factors, and the regulatory sites of simulated genes. The result is a system that is quite robust while able to produce complex dynamics similar to what can be observed in nature. Here an analysis of the impact of the initial states of the system on the produced dynamics is performed, showing that such models are evolvable and can be directed toward producing desired protein dynamics.
3
Upgrades of Genetic Programming for Data-Driven Modeling of Time Series. Evolutionary Computation 2023; 31:401-432. [PMID: 37126579] [DOI: 10.1162/evco_a_00330]
Abstract
In many engineering fields and scientific disciplines, the results of experiments are in the form of time series, which can be quite problematic to interpret and model. Genetic programming tools are quite powerful in extracting knowledge from data. In this work, several upgrades and refinements are proposed and tested to improve the explorative capabilities of symbolic regression (SR) via genetic programming (GP) for the investigation of time series, with the objective of extracting mathematical models directly from the available signals. The main task is not simply prediction but consists of identifying interpretable equations, reflecting the nature of the mechanisms generating the signals. The implemented improvements involve almost all aspects of GP, from the knowledge representation and the genetic operators to the fitness function. The unique capabilities of genetic programming, to accommodate prior information and knowledge, are also leveraged effectively. The proposed upgrades cover the most important applications of empirical modeling of time series, ranging from the identification of autoregressive systems and partial differential equations to the search of models in terms of dimensionless quantities and appropriate physical units. Particularly delicate systems to identify, such as those showing hysteretic behavior or governed by delayed differential equations, are also addressed. The potential of the developed tools is substantiated with both a battery of systematic numerical tests with synthetic signals and with applications to experimental data.
4
Vegetation Evolution with Dynamic Maturity Strategy and Diverse Mutation Strategy for Solving Optimization Problems. Biomimetics (Basel) 2023; 8:454. [PMID: 37887585] [PMCID: PMC10604831] [DOI: 10.3390/biomimetics8060454]
Abstract
We introduce two new search strategies to further improve the performance of vegetation evolution (VEGE) for solving continuous optimization problems. Specifically, the first strategy, named the dynamic maturity strategy, allows individuals with better fitness to have a higher probability of generating more seed individuals. Here, all individuals are first allocated a fixed number of seeds to generate, and the remaining allocatable seeds are then distributed competitively according to fitness. Since VEGE performs poorly in escaping local optima, we propose the diverse mutation strategy as the second search operator, offering several different mutation methods to increase the diversity of seed individuals. In other words, each generated seed individual will randomly choose one of the methods to mutate with a lower probability. To evaluate the performance of the two proposed strategies, we run our proposal (VEGE + two strategies), VEGE, and seven other advanced evolutionary algorithms (EAs) on the CEC2013 benchmark functions and seven popular engineering problems. Finally, we analyze the respective contributions of these two strategies to VEGE. The experimental and statistical results confirm that our proposal significantly accelerates convergence and improves the convergence accuracy of the conventional VEGE on most optimization problems.
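The abstract describes the two strategies only at a high level, so the following Python sketch illustrates one plausible reading of them: a fixed base allocation of seeds plus a fitness-proportional competitive share, and a low-probability mutation that picks one of several operators at random. All function names, parameter values and the minimisation assumption are illustrative guesses, not taken from the paper.

    import random

    def allocate_seeds(fitnesses, base_seeds=2, extra_seeds=20):
        # Dynamic maturity (sketch): every individual first gets a fixed number
        # of seeds; the remaining seeds are distributed competitively, with
        # better (here: lower) fitness receiving a proportionally larger share.
        counts = [base_seeds] * len(fitnesses)
        weights = [1.0 / (1e-12 + f) for f in fitnesses]
        total = sum(weights)
        for _ in range(extra_seeds):
            r, acc = random.uniform(0.0, total), 0.0
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    counts[i] += 1
                    break
        return counts

    def diverse_mutation(x, prob=0.1, lo=-5.0, hi=5.0):
        # Diverse mutation (sketch): with low probability a seed applies one
        # randomly chosen mutation method to a single dimension.
        if random.random() >= prob:
            return list(x)
        y = list(x)
        j = random.randrange(len(y))
        method = random.choice(["gaussian", "uniform", "reset"])
        if method == "gaussian":
            y[j] += random.gauss(0.0, 0.1 * (hi - lo))
        elif method == "uniform":
            y[j] += random.uniform(-0.1, 0.1) * (hi - lo)
        else:
            y[j] = random.uniform(lo, hi)
        y[j] = min(max(y[j], lo), hi)
        return y

    print(allocate_seeds([0.2, 1.5, 3.0]))
    print(diverse_mutation([0.0, 1.0, 2.0], prob=1.0))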
5
Fault Reconfiguration in Distribution Networks Based on Improved Discrete Multimodal Multi-Objective Particle Swarm Optimization Algorithm. Biomimetics (Basel) 2023; 8:431. [PMID: 37754182] [PMCID: PMC10526146] [DOI: 10.3390/biomimetics8050431]
Abstract
Distribution network reconfiguration involves altering the topology structure of distribution networks by adjusting the switch states, which plays an important role in the smart grid since it can effectively isolate faults, reduce the power loss, and improve the system stability. However, the fault reconfiguration of the distribution network is often regarded as a single-objective or multi-objective optimization problem, and its multimodality is often ignored in existing studies. Therefore, the obtained solutions may be unsuitable or infeasible when the environment changes. To improve the availability and robustness of the solutions, an improved discrete multimodal multi-objective particle swarm optimization (IDMMPSO) algorithm is proposed to solve the fault reconfiguration problem of the distribution network. To demonstrate the performance of the proposed IDMMPSO algorithm, the IEEE33-bus distribution system is used in the experiment. Moreover, the proposed algorithm is compared with other competitors. Experimental results show that the proposed algorithm can provide different equivalent solutions for decision-makers in solving the fault reconfiguration problem of the distribution network.
6
Epigenetic opportunities for evolutionary computation. Royal Society Open Science 2023; 10:221256. [PMID: 37181799] [PMCID: PMC10170609] [DOI: 10.1098/rsos.221256]
Abstract
Evolutionary computation is a group of biologically inspired algorithms used to solve complex optimization problems. It can be split into evolutionary algorithms, which take inspiration from genetic inheritance, and swarm intelligence algorithms, which take inspiration from cultural inheritance. However, much of the modern evolutionary literature remains relatively unexplored. To understand which evolutionary mechanisms have been considered, and which have been overlooked, this paper breaks down successful bioinspired algorithms under a contemporary biological framework based on the extended evolutionary synthesis, an extension of the classical, genetics-focused, modern synthesis. Although the idea of the extended evolutionary synthesis has not been fully accepted in evolutionary theory, it presents many interesting concepts that could benefit evolutionary computation. The analysis shows that Darwinism and the modern synthesis have been incorporated into evolutionary computation, but the extended evolutionary synthesis has been broadly ignored beyond three elements: cultural inheritance, incorporated in the subset of swarm intelligence algorithms; evolvability, through the covariance matrix adaptation evolution strategy (CMA-ES); and multilevel selection, through the multilevel selection genetic algorithm (MLSGA). The framework shows a gap in epigenetic inheritance for evolutionary computation, despite epigenetics being a key building block in modern interpretations of evolution. This leaves a diverse range of biologically inspired mechanisms as low-hanging fruit that should be explored further within evolutionary computation, and it illustrates the potential of epigenetic-based approaches through recent benchmarks in the literature.
7
Group theoretic particle swarm optimization for multi-level threshold lung cancer image segmentation. Quant Imaging Med Surg 2023; 13:1312-1322. [PMID: 36915344] [PMCID: PMC10006099] [DOI: 10.21037/qims-22-295]
Abstract
Background: Image segmentation is an important step during the processing of medical images. For example, in computer-aided diagnostic systems for lung cancer image analysis, the segmented regions of tumors would help doctors in early diagnosis to determine timely and appropriate treatment possibilities and thereby improve the survival rate of the patients. However, routine manual segmentation of large numbers of medical images is difficult and time-consuming, which is the challenge we aim to tackle using our proposed method.
Methods: A novel image segmentation method with an evolutionary learning technique named Group Theoretic Particle Swarm Optimization is proposed. It tackles the multi-level thresholding optimization problem during the segmentation process and rebuilds the search paradigm on the solid mathematical foundation of the symmetric group along four designable aspects: particle encoding, solution landscape, neighborhood movement and swarm topology. The Kapur's entropy of the multi-level thresholds is used as the objective function.
Results: In contrast to conventional metaheuristic methods for lung cancer image segmentation, the newly presented method generates the best performance among them. Experimental results show that its Kapur's entropy has a value of 9.07, which is 16% higher than the worst case. Computational time is acceptable at 173.730 seconds, the average level of the evaluation metrics [Kappa, Precision, Recall, F1-measure, intersection over union (IoU) and receiver operating characteristic (ROC)] is over 90%, and the search over multi-level threshold combinations converges in the later phase, after 700 iterations. The ablation study indicates that all components contribute significantly to the proposed method.
Conclusions: Group Theoretic Particle Swarm Optimization for multi-level threshold segmentation is an efficient way to split a medical image into distinct regions and to extract tumor tissue regions from the background. It maintains a balanced relationship between diversification and intensification during the search and helps clinicians make diagnoses more accurately. Our proposed method possesses potential medical value and clinical meaning.
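The abstract names Kapur's entropy as the objective of the multi-level thresholding problem. The sketch below shows how that objective is commonly computed from a grey-level histogram; it does not reproduce the group-theoretic PSO itself, and the histogram and thresholds are made-up examples.

    import numpy as np

    def kapur_entropy(hist, thresholds):
        # Sum of per-class entropies: the thresholds split the grey levels into
        # classes, and each class contributes the entropy of its normalised
        # probabilities. Higher values indicate a better threshold set.
        p = np.asarray(hist, dtype=float)
        p = p / p.sum()
        bounds = [0] + sorted(int(t) for t in thresholds) + [len(p)]
        total = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = p[lo:hi].sum()
            if w <= 0.0:
                continue
            q = p[lo:hi] / w
            q = q[q > 0]
            total -= (q * np.log(q)).sum()
        return total

    # Example: score a 3-threshold split of a random 8-bit histogram.
    hist = np.random.randint(1, 100, size=256)
    print(kapur_entropy(hist, [60, 120, 180]))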
8
Cellular Competency during Development Alters Evolutionary Dynamics in an Artificial Embryogeny Model. Entropy (Basel, Switzerland) 2023; 25:e25010131. [PMID: 36673272] [PMCID: PMC9858125] [DOI: 10.3390/e25010131]
Abstract
Biological genotypes do not code directly for phenotypes; developmental physiology is the control layer that separates genomes from the capacities ascertained by selection. A key aspect is cellular competency, since cells are not passive materials but descendants of unicellular organisms with complex context-sensitive behavioral capabilities. To probe the effects of different degrees of cellular competency on evolutionary dynamics, we used an evolutionary simulation in the context of minimal artificial embryogeny. Virtual embryos consisted of a single axis of positional information values provided by cells' 'structural genes', operated upon by an evolutionary cycle in which embryos' fitness was proportional to the monotonicity of the axial gradient. Evolutionary dynamics were evaluated in two modes: hardwired development (genotype directly encodes phenotype), and a more realistic mode in which cells interact prior to evaluation by the fitness function ("regulative" development). We find that even a minimal ability of cells to improve their position in the embryo results in better performance of the evolutionary search. Crucially, we observed that increasing the behavioral competency masks the raw fitness encoded by the structural genes, with selection favoring improvements to developmental problem-solving capacities over improvements to the structural genome. This suggests the existence of a powerful ratchet mechanism: evolution progressively becomes locked in to improvements in the intelligence of its agential substrate, with reduced pressure on the structural genome. This kind of feedback loop, in which evolution increasingly puts more effort into the developmental software than into perfecting the hardware, explains the very puzzling divergence of genome from anatomy in species like planaria. In addition, it identifies a possible driver for scaling intelligence over evolutionary time, and suggests strategies for engineering novel systems in silico and in bioengineering.
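As a concrete illustration of the setup described above, the following sketch scores a one-dimensional 'embryo' by the monotonicity of its axial gradient and contrasts hardwired evaluation with a simple form of cellular competency in which cells swap with an out-of-order neighbour before the fitness call. The swap rule, budget and fitness proxy are assumptions made for illustration, not the paper's exact model.

    import random

    def monotonicity(cells):
        # Fitness proxy: fraction of adjacent pairs in non-decreasing order.
        pairs = list(zip(cells, cells[1:]))
        return sum(a <= b for a, b in pairs) / len(pairs)

    def develop(cells, competency_steps):
        # 'Regulative' development (sketch): cells may swap with a neighbour
        # to improve the local gradient, up to a fixed competency budget.
        cells = list(cells)
        for _ in range(competency_steps):
            i = random.randrange(len(cells) - 1)
            if cells[i] > cells[i + 1]:
                cells[i], cells[i + 1] = cells[i + 1], cells[i]
        return cells

    genome = [random.random() for _ in range(20)]        # 'structural genes'
    print(monotonicity(genome))                          # hardwired development
    print(monotonicity(develop(genome, 50)))             # after cell behaviour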
9
Editorial: Evolutionary computation-based machine learning and its applications for multi-robot systems. Front Neurorobot 2023; 17:1177909. [PMID: 37168714] [PMCID: PMC10165071] [DOI: 10.3389/fnbot.2023.1177909]
10
Morphological Development at the Evolutionary Timescale: Robotic Developmental Evolution. Artificial Life 2022; 28:3-21. [PMID: 35287173] [DOI: 10.1162/artl_a_00357]
Abstract
Evolution and development operate at different timescales: generations for the one, a lifetime for the other. These two processes, the basis of much of life on earth, interact in many non-trivial ways, but their temporal hierarchy, with evolution overarching development, is observed for most multicellular life forms. When designing robots, however, this tenet lifts: it becomes, however natural, a design choice. We propose to invert this temporal hierarchy and design a developmental process happening at the phylogenetic timescale. On top of a classic evolutionary search aimed at finding good gaits for 2D tentacle robots, we add a developmental process over the robots' morphologies. Within a generation, the morphology of the robots does not change. But from one generation to the next, the morphology develops. Much like we become bigger, stronger, and heavier as we age, our robots become bigger, stronger, and heavier with each passing generation. Our robots start with baby morphologies and, a few thousand generations later, end up with adult ones. We show that this produces better and qualitatively different gaits than an evolutionary search with only adult robots, and that it prevents premature convergence by fostering exploration. In addition, we validate our method on voxel lattice 3D robots from the literature and compare it to a recent evolutionary developmental approach. Our method is conceptually simple; it can be effective on small or large populations of robots, and it is intrinsic to the robot and its morphology, not the task or environment. Furthermore, by recasting the evolutionary search as a learning process, these results can be viewed in the context of developmental learning robotics.
11
Genetic-based adaptive momentum estimation for predicting mortality risk factors for COVID-19 patients using deep learning. International Journal of Imaging Systems and Technology 2022; 32:614-628. [PMID: 34518740] [PMCID: PMC8426801] [DOI: 10.1002/ima.22644]
Abstract
The mortality risk factors for coronavirus disease (COVID-19) must be predicted early, especially for severe cases, so that intensive care can be provided before patients become critically ill. This paper aims to develop an optimized convolutional neural network (CNN) for predicting mortality risk factors for COVID-19 patients. The proposed model supports two types of input data: clinical variables and computed tomography (CT) scans. The features are extracted in the optimized CNN phase and then applied to the classification phase. The CNN model's hyperparameters were optimized using a proposed genetic-based adaptive momentum estimation (GB-ADAM) algorithm. The GB-ADAM algorithm employs the genetic algorithm (GA) to optimize the Adam optimizer's configuration parameters, consequently improving the classification accuracy. The model is validated using three recent cohorts from New York, Mexico, and Wuhan, consisting of 3055, 7497, and 504 patients, respectively. The results indicated that the most significant mortality risk factors are: CD8+ T lymphocyte count, D-dimer greater than 1 µg/ml, high values of lactate dehydrogenase (LDH), C-reactive protein (CRP), hypertension, and diabetes. Early identification of these factors would help clinicians provide immediate care. The results also show that the most frequent COVID-19 signs in CT scans included ground-glass opacity (GGO), followed by the crazy-paving pattern, consolidations, and the number of lobes. Moreover, the experimental results show encouraging performance for the proposed model compared with different predictive models.
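The abstract states that a genetic algorithm tunes the Adam optimizer's configuration parameters. The sketch below shows the general shape such a loop could take: chromosomes encode (learning rate, beta1, beta2) and a GA evolves them against an objective. The objective here is a toy surrogate standing in for "train the CNN and return validation accuracy"; every name and constant is illustrative rather than taken from the GB-ADAM paper.

    import random

    def evaluate(config):
        # Placeholder objective: in GB-ADAM this would train the CNN with the
        # given Adam settings and return validation accuracy. A toy surrogate
        # peaking at common defaults is used so the sketch runs standalone.
        lr, beta1, beta2 = config
        return -((lr - 1e-3) ** 2 + (beta1 - 0.9) ** 2 + (beta2 - 0.999) ** 2)

    def random_config():
        return [10 ** random.uniform(-5, -1),
                random.uniform(0.5, 0.99),
                random.uniform(0.9, 0.9999)]

    def ga_tune_adam(pop_size=20, generations=30, mut_rate=0.2):
        pop = [random_config() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=evaluate, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
                if random.random() < mut_rate:
                    i = random.randrange(len(child))
                    child[i] *= random.uniform(0.8, 1.2)             # small perturbation
                children.append(child)
            pop = parents + children
        return max(pop, key=evaluate)

    print(ga_tune_adam())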
12
From evolutionary ecosystem simulations to computational models of human behavior. Wiley Interdisciplinary Reviews: Cognitive Science 2022; 13:e1622. [PMID: 36111832] [PMCID: PMC9786238] [DOI: 10.1002/wcs.1622]
Abstract
We have a wide breadth of computational tools available today that enable a more ethical approach to the study of human cognition and behavior. We argue that the use of computer models to study evolving ecosystems provides a rich source of inspiration, as they enable the study of complex systems that change over time. Often employing a combination of genetic algorithms and agent-based models, these methods span theoretical approaches from games to complexification, nature-inspired methods from studies of self-replication to the evolution of eyes, and evolutionary ecosystems of humans, from entire economies to the effects of personalities in teamwork. The review of works provided here illustrates the power of evolutionary ecosystem simulations and how they enable new insights for researchers. They also demonstrate a novel methodology of hypothesis exploration: building a computational model that encapsulates a hypothesis of human cognition enables it to be tested under different conditions, with its predictions compared to real data to enable corroboration. Such computational models of human behavior provide us with virtual test labs in which unlimited experiments can be performed. This article is categorized under: Computer Science and Robotics > Artificial Intelligence.
13
A double-population chaotic self-adaptive evolutionary dynamics model for the prediction of supercritical carbon dioxide solubility in polymers. Royal Society Open Science 2022; 9:211419. [PMID: 35116155] [PMCID: PMC8767190] [DOI: 10.1098/rsos.211419]
Abstract
Solubility of gas in polymers is an important physico-chemical property of foam materials and widely used in the preparation and modification of new materials. Under the conditions of high temperature and high pressure, the dissolution process is a nonlinear, non-equilibrium and dynamic process, so it is difficult to establish an accurate solubility calculation model. Inspired by particle dynamics and evolutionary algorithm, this paper proposes a hybrid model based on chaotic self-adaptive particle dynamics evolutionary algorithm (CSA-PD-EA), which can use the iterative process of particles in evolutionary algorithms at the dynamic level to simulate the mutual diffusion process of molecules during dissolution. The predicted solubility of supercritical CO2 in poly(d,l-lactide-co-glycolide), poly(l-lactide) and poly(vinyl acetate) indicated that the comprehensive prediction performance of the CSA-PD-EA model was high. The calculation error and correlation coefficient were, respectively, 0.3842 and 0.9187. The CSA-PD-EA model showed prominent advantages in accuracy, efficiency and correlation over other computational models, and its calculation time was 4.144-15.012% of that of other dynamic models. The CSA-PD-EA model has wide application prospects in the computation of physical and chemical properties and can provide the basis for the theoretical calculation of multi-scale complex systems in chemistry, materials, biology and physics.
14
Maximizing Drift Is Not Optimal for Solving OneMax. Evolutionary Computation 2021; 29:521-541. [PMID: 33480820] [DOI: 10.1162/evco_a_00290]
Abstract
It seems very intuitive that for the maximization of the OneMax problem OM(x) := \sum_{i=1}^{n} x_i the best that an elitist unary unbiased search algorithm can do is to store a best-so-far solution, and to modify it with the operator that yields the best possible expected progress in function value. This assumption has been implicitly used in several empirical works. In Doerr et al. (2020), it was formally proven that this approach is indeed almost optimal. In this work, we prove that drift maximization is not optimal. More precisely, we show that for most fitness levels between n/2 and 2n/3 the optimal mutation strengths are larger than the drift-maximizing ones. This implies that the optimal RLS is more risk-affine than the variant maximizing the stepwise expected progress. We show similar results for the mutation rates of the classic (1+1) Evolutionary Algorithm (EA) and its resampling variant, the (1+1) EA_{>0}. As a result of independent interest we show that the optimal mutation strengths, unlike the drift-maximizing ones, can be even.
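To make the notion of a drift-maximizing mutation strength concrete, the sketch below computes, for an elitist RLS that flips exactly k distinct bits, the expected progress on OneMax at a given fitness level. This is a standard hypergeometric calculation written as an illustration, not code from the paper.

    from math import comb

    def drift(n, f, k):
        # Expected progress of an elitist RLS flipping exactly k distinct bits
        # when the current OneMax value is f (so n - f bits are still zero).
        zeros = n - f
        total = 0.0
        for z in range(max(0, k - f), min(k, zeros) + 1):
            p = comb(zeros, z) * comb(f, k - z) / comb(n, k)
            gain = 2 * z - k            # zeros flipped up minus ones flipped down
            if gain > 0:                # elitism: worse offspring are discarded
                total += p * gain
        return total

    n, f = 100, 55                      # a fitness level between n/2 and 2n/3
    best_k = max(range(1, n + 1), key=lambda k: drift(n, f, k))
    print(best_k, drift(n, f, best_k))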
15
A Decomposition-Based Evolutionary Algorithm with Correlative Selection Mechanism for Many-Objective Optimization. Evolutionary Computation 2021; 29:269-304. [PMID: 33047610] [DOI: 10.1162/evco_a_00279]
Abstract
Decomposition-based evolutionary algorithms have been quite successful in dealing with multiobjective optimization problems. Recently, more and more researchers have attempted to apply the decomposition approach to many-objective optimization problems. In this article, a many-objective evolutionary algorithm based on decomposition with a correlative selection mechanism (MOEA/D-CSM) is proposed to solve many-objective optimization problems. Since MOEA/D-CSM is based on a decomposition approach that adopts penalty boundary intersection (PBI), a set of reference points must be generated in advance. Thus, a new concept related to the set of reference points is introduced first, namely, the correlation between an individual and a reference point. Thereafter, a new selection mechanism based on this correlation is designed, called the correlative selection mechanism. The correlative selection mechanism finds correlative individuals for each reference point as early as possible so that diversity among population members is maintained. However, when a reference point has two or more correlative individuals, the worse correlative individuals may be removed from the population so that the solutions keep moving toward the Pareto-optimal front. In a comprehensive experimental study, we apply MOEA/D-CSM to a number of many-objective test problems with 3 to 15 objectives and make a comparison with three state-of-the-art many-objective evolutionary algorithms, namely, NSGA-III, MOEA/D, and RVEA. Experimental results show that the proposed MOEA/D-CSM can produce competitive results on most of the problems considered in this study.
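For reference, the penalty boundary intersection (PBI) scalarising function mentioned in the abstract is commonly written as follows; this is the standard MOEA/D notation, not a formula copied from this article.

    g^{\mathrm{pbi}}(x \mid \lambda, z^{*}) = d_{1} + \theta\, d_{2},
    \qquad
    d_{1} = \frac{\lVert (F(x) - z^{*})^{\mathsf{T}} \lambda \rVert}{\lVert \lambda \rVert},
    \qquad
    d_{2} = \Bigl\lVert F(x) - \bigl(z^{*} + d_{1}\,\tfrac{\lambda}{\lVert \lambda \rVert}\bigr) \Bigr\rVert

Here F(x) is the objective vector, z^{*} the ideal point, \lambda the weight (reference) vector and \theta a penalty parameter; the correlation idea in the abstract concerns which individuals become associated with which reference vector \lambda.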
16
Infrequent Pattern Detection for Reliable Network Traffic Analysis Using Robust Evolutionary Computation. Sensors 2021; 21:s21093005. [PMID: 33922954] [PMCID: PMC8123319] [DOI: 10.3390/s21093005]
Abstract
While anomaly detection is very important in many domains, such as cybersecurity, there are many rare anomalies or infrequent patterns in cybersecurity datasets. Detection of infrequent patterns is computationally expensive. Cybersecurity datasets also consist of many features, most of them irrelevant, which lowers the classification performance of machine learning algorithms. Hence, a feature selection (FS) approach, i.e., selecting relevant features only, is an essential preprocessing step in cybersecurity data analysis. Despite the many FS approaches proposed in the literature, cooperative co-evolution (CC)-based FS approaches can be more suitable for cybersecurity data preprocessing in a Big Data scenario. Accordingly, in this paper, we have applied our previously proposed CC-based FS with random feature grouping (CCFSRFG) to a benchmark cybersecurity dataset as the preprocessing step. The dataset with the original features and the dataset with a reduced number of features were used for infrequent pattern detection. Experimental analysis was performed and evaluated using 10 unsupervised anomaly detection techniques; the proposed infrequent pattern detection is therefore termed Unsupervised Infrequent Pattern Detection (UIPD). We then compared the experimental results with and without FS in terms of true positive rate (TPR). The analysis indicates that the highest rate of TPR improvement, 385.91%, was achieved by the cluster-based local outlier factor (CBLOF) for backdoor infrequent pattern detection when using FS. Furthermore, the highest overall infrequent pattern detection TPR was improved by 61.47% for all infrequent patterns using the clustering-based multivariate Gaussian outlier score (CMGOS) with FS.
17
Inference of dynamic spatial GRN models with multi-GPU evolutionary computation. Brief Bioinform 2021; 22:6217729. [PMID: 33834216] [DOI: 10.1093/bib/bbab104]
Abstract
Reverse engineering mechanistic gene regulatory network (GRN) models with a specific dynamic spatial behavior is an inverse problem without analytical solutions in general. Instead, heuristic machine learning algorithms have been proposed to infer the structure and parameters of a system of equations able to recapitulate a given gene expression pattern. However, these algorithms are computationally intensive as they need to simulate millions of candidate models, which limits their applicability and requires high computational resources. Graphics processing unit (GPU) computing is an affordable alternative for accelerating large-scale scientific computation, yet no method is currently available to exploit GPU technology for the reverse engineering of mechanistic GRNs from spatial phenotypes. Here we present an efficient methodology to parallelize evolutionary algorithms using GPU computing for the inference of mechanistic GRNs that can develop a given gene expression pattern in a multicellular tissue area or cell culture. The proposed approach is based on multi-CPU threads running the lightweight crossover, mutation and selection operators and launching GPU kernels asynchronously. Kernels can run in parallel in a single or multiple GPUs and each kernel simulates and scores the error of a model using the thread parallelism of the GPU. We tested this methodology for the inference of spatiotemporal mechanistic gene regulatory networks (GRNs)-including topology and parameters-that can develop a given 2D gene expression pattern. The results show a 700-fold speedup with respect to a single CPU implementation. This approach can streamline the extraction of knowledge from biological and medical datasets and accelerate the automatic design of GRNs for synthetic biology applications.
18
From Prediction to Prescription: Evolutionary Optimization of Nonpharmaceutical Interventions in the COVID-19 Pandemic. IEEE Transactions on Evolutionary Computation 2021; 25:386-401. [PMID: 36694708] [PMCID: PMC8545006] [DOI: 10.1109/tevc.2021.3063217]
Abstract
Several models have been developed to predict how the COVID-19 pandemic spreads, and how it could be contained with nonpharmaceutical interventions, such as social distancing restrictions and school and business closures. This article demonstrates how evolutionary AI can be used to facilitate the next step, i.e., determining most effective intervention strategies automatically. Through evolutionary surrogate-assisted prescription, it is possible to generate a large number of candidate strategies and evaluate them with predictive models. In principle, strategies can be customized for different countries and locales, and balance the need to contain the pandemic and the need to minimize their economic impact. Early experiments suggest that workplace and school restrictions are the most important and need to be designed carefully. They also demonstrate that results of lifting restrictions can be unreliable, and suggest creative ways in which restrictions can be implemented softly, e.g., by alternating them over time. As more data becomes available, the approach can be increasingly useful in dealing with COVID-19 as well as possible future pandemics.
19
Death and Progress: How Evolvability is Influenced by Intrinsic Mortality. Artificial Life 2020; 26:90-111. [PMID: 32027531] [DOI: 10.1162/artl_a_00311]
Abstract
Many factors influence the evolvability of populations, and this article illustrates how intrinsic mortality (death induced through internal factors) in an evolving population contributes favorably to evolvability on a fixed deceptive fitness landscape. We test for evolvability using the hierarchical if-and-only-if (h-iff) function as a deceptive fitness landscape together with a steady state genetic algorithm (SSGA) with a variable mutation rate and indiscriminate intrinsic mortality rate. The mutation rate and the intrinsic mortality rate display a relationship for finding the global maximum. This relationship was also found when implementing the same deceptive fitness landscape in a spatial model consisting of an evolving population. We also compared the performance of the optimal mutation and mortality rate with a state-of-the-art evolutionary algorithm called age-fitness Pareto optimization (AFPO) and show how the two approaches traverse the h-iff landscape differently. Our results indicate that the intrinsic mortality rate and mutation rate induce random genetic drift that allows a population to efficiently traverse a deceptive fitness landscape. This article gives an overview of how intrinsic mortality influences the evolvability of a population. It thereby supports the premise that programmed death of individuals could have a beneficial effect on the evolvability of the entire population.
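The hierarchical if-and-only-if (h-iff) function used as the deceptive landscape has a compact recursive definition; a small Python version is given below as an illustration. The recursive form follows the usual Watson-style description and is an assumption, not a quotation from the article.

    def hiff(bits):
        # H-IFF fitness of a bit string whose length is a power of two:
        # every block that is entirely uniform contributes its own length,
        # and the two halves are scored recursively.
        n = len(bits)
        if n == 1:
            return 1
        left, right = bits[: n // 2], bits[n // 2 :]
        bonus = n if len(set(bits)) == 1 else 0
        return bonus + hiff(left) + hiff(right)

    print(hiff([1] * 8))                       # a global optimum (all ones)
    print(hiff([1, 0, 1, 0, 1, 0, 1, 0]))      # a deceptive low-fitness pattern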
20
A Preliminary Study of Knowledge Transfer in Multi-Classification Using Gene Expression Programming. Front Neurosci 2020; 13:1396. [PMID: 32009880] [PMCID: PMC6978847] [DOI: 10.3389/fnins.2019.01396]
Abstract
Gene Expression Programming (GEP), a variant of Genetic Programming (GP), is a well-established technique for the automatic generation of computer programs. Owing to its flexible representation, GEP has long been used as a classification algorithm for various applications. However, GEP cannot be extended to multi-classification directly, and thus it is only capable of treating an M-classification task as M separate binary classifications without considering the inter-relationship among classes. Consequently, a GEP-based multi-classifier may suffer from output conflicts among the various class labels, and such conflicts can lead to degraded performance in multi-classification. This paper employs the evolutionary multitasking optimization paradigm in an existing GEP-based multi-classification framework so as to alleviate the output conflict of each separate binary GEP classifier. To this end, several knowledge transfer strategies are implemented to enable interaction among the populations of the separate binary tasks. Experimental results on 10 high-dimensional datasets indicate that knowledge transfer among separate binary classifiers can enhance multi-classification performance within the same computational budget.
21
Abstract
Living systems are more robust, diverse, complex, and supportive of human life than any technology yet created. However, our ability to create novel lifeforms is currently limited to varying existing organisms or bioengineering organoids in vitro. Here we show a scalable pipeline for creating functional novel lifeforms: AI methods automatically design diverse candidate lifeforms in silico to perform some desired function, and transferable designs are then created using a cell-based construction toolkit to realize living systems with the predicted behaviors. Although some steps in this pipeline still require manual intervention, complete automation in future would pave the way to designing and deploying unique, bespoke living systems for a wide range of functions.
22
Computational Intelligence in Remote Sensing: An Editorial. Sensors 2020; 20:s20030633. [PMID: 31979240] [PMCID: PMC7038229] [DOI: 10.3390/s20030633]
Abstract
Computational intelligence is a very active and fruitful research area of artificial intelligence with a broad spectrum of applications. Remote sensing has been a salient field of application for computational intelligence algorithms, both for the exploitation of the data and for the research and development of new data analysis tools. In this editorial paper we provide the setting of the special issue "Computational Intelligence in Remote Sensing" and an overview of the published papers. The 11 accepted and published papers cover a wide spectrum of applications and computational tools, which we summarize and put in perspective here.
23
The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. Artificial Life 2020; 26:274-306. [PMID: 32271631] [DOI: 10.1162/artl_a_00319]
Abstract
Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and produced outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This article is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.
24
Online Signature Verification Based on a Single Template via Elastic Curve Matching. Sensors 2019; 19:s19224858. [PMID: 31703448] [PMCID: PMC6891754] [DOI: 10.3390/s19224858]
Abstract
Person verification using online handwritten signatures is one of the most widely researched behavioral biometrics. Many signature verification systems require five, ten, or even more signatures from an enrolled user to provide an accurate verification of the claimed identity. To mitigate this drawback, this paper proposes a new elastic curve matching method that uses only one reference signature, which we have named the curve similarity model (CSM). In the CSM, we give a new definition of curve similarity and its calculation method. We use evolutionary computation (EC) to search for the optimal matching between two curves under different similarity transformations, so as to obtain the similarity distance between the two curves. Referring to the geometric similarity property, curve similarity can realize translation, stretching and rotation transformations between curves, thus adapting to inconsistencies in signature size, position and rotation angle across signature curves. In the matching process of signature curves, we design a sectional optimal matching algorithm. On this basis, for each section, we develop a new consistent and discriminative fusion feature extraction for identifying the similarity of signature curves. The experimental results show that our system, assessed against multiple state-of-the-art automatic signature verifiers and on multiple datasets, achieves the same performance as systems using five samples. This suggests that our system, with a single reference signature, is capable of achieving performance similar to other systems trained with up to five signatures.
25
Metaheuristic Optimisation Algorithms for Tuning a Bioinspired Retinal Model. Sensors (Basel, Switzerland) 2019; 19:E4834. [PMID: 31698827] [PMCID: PMC6891458] [DOI: 10.3390/s19224834]
Abstract
A significant challenge in neuroscience is understanding how visual information is encoded in the retina. Such knowledge is extremely important for the purpose of designing bioinspired sensors and artificial retinal systems that will, in so far as may be possible, be capable of mimicking vertebrate retinal behaviour. In this study, we report the tuning of a reliable computational bioinspired retinal model with various algorithms to improve the mimicry of the model. Its main contribution is two-fold. First, given the multi-objective nature of the problem, an automatic multi-objective optimisation strategy is proposed through the use of four biologically based metrics, which are used to adjust the retinal model for accurate prediction of retinal ganglion cell responses. Second, a subset of population-based search heuristics, namely genetic algorithms (SPEA2, NSGA-II and NSGA-III), particle swarm optimisation (PSO) and differential evolution (DE), is explored to identify the best algorithm for fine-tuning the retinal model, by comparing performance with a hypervolume metric. Nonparametric statistical tests are used to perform a rigorous comparison between all the metaheuristics. The best results were achieved with the PSO algorithm on the basis of the largest hypervolume achieved, well-distributed elements and a high number of solutions on the Pareto front.
26
How Artificial Intelligence Can Help Us Understand Human Creativity. Front Psychol 2019; 10:1401. [PMID: 31275212] [PMCID: PMC6594218] [DOI: 10.3389/fpsyg.2019.01401]
Abstract
Recent years have been marked by important developments in artificial intelligence (AI). These developments have highlighted serious limitations in human rationality and shown that computers can be highly creative. There are also important positive outcomes for psychologists studying creativity. It is now possible to design entirely new classes of experiments that are more promising than the simple tasks typically used for studying creativity in psychology. In addition, given the current and future AI algorithms for developing new data structures and programs, novel theories of creativity are on the horizon. Thus, AI opens up entire new avenues for studying human creativity in psychology.
27
Abstract
Reservoir computing (RC) is a powerful computational paradigm that allows high versatility with cheap learning. While other artificial intelligence approaches need exhaustive resources to specify their inner workings, RC is based on a reservoir with highly nonlinear dynamics that does not require a fine tuning of its parts. These dynamics project input signals into high-dimensional spaces, where training linear readouts to extract input features is vastly simplified. Thus, inexpensive learning provides very powerful tools for decision-making, controlling dynamical systems, classification, etc. RC also facilitates solving multiple tasks in parallel, resulting in a high throughput. Existing literature focuses on applications in artificial intelligence and neuroscience. We review this literature from an evolutionary perspective. RC's versatility makes it a great candidate to solve outstanding problems in biology, which raises relevant questions. Is RC as abundant in nature as its advantages should imply? Has it evolved? Once evolved, can it be easily sustained? Under what circumstances? (In other words, is RC an evolutionarily stable computing paradigm?) To tackle these issues, we introduce a conceptual morphospace that would map computational selective pressures that could select for or against RC and other computing paradigms. This guides a speculative discussion about the questions above and allows us to propose a solid research line that brings together computation and evolution with RC as test model of the proposed hypotheses. This article is part of the theme issue 'Liquid brains, solid brains: How distributed cognitive architectures process information'.
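As a minimal illustration of the "cheap learning" the abstract refers to, the following echo state network sketch projects an input signal through a fixed random reservoir and trains only a linear readout with ridge regression on a toy delay-recall task. The reservoir size, spectral radius and task are arbitrary choices made for illustration, not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, T = 1, 100, 1000

    # Fixed random reservoir: only the linear readout is trained.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius below 1

    u = rng.uniform(-1, 1, (T, n_in))               # input signal
    y = np.roll(u[:, 0], 5)                         # toy task: recall input 5 steps back

    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W_in @ u[t] + W @ x)            # nonlinear high-dimensional projection
        states[t] = x

    # Cheap learning: ridge regression on the collected reservoir states.
    ridge = 1e-6
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
    print("training MSE:", np.mean((states @ W_out - y) ** 2))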
28
A Performance Evaluation Scheme for Multiple Object Tracking with HFSWR. Sensors 2019; 19:s19061393. [PMID: 30901870] [PMCID: PMC6471911] [DOI: 10.3390/s19061393]
Abstract
High-frequency surface wave radar (HFSWR) can detect and continuously track ship objects in real time and beyond the horizon. When ships navigate in a sea area, their motions over a time period form a scenario. The diversity and complexity of the motion scenarios make it difficult to track ships accurately, and failures such as track fragmentation (TF) are frequently observed. However, it is still unclear how and to what degree the motions of ships affect the tracking performance, and especially which motion patterns can cause tracking failures. This paper addresses this problem and attempts a first step towards an intensive quantitative performance assessment and vulnerability detection scheme for ship-tracking algorithms by proposing an evolutionary and data-mining-based approach. Low-dimensional scenarios involving multiple maneuvering ship objects are generated using a grammar-based model. Closed-loop feedback is introduced using evolutionary computation to efficiently collect scenarios that cause progressively larger tracking performance loss, which provides diversified cases for analysis with data-mining techniques to discover indicators of tracking vulnerability. Results on different tracking algorithms show that more cluster and convergence patterns, and a longer duration of convoy and cluster patterns in the scenarios, cause more severe TF in HFSWR ship tracking.
29
Prediction of drug synergy score using ensemble based differential evolution. IET Syst Biol 2019; 13:24-29. [PMID: 30774113] [PMCID: PMC8687263] [DOI: 10.1049/iet-syb.2018.5023]
Abstract
Prediction of the drug synergy score is an ill-posed problem. It plays an important role in the medical field for inhibiting specific cancer agents. An efficient regression-based machine learning technique has the ability to minimise drug synergy prediction errors. Therefore, in this study, an efficient machine learning technique for drug synergy prediction is designed by using ensemble-based differential evolution (DE) for optimising the support vector machine (SVM), because tuning the attributes of the SVM kernel regulates the prediction precision. The ensemble-based DE employs two trial vector generation techniques and two control attribute settings: the first generation technique incorporates the best solution, while the other does not. The proposed technique and existing competitive machine learning techniques are applied to drug synergy data. The extensive analysis demonstrates that the proposed technique outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
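The abstract couples differential evolution with SVM attribute tuning. The sketch below shows a plain DE/rand/1 loop tuning an RBF-kernel support vector regressor's C, gamma and epsilon by cross-validated RMSE; it assumes scikit-learn is available, uses a synthetic dataset purely for illustration, and omits the paper's ensemble of trial-vector strategies and control settings.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

    def rmse(params):
        # Cross-validated RMSE of an RBF SVR for (log10 C, log10 gamma, epsilon).
        C, gamma, eps = 10 ** params[0], 10 ** params[1], abs(params[2])
        scores = cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps), X, y,
                                 cv=3, scoring="neg_mean_squared_error")
        return float(np.sqrt(-scores.mean()))

    def de(bounds, pop_size=15, gens=20, F=0.5, CR=0.9, seed=1):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
        fit = np.array([rmse(p) for p in pop])
        for _ in range(gens):
            for i in range(pop_size):
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 trial vector
                cross = rng.random(len(bounds)) < CR
                trial = np.where(cross, mutant, pop[i])
                f = rmse(trial)
                if f < fit[i]:                                 # greedy selection
                    pop[i], fit[i] = trial, f
        return pop[fit.argmin()], fit.min()

    print(de([(-2, 3), (-4, 1), (0.01, 1.0)]))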
30
No Strategy Can Win in the Repeated Prisoner's Dilemma: Linking Game Theory and Computer Simulations. Front Robot AI 2018; 5:102. [PMID: 33500981] [PMCID: PMC7805755] [DOI: 10.3389/frobt.2018.00102]
Abstract
Computer simulations are regularly used for studying the evolution of strategies in repeated games. These simulations rarely pay attention to game-theoretical results that can illuminate the data analysis or the questions being asked. Results from evolutionary game theory imply that for every Nash equilibrium, there are sequences of mutants that would destabilize it. If strategies are not limited to a finite set, populations move between a variety of Nash equilibria with different levels of cooperation. This instability is inescapable, regardless of how strategies are represented. We present algorithms that show that simulations do agree with the theory. This implies that cognition itself may have only a limited impact on the cycling dynamics. We argue that the role of mutations or exploration is more important in determining levels of cooperation.
31
Optimization of Deep Neural Networks Using SoCs with OpenCL. Sensors 2018; 18:s18051384. [PMID: 29710875] [PMCID: PMC5982427] [DOI: 10.3390/s18051384]
Abstract
In the optimization of deep neural networks (DNNs) via evolutionary algorithms (EAs) and the implementation of the training necessary for the creation of the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions implemented on general-purpose processors tend to be slow because they do not take advantage of the inherent parallelism of these devices, whereas hardware realizations based on heterogeneous platforms (combining central processing units (CPUs), graphics processing units (GPUs) and/or field-programmable gate arrays (FPGAs)) are designed based on different solutions using methodologies supported by different languages and using very different implementation criteria. This paper first presents a study that demonstrates the need for a heterogeneous (CPU-GPU-FPGA) platform to accelerate the optimization of artificial neural networks (ANNs) using genetic algorithms. Second, the paper presents implementations of the calculations related to the individuals evaluated in such an algorithm on different (CPU- and FPGA-based) platforms, but with the same source files written in OpenCL. The implementation of individuals on remote, low-cost FPGA systems on a chip (SoCs) is found to enable the achievement of good efficiency in terms of performance per watt.
32
Computational Intelligence-Assisted Understanding of Nature-Inspired Superhydrophobic Behavior. Advanced Science (Weinheim, Baden-Wurttemberg, Germany) 2018; 5:1700520. [PMID: 29375975] [PMCID: PMC5770681] [DOI: 10.1002/advs.201700520]
Abstract
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials.
Collapse
|
33
|
A New Framework for Analysis of Coevolutionary Systems-Directed Graph Representation and Random Walks. EVOLUTIONARY COMPUTATION 2017; 27:195-228. [PMID: 29155606 DOI: 10.1162/evco_a_00218] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Studying coevolutionary systems in the context of simplified models (i.e., games with pairwise interactions between coevolving solutions modeled as self plays) remains an open challenge since the rich underlying structures associated with pairwise-comparison-based fitness measures are often not taken fully into account. Although cyclic dynamics have been demonstrated in several contexts (such as intransitivity in coevolutionary problems), there is no complete characterization of cycle structures and their effects on coevolutionary search. We develop a new framework to address this issue. At the core of our approach is the directed graph (digraph) representation of coevolutionary problems that fully captures structures in the relations between candidate solutions. Coevolutionary processes are modeled as a specific type of Markov chain: random walks on digraphs. Using this framework, we show that coevolutionary problems admit a qualitative characterization: a coevolutionary problem is either solvable (there is a subset of solutions that dominates the remaining candidate solutions) or not. This has implications for coevolutionary search. We further develop our framework to provide the means to construct quantitative tools for the analysis of coevolutionary processes and demonstrate their applications through case studies. We show that coevolution on solvable problems corresponds to an absorbing Markov chain for which we can compute the expected hitting time of the absorbing class. Otherwise, coevolution will cycle indefinitely and the quantity of interest will be the limiting invariant distribution of the Markov chain. We also provide an index for characterizing complexity in coevolutionary problems and show how problems of controlled complexity can be generated.
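The expected hitting time mentioned above follows from standard absorbing-Markov-chain machinery; the sketch below (with an assumed toy transition matrix, not one taken from the paper) shows the computation via the fundamental matrix.

# Sketch (assumed toy transition matrix): expected time for a random walk on a
# digraph of candidate solutions to reach the absorbing (dominant) class,
# computed via the fundamental matrix N = (I - Q)^{-1}.
import numpy as np

# States 0-2 are transient, state 3 is absorbing (the dominant solution set).
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                       # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
hitting_times = N @ np.ones(3)      # expected steps to absorption per start state
print(hitting_times)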
Collapse
|
34
|
PERSPECTIVE: COMPLEX ADAPTATIONS AND THE EVOLUTION OF EVOLVABILITY. Evolution 2017; 50:967-976. [PMID: 28565291 DOI: 10.1111/j.1558-5646.1996.tb02339.x] [Citation(s) in RCA: 889] [Impact Index Per Article: 127.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/1995] [Accepted: 08/09/1995] [Indexed: 11/29/2022]
Abstract
The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally effective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess "evolvability," i.e., the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variations, which are the actually realized differences between individuals. The genotype-phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility, developmental dissociability, and morphological integration. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype-phenotype map which allows improvement by mutation and selection? Is the genotype-phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype-phenotype map? We propose that the genotype-phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype-phenotype map in which there are few pleiotropic effects among characters serving different functions, with pleiotropic effects falling mainly among characters that are part of a single functional complex. Such a design is expected to improve evolvability by limiting the interference between the adaptation of different functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insufficient to assess the plausibility of these models, they form the beginning of a framework for understanding the evolution of the genotype-phenotype map.
Collapse
|
35
|
Evolutionary Design of Classifiers Made of Droplets Containing a Nonlinear Chemical Medium. EVOLUTIONARY COMPUTATION 2016; 25:643-671. [PMID: 27728772 DOI: 10.1162/evco_a_00197] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Unconventional computing devices operating on nonlinear chemical media offer an interesting alternative to standard, semiconductor-based computers. In this work we study in-silico a chemical medium composed of communicating droplets that functions as a database classifier. The droplet network can be "programmed" by an externally provided illumination pattern. The complex relationship between the illumination pattern and the droplet behavior makes manual programming hard. We introduce an evolutionary algorithm that automatically finds the optimal illumination pattern for a given classification problem. Notably, our approach does not require us to prespecify the signals that represent the output classes of the classification problem, which is achieved by using a fitness function that measures the mutual information between chemical oscillation patterns and desired output classes. We illustrate the feasibility of our approach in computer simulations by evolving droplet classifiers for three machine learning datasets. We demonstrate that the same medium composed of 25 droplets located on a square lattice can be successfully used for different classification tasks by applying different illumination patterns as its externally supplied program.
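A minimal sketch of the kind of mutual-information fitness described above, computed from empirical joint frequencies of discretized oscillation patterns and target classes (the data here is hypothetical):

# Sketch of a mutual-information fitness between observed (discretized) droplet
# oscillation patterns and desired output classes; the data is hypothetical.
import numpy as np

def mutual_information(patterns, classes):
    """I(pattern; class) in bits, from empirical joint frequencies."""
    pats, pi = np.unique(patterns, return_inverse=True)
    clas, ci = np.unique(classes, return_inverse=True)
    joint = np.zeros((len(pats), len(clas)))
    for p, c in zip(pi, ci):
        joint[p, c] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

patterns = ["A", "A", "B", "B", "A", "C"]   # per-sample oscillation signatures
classes  = [ 0,   0,   1,   1,   0,   1 ]   # desired output classes
print(mutual_information(patterns, classes))   # higher means a fitter pattern

An evolutionary algorithm would then score each candidate illumination pattern by simulating the droplet network, discretizing its oscillations, and maximizing this quantity.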
Collapse
|
36
|
Introducing Elitist Black-Box Models: When Does Elitist Behavior Weaken the Performance of Evolutionary Algorithms? EVOLUTIONARY COMPUTATION 2016; 25:587-606. [PMID: 27700278 DOI: 10.1162/evco_a_00195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than that of the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class F can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize F).
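For illustration only, a (1+1) EA with truncation selection on OneMax is one of the simplest algorithms captured by the elitist black-box model, since every decision is based on the relative rank of the single best point kept in memory; the benchmark and parameters below are assumptions, not taken from the paper.

# Sketch of an algorithm covered by the elitist black-box model: a (1+1) EA
# that keeps only the single best search point and accepts purely by rank.
import random

n = 50
def onemax(x):            # example benchmark, not tied to the paper
    return sum(x)

x = [random.randint(0, 1) for _ in range(n)]
for step in range(20000):
    y = [b ^ (random.random() < 1.0 / n) for b in x]   # standard bit mutation
    if onemax(y) >= onemax(x):                          # truncation (elitist) selection
        x = y
    if onemax(x) == n:
        break
print(step, onemax(x))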
Collapse
|
37
|
Application of an evolutionary algorithm in the optimal design of micro-sensor. Biomed Mater Eng 2016; 26 Suppl 1:S1711-9. [PMID: 26405938 DOI: 10.3233/bme-151471] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This paper introduces an automatic bond graph design method based on genetic programming for the evolutionary design of micro-resonators. First, the system-level behavioral model, based on genetic programming and bond graphs, is discussed. Then, the geometry parameters of the components are automatically optimized using a genetic algorithm with constraints. To illustrate this approach, a typical device, a micro-resonator, is designed as a biomedical example. This paper provides a new idea for the automatic design optimization of biomedical sensors through evolutionary computation.
Collapse
|
38
|
An Evolutionary Computation Approach to Examine Functional Brain Plasticity. Front Neurosci 2016; 10:146. [PMID: 27092047 PMCID: PMC4820463 DOI: 10.3389/fnins.2016.00146] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2015] [Accepted: 03/21/2016] [Indexed: 01/13/2023] Open
Abstract
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels; therefore, functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair, simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength of the functional relationship between the DMN and ECN for TBI subjects, which is consistent with prior findings in the TBI literature. The EC approach also allowed us to separate sub-regional-pairs contributing to positive and negative plasticity; the detected sub-regional-pairs significantly overlap across runs, thus highlighting the reliability of the EC approach. These sub-regional-pairs may be useful in performing nuanced analyses of brain-behavior relationships during recovery from TBI.
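A rough sketch of the quantity such an EC search would optimize, under the assumption (ours, for illustration) that plasticity is measured as the change in correlation between the mean time courses of candidate voxel subsets across the two sessions; all data and names below are synthetic.

# Sketch of the plasticity fitness an EC search could maximize: the change in
# correlation between mean time courses of candidate voxel subsets from the
# two ROIs across two sessions. The data and masks here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
T, V1, V2 = 120, 40, 55                       # time points, voxels per ROI
sess1_roi1, sess1_roi2 = rng.normal(size=(T, V1)), rng.normal(size=(T, V2))
sess2_roi1, sess2_roi2 = rng.normal(size=(T, V1)), rng.normal(size=(T, V2))

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def plasticity(mask1, mask2):
    """Gain (or loss, if negative) of functional coupling across sessions."""
    r1 = corr(sess1_roi1[:, mask1].mean(axis=1), sess1_roi2[:, mask2].mean(axis=1))
    r2 = corr(sess2_roi1[:, mask1].mean(axis=1), sess2_roi2[:, mask2].mean(axis=1))
    return r2 - r1

m1 = rng.random(V1) < 0.3                     # candidate sub-regions (binary masks)
m2 = rng.random(V2) < 0.3
print(plasticity(m1, m2))                     # an EC loop would evolve m1 and m2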
Collapse
|
39
|
Abstract
This paper presents a parallel simulated annealing algorithm that is able to achieve 90% parallel efficiency in iteration on up to 192 processors and up to 40% parallel efficiency in time when applied to a 5000-dimension Rastrigin function. Our algorithm breaks scalability barriers in the method of Chu et al. (1999) by abandoning adaptive cooling based on variance. The resulting gains in parallel efficiency are much larger than the loss of serial efficiency from lack of adaptive cooling. Our algorithm resamples the states across processors periodically. The resampling interval is tuned according to the success rate for each specific number of processors. We further present an adaptive method to determine the resampling interval based on the adoption rate. This adaptive method is able to achieve nearly identical parallel efficiency but higher success rates compared to the fixed interval one using the best interval found.
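A minimal serial stand-in (our assumption of the scheme, not the authors' code) for the resampling idea: several annealing chains on the Rastrigin function exchange states every fixed number of iterations, with better states more likely to be copied.

# Sketch (serial stand-in for the parallel version): several SA chains on the
# Rastrigin function whose states are periodically resampled across chains,
# at a fixed resampling interval, favoring lower-energy states.
import numpy as np

rng = np.random.default_rng(2)
DIM, CHAINS, INTERVAL = 20, 8, 50

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

states = rng.uniform(-5.12, 5.12, size=(CHAINS, DIM))
energies = np.array([rastrigin(s) for s in states])
temp = 5.0
for it in range(1, 5001):
    for c in range(CHAINS):                     # one Metropolis step per chain
        cand = states[c] + rng.normal(0, 0.3, DIM)
        e = rastrigin(cand)
        if e < energies[c] or rng.random() < np.exp((energies[c] - e) / temp):
            states[c], energies[c] = cand, e
    if it % INTERVAL == 0:                      # resample states across chains
        probs = np.exp(-(energies - energies.min()) / temp)
        probs /= probs.sum()
        picks = rng.choice(CHAINS, size=CHAINS, p=probs)
        states, energies = states[picks].copy(), energies[picks].copy()
    temp *= 0.999                               # fixed geometric cooling
print(energies.min())

In the parallel version each chain would run on its own processor and the resampling step would be the only communication, which is what keeps the parallel efficiency high.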
Collapse
|
40
|
Abstract
Automata chemistries are good vehicles for experimentation in open-ended evolution, but they are by necessity complex systems whose low-level properties require careful design. To aid the process of designing automata chemistries, we develop an abstract model that classifies the features of a chemistry from a physical (bottom up) perspective and from a biological (top down) perspective. There are two levels: things that can evolve, and things that cannot. We equate the evolving level with biology and the non-evolving level with physics. We design our initial organisms in the biology, so they can evolve. We design the physics to facilitate evolvable biologies. This architecture leads to a set of design principles that should be observed when creating an instantiation of the architecture. These principles are Everything Evolves, Everything's Soft, and Everything Dies. To evaluate these ideas, we present experiments in the recently developed Stringmol automata chemistry. We examine the properties of Stringmol with respect to the principles, and so demonstrate the usefulness of the principles in designing automata chemistries.
Collapse
|
41
|
In silico discovery of significant pathways in colorectal cancer metastasis using a two-stage optimisation approach. IET Syst Biol 2015; 9:294-302. [PMID: 26577164 PMCID: PMC8687187 DOI: 10.1049/iet-syb.2015.0031] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 06/30/2015] [Accepted: 07/08/2015] [Indexed: 11/19/2022] Open
Abstract
Accurate and reliable modelling of protein-protein interaction networks for complex diseases such as colorectal cancer can help better understand the mechanism of diseases and potentially discover new drugs. Different machine learning methods, such as empirical mode decomposition combined with least square support vector machine, and discrete Fourier transform, have been widely utilised as classifiers and for automatic discovery of biomarkers for the diagnosis of the disease. The existing methods are, however, less efficient as they tend to ignore interaction with the classifier. In this study, the authors propose a two-stage optimisation approach to effectively select biomarkers and discover interactions among them. At the first stage, particle swarm optimisation (PSO) and differential evolution (DE) are used to optimise the parameters of the support vector machine recursive feature elimination (SVM-RFE) algorithm, and a dynamic Bayesian network is then used to predict temporal relationships between biomarkers across two time points. Results show that the 18 and 25 biomarkers selected by the PSO- and DE-based approaches, respectively, yield the same accuracy of 97.3% and F1-scores of 97.7% and 97.6%, respectively. The stratified analysis reveals that Alpha-2-HS-glycoprotein was a dominant hub gene with multiple interactions to other genes, including Fibrinogen alpha chain, which is also a potential biomarker for colorectal cancer.
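A hedged sketch of the first stage on synthetic data: SciPy's differential evolution tuning the SVM regularization constant and the number of biomarkers retained by SVM-RFE, scored by cross-validation. The dataset, bounds, and budget are placeholders, not the study's settings.

# Sketch of the first-stage idea: differential evolution tuning an SVM-RFE
# pipeline (C and the number of retained features), scored by cross-validation
# on synthetic data standing in for the expression matrix.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=60, n_informative=10,
                           random_state=0)

def neg_cv_accuracy(params):
    log_c, n_feat = params
    est = SVC(kernel="linear", C=10.0 ** log_c)
    selector = RFE(est, n_features_to_select=max(1, int(round(n_feat))))
    score = cross_val_score(selector, X, y, cv=3).mean()
    return -score                                  # DE minimizes

result = differential_evolution(neg_cv_accuracy, bounds=[(-3, 3), (2, 30)],
                                maxiter=10, popsize=8, seed=0, tol=1e-3,
                                polish=False)
print("best CV accuracy:", -result.fun, "params:", result.x)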
Collapse
|
42
|
Studying Collective Human Decision Making and Creativity with Evolutionary Computation. ARTIFICIAL LIFE 2015; 21:379-393. [PMID: 26280078 DOI: 10.1162/artl_a_00178] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational, and social scientists at Binghamton University. We redefined collective human decision making and creativity as the evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making, using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways: (1) as a theoretical framework for reinterpreting the dynamics of idea generation and selection, (2) as a computational simulation model of collective human decision-making processes, and (3) as a research tool for collecting high-resolution experimental data on actual collaborative design and decision making from human subjects. We believe our work demonstrates the untapped potential of EC for interdisciplinary research involving human and social dynamics.
Collapse
|
43
|
Cost-effective targeting of conservation investments to reduce the northern Gulf of Mexico hypoxic zone. Proc Natl Acad Sci U S A 2014; 111:18530-5. [PMID: 25512489 PMCID: PMC4284528 DOI: 10.1073/pnas.1405837111] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
A seasonally occurring summer hypoxic (low oxygen) zone in the northern Gulf of Mexico is the second largest in the world. Reductions in nutrients from agricultural cropland in its watershed are needed to reduce the hypoxic zone size to the national policy goal of 5,000 km² (as a 5-y running average) set by the national Gulf of Mexico Task Force's Action Plan. We develop an integrated assessment model linking the water quality effects of cropland conservation investment decisions on the more than 550 agricultural subwatersheds that deliver nutrients into the Gulf with a hypoxic zone model. We use this integrated assessment model to identify the most cost-effective subwatersheds to target for cropland conservation investments. We consider targeting of the location (which subwatersheds to treat) and the extent of conservation investment to undertake (how much cropland within a subwatershed to treat). We use process models to simulate the dynamics of the effects of cropland conservation investments on nutrient delivery to the Gulf and use an evolutionary algorithm to solve the optimization problem. Model results suggest that by targeting cropland conservation investments to the most cost-effective location and extent of coverage, the Action Plan goal of 5,000 km² can be achieved at a cost of $2.7 billion annually. A large set of cost-hypoxia tradeoffs is developed, ranging from the baseline to the nontargeted adoption of the most aggressive cropland conservation investments in all subwatersheds (estimated to reduce the hypoxic zone to less than 3,000 km² at a cost of $5.6 billion annually).
Collapse
|
44
|
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture. JOURNAL OF CHEMICAL TECHNOLOGY AND BIOTECHNOLOGY (OXFORD, OXFORDSHIRE : 1986) 2014; 89:1481-1490. [PMID: 25506115 PMCID: PMC4258073 DOI: 10.1002/jctb.4267] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 11/20/2013] [Indexed: 06/04/2023]
Abstract
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
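A compact sketch of the evaluation pattern the results describe: an elitist genetic algorithm over discrete column-sizing choices, with intelligent repair of infeasible individuals and a fitness averaged over a small number of Monte Carlo trials. The feasibility rule and cost model below are hypothetical stand-ins for the detailed manufacturing cost model.

# Sketch: elitist GA over discrete column-diameter choices per chromatography
# step, with repair of infeasible solutions and fitness averaged over a few
# Monte Carlo trials. The cost/feasibility model is a hypothetical stand-in.
import random

DIAMETERS = [30, 45, 60, 80, 100]       # candidate column diameters (cm)
STEPS, MC_TRIALS = 3, 5

def repair(ind):
    """Enforce a simple feasibility rule: diameters must not shrink downstream."""
    for i in range(1, STEPS):
        if ind[i] < ind[i - 1]:
            ind[i] = ind[i - 1]
    return ind

def cost(ind):
    """Average cost over a few stochastic (titre-uncertainty) trials."""
    total = 0.0
    for _ in range(MC_TRIALS):
        titre = random.gauss(5.0, 0.5)                  # uncertain feed titre
        total += sum(d * d * 0.01 for d in ind) + 50.0 / max(titre, 0.1)
    return total / MC_TRIALS

pop = [repair([random.choice(DIAMETERS) for _ in range(STEPS)]) for _ in range(20)]
for gen in range(40):
    pop.sort(key=cost)
    elite = pop[:5]                                     # elitism
    children = []
    while len(children) < 15:
        a, b = random.sample(elite, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
        if random.random() < 0.3:
            child[random.randrange(STEPS)] = random.choice(DIAMETERS)
        children.append(repair(child))
    pop = elite + children
print(min(pop, key=cost), cost(min(pop, key=cost)))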
Collapse
|
45
|
Abstract
A controller of a biological or artificial organism (e.g., in bio-inspired cellular robots) consists of a number of processes that drive its dynamics. For a system of processes to perform as a successful controller, several properties are desirable. One of the desirable properties of such a system is the capability of generating sufficiently diverse patterns of outputs and behaviors. A system with such a capability is potentially adaptable to perform complicated tasks with proper parameterizations and may successfully reach the solution space of behaviors from the point of view of search and evolutionary algorithms. This article aims to take an early step toward exploring this capability at the levels of individuals and populations by introducing measures of diversity generation and by evaluating the influence of different types of processes on diversity generation. A reaction-diffusion-based controller called the artificial homeostatic hormone system (AHHS) is studied as a system consisting of different processes with various domains of functioning (e.g., internal or external to the control unit). Various combinations of these processes are investigated in terms of diversity generation at the levels of both individuals and populations, and the effects of the processes are discussed, representing their different influences. A case study of evolving a multimodular AHHS controller with all the various process combinations is also investigated, illustrating the relevance of the diversity generation measures to practical scenarios.
Collapse
|
46
|
Human-interpretable feature pattern classification system using learning classifier systems. EVOLUTIONARY COMPUTATION 2014; 22:629-650. [PMID: 24697596 DOI: 10.1162/evco_a_00127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Image pattern classification is a challenging task due to the large search space of pixel data. Supervised and subsymbolic approaches have proven accurate in learning a problem's classes. However, in the complex image recognition domain, there is a need to investigate learning techniques that allow humans to interpret the learned rules in order to gain insight about the problem. Learning classifier systems (LCSs) are a machine learning technique that has been minimally explored for image classification. This work has developed the feature pattern classification system (FPCS) framework by adopting Haar-like features from the image recognition domain for feature extraction. The FPCS integrates Haar-like features with XCS, which is an accuracy-based LCS. A major contribution of this work is that the developed framework is capable of producing human-interpretable rules. The FPCS system achieved 91 ± 1% accuracy on the unseen test set of the MNIST dataset. In addition, the FPCS is capable of autonomously adjusting the rotation angle in unaligned images. This rotation adjustment raised the accuracy of FPCS to 95%. Although the performance is competitive with equivalent approaches, it was not as accurate as subsymbolic approaches on this dataset. However, the interpretability of the rules produced by FPCS enabled us to identify the distribution of the learned angles (a normal distribution around [Formula: see text]), which would have been very difficult with subsymbolic approaches. The analyzable nature of FPCS is anticipated to be beneficial in domains such as speed sign recognition, where the underlying reasoning and confidence of recognition need to be human interpretable.
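For context, a short sketch of the Haar-like feature extraction that FPCS adopts: an integral image makes any rectangle sum a constant-time lookup, and a two-rectangle feature is the difference of two adjacent sums. The random image below stands in for an MNIST digit.

# Sketch of Haar-like feature extraction via an integral image; the image here
# is random, standing in for a 28x28 MNIST digit.
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((28, 28))

ii = img.cumsum(axis=0).cumsum(axis=1)           # integral image

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] recovered from the integral image."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0:
        total -= ii[r - 1, c + w - 1]
    if c > 0:
        total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """Left half minus right half: an edge-like Haar feature."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

print(two_rect_feature(ii, 4, 4, 8, 12))
print(np.isclose(rect_sum(ii, 4, 4, 8, 12), img[4:12, 4:16].sum()))  # sanity check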
Collapse
|
47
|
Abstract
The paper explores the use of evolutionary techniques in dealing with the image segmentation problem. An image is modeled as a weighted undirected graph, where nodes correspond to pixels, and edges connect similar pixels. A genetic algorithm that uses a fitness function based on an extension of the normalized cut criterion is proposed. The algorithm employs the locus-based representation of individuals, which allows for the partitioning of images without setting the number of segments beforehand. A new concept of nearest neighbor that takes into account not only the spatial location of a pixel, but also the affinity with the other pixels contained in the neighborhood, is also defined. Experimental results show that our approach is able to segment images in a number of regions that conform well to human visual perception. The visual perceptiveness is substantiated by objective evaluation methods based on uniformity of pixels inside a region, and comparison with ground-truth segmentations available for part of the used test images.
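A small sketch of how the locus-based representation decodes into segments, which is what removes the need to fix the number of segments in advance: each pixel's gene points to one neighboring pixel, and the connected components of the resulting links are the segments. The random genome below stands in for an evolved individual.

# Sketch of decoding a locus-based individual: each pixel (gene) points to one
# neighbouring pixel, and connected components of those links are the segments,
# so the segment count emerges freely rather than being preset.
import numpy as np

rng = np.random.default_rng(4)
H, W = 4, 5
# genome[i] = flat index of the neighbour that pixel i links to (or itself)
genome = np.arange(H * W)
for i in range(H * W):
    r, c = divmod(i, W)
    nbrs = [i]
    if r > 0: nbrs.append(i - W)
    if r < H - 1: nbrs.append(i + W)
    if c > 0: nbrs.append(i - 1)
    if c < W - 1: nbrs.append(i + 1)
    genome[i] = rng.choice(nbrs)          # a GA would evolve these choices

def decode(genome):
    """Union-find over pixel->neighbour links; returns a segment label per pixel."""
    parent = list(range(len(genome)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in enumerate(genome):
        parent[find(i)] = find(int(j))
    labels = [find(i) for i in range(len(genome))]
    relabel = {p: k for k, p in enumerate(dict.fromkeys(labels))}
    return np.array([relabel[l] for l in labels]).reshape(H, W)

print(decode(genome))    # segment map; a normalized-cut-style fitness would score it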
Collapse
|
48
|
Assessment of uncertainty in functional-structural plant models. ANNALS OF BOTANY 2011; 108:1043-53. [PMID: 21593061 PMCID: PMC3189835 DOI: 10.1093/aob/mcr110] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/24/2010] [Accepted: 03/10/2011] [Indexed: 05/21/2023]
Abstract
BACKGROUND AND AIMS Constructing functional-structural plant models (FSPMs) is a valuable method for examining how physiology and morphology interact in determining plant processes. However, such models always have uncertainty concerned with whether model components have been selected and represented effectively, with the number of model outputs simulated and with the quality of data used in assessment. We provide a procedure for defining uncertainty of an FSPM and how this uncertainty can be reduced. METHODS An important characteristic of FSPMs is that typically they calculate many variables. These can be variables that the model is designed to predict and also variables that give indications of how the model functions. Together these variables are used as criteria in a method of multi-criteria assessment. Expected ranges are defined and an evolutionary computation algorithm searches for model parameters that achieve criteria within these ranges. Typically, different combinations of model parameter values provide solutions achieving different combinations of variables within their specified ranges. We show how these solutions define a Pareto Frontier that can inform about the functioning of the model. KEY RESULTS The method of multi-criteria assessment is applied to development of BRANCHPRO, an FSPM for foliage reiteration on old-growth branches of Pseudotsuga menziesii. A geometric model utilizing probabilities for bud growth is developed into a causal explanation for the pattern of reiteration found on these branches and how this pattern may contribute to the longevity of this species. CONCLUSIONS FSPMs should be assessed by their ability to simulate multiple criteria simultaneously. When different combinations of parameter values achieve different groups of assessment criteria effectively a Pareto Frontier can be calculated and used to define the sources of model uncertainty.
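A brief sketch of the Pareto-frontier step: given candidate parameter sets scored on several assessment criteria (synthetic values here, all to be minimized), the non-dominated ones form the frontier used to reason about model uncertainty.

# Sketch of the Pareto-frontier computation used in multi-criteria assessment:
# keep only parameter sets not dominated by any other. Scores are synthetic.
import numpy as np

rng = np.random.default_rng(5)
scores = rng.random((50, 3))     # 50 parameter sets, 3 assessment criteria

def pareto_front(scores):
    """Indices of solutions not dominated by any other (minimization)."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores <= s, axis=1) &
                           np.any(scores < s, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
print(len(front), "non-dominated parameter sets out of", len(scores))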
Collapse
|
49
|
Prediction of R5, X4, and R5X4 HIV-1 coreceptor usage with evolved neural networks. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2008; 5:291-300. [PMID: 18451438 PMCID: PMC3523352 DOI: 10.1109/tcbb.2007.1074] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
The HIV-1 genome is highly heterogeneous. This variation affords the virus a wide range of molecular properties, including the ability to infect cell types, such as macrophages and lymphocytes, expressing different chemokine receptors on the cell surface. In particular, R5 HIV-1 viruses use CCR5 as the co-receptor for viral entry, X4 viruses use CXCR4, whereas some viral strains, known as R5X4 or D-tropic, have the ability to utilize both co-receptors. X4 and R5X4 viruses are associated with rapid disease progression to AIDS. R5X4 viruses differ in that they have yet to be reliably identified from examination of the HIV-1 genetic sequence alone. In this study, a series of experiments was performed to evaluate different strategies of feature selection and neural network optimization. We demonstrate the use of artificial neural networks trained via evolutionary computation to predict viral co-receptor usage. The results indicate identification of R5X4 viruses with a predictive accuracy of 75.5%.
Collapse
|
50
|
Artificial intelligence in sports biomechanics: new dawn or false hope? J Sports Sci Med 2006; 5:474-479. [PMID: 24357939 PMCID: PMC3861744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This article reviews developments in the use of Artificial Intelligence (AI) in sports biomechanics over the last decade. It outlines possible uses of Expert Systems as diagnostic tools for evaluating faults in sports movements ('techniques') and presents some example knowledge rules for such an expert system. It then compares the analysis of sports techniques, in which Expert Systems have found little place to date, with gait analysis, in which they are routinely used. Consideration is then given to the use of Artificial Neural Networks (ANNs) in sports biomechanics, focusing on Kohonen self-organizing maps, which have been the most widely used in technique analysis, and multi-layer networks, which have been far more widely used in biomechanics in general. Examples of the use of ANNs in sports biomechanics are presented for javelin and discus throwing, shot putting and football kicking. I also present an example of the use of Evolutionary Computation in movement optimization in the soccer throw-in, which predicted an optimal technique close to that in the coaching literature. After briefly overviewing the use of AI in both sports science and biomechanics in general, the article concludes with some speculations about future uses of AI in sports biomechanics. Key Points: Expert Systems remain almost unused in sports biomechanics, unlike in the similar discipline of gait analysis. Artificial Neural Networks, particularly Kohonen Maps, have been used, although their full value remains unclear. Other AI applications, including Evolutionary Computation, have received little attention.
Collapse
|