1. Gandomi AH. Interior search algorithm (ISA): a novel approach for global optimization. ISA Transactions 2014;53:1168-83. PMID: 24785823. DOI: 10.1016/j.isatra.2014.03.018.
Abstract
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.
2. Fausto F, Cuevas E, Valdivia A, González A. A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 2017;160:39-55. PMID: 28847742. DOI: 10.1016/j.biosystems.2017.07.010.
Abstract
In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either a prey or a predator, each individual is conducted by a set of unique evolutionary operators inspired by such prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions, commonly considered within the literature of evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against those of the other compared methods, and as such SHO is proven to be an excellent alternative to solve global optimization problems.
3. Jiang J, Fan JA. Global Optimization of Dielectric Metasurfaces Using a Physics-Driven Neural Network. Nano Letters 2019;19:5366-5372. PMID: 31294997. DOI: 10.1021/acs.nanolett.9b01857.
Abstract
We present a global optimizer, based on a conditional generative neural network, which can output ensembles of highly efficient topology-optimized metasurfaces operating across a range of parameters. A key feature of the network is that it initially generates a distribution of devices that broadly samples the design space and then shifts and refines this distribution toward favorable design space regions over the course of optimization. Training is performed by calculating the forward and adjoint electromagnetic simulations of outputted devices and using the subsequent efficiency gradients for backpropagation. With metagratings operating across a range of wavelengths and angles as a model system, we show that devices produced from the trained generative network have efficiencies comparable to or better than the best devices produced by adjoint-based topology optimization, while requiring less computational cost. Our reframing of adjoint-based optimization to the training of a generative neural network applies generally to physical systems that can utilize gradients to improve performance.
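The training loop described above — forward and adjoint simulations of the generated devices feeding efficiency gradients back into the generator — follows a general pattern that can be sketched outside any specific electromagnetic solver. The snippet below is not the authors' code: it assumes PyTorch, uses a toy fully connected generator, and replaces the adjoint solver with a placeholder that returns random gradients.

```python
import torch
import torch.nn as nn

# Toy conditional generator: maps (wavelength, angle, latent noise) to a device layout.
gen = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 256), nn.Tanh())
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

def adjoint_efficiency_gradient(devices):
    # Placeholder for the forward + adjoint electromagnetic simulations;
    # it should return d(efficiency)/d(device) with the same shape as `devices`.
    return torch.randn_like(devices)

for _ in range(100):
    z = torch.randn(16, 10)                        # operating conditions + latent noise
    devices = gen(z)                               # batch of candidate layouts
    grads = adjoint_efficiency_gradient(devices.detach())
    opt.zero_grad()
    # Backpropagate the externally computed gradients through the generator;
    # the minus sign turns Adam's descent into ascent on device efficiency.
    devices.backward(gradient=-grads)
    opt.step()
```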
4. Penas DR, González P, Egea JA, Doallo R, Banga JR. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy. BMC Bioinformatics 2017;18:52. PMID: 28109249. PMCID: PMC5251293. DOI: 10.1186/s12859-016-1452-4.
Abstract
Background The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. Results The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. Conclusions The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1452-4) contains supplementary material, which is available to authorized users.
5. Gábor A, Villaverde AF, Banga JR. Parameter identifiability analysis and visualization in large-scale kinetic models of biosystems. BMC Systems Biology 2017;11:54. PMID: 28476119. PMCID: PMC5420165. DOI: 10.1186/s12918-017-0428-y.
Abstract
Background Kinetic models of biochemical systems usually consist of ordinary differential equations that have many unknown parameters. Some of these parameters are often practically unidentifiable, that is, their values cannot be uniquely determined from the available data. Possible causes are lack of influence on the measured outputs, interdependence among parameters, and poor data quality. Uncorrelated parameters can be seen as the key tuning knobs of a predictive model. Therefore, before attempting to perform parameter estimation (model calibration) it is important to characterize the subset(s) of identifiable parameters and their interplay. Once this is achieved, it is still necessary to perform parameter estimation, which poses additional challenges. Methods We present a methodology that (i) detects high-order relationships among parameters, and (ii) visualizes the results to facilitate further analysis. We use a collinearity index to quantify the correlation between parameters in a group in a computationally efficient way. Then we apply integer optimization to find the largest groups of uncorrelated parameters. We also use the collinearity index to identify small groups of highly correlated parameters. The results files can be visualized using Cytoscape, showing the identifiable and non-identifiable groups of parameters together with the model structure in the same graph. Results Our contributions alleviate the difficulties that appear at different stages of the identifiability analysis and parameter estimation process. We show how to combine global optimization and regularization techniques for calibrating medium and large scale biological models with moderate computation times. Then we evaluate the practical identifiability of the estimated parameters using the proposed methodology. The identifiability analysis techniques are implemented as a MATLAB toolbox called VisId, which is freely available as open source from GitHub (https://github.com/gabora/visid). Conclusions Our approach is geared towards scalability. It enables the practical identifiability analysis of dynamic models of large size, and accelerates their calibration. The visualization tool allows modellers to detect parts that are problematic and need refinement or reformulation, and provides experimentalists with information that can be helpful in the design of new experiments. Electronic supplementary material The online version of this article (doi:10.1186/s12918-017-0428-y) contains supplementary material, which is available to authorized users.
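The collinearity index mentioned above has a standard closed form (smallest eigenvalue of the normalised sensitivity cross-product); the sketch below is a minimal illustration of that computation, not the VisId toolbox, and assumes a local sensitivity matrix has already been computed.

```python
import numpy as np

def collinearity_index(S, subset):
    """Collinearity index of a parameter subset.
    S: sensitivity matrix (rows = observations, columns = parameters).
    Large values mean the selected columns are nearly linearly dependent,
    i.e. those parameters can compensate each other and are hard to identify."""
    S_sub = S[:, list(subset)]
    S_norm = S_sub / np.linalg.norm(S_sub, axis=0)      # unit-length columns
    lam_min = np.min(np.linalg.eigvalsh(S_norm.T @ S_norm))
    return 1.0 / np.sqrt(lam_min)

# Example: random 100x5 sensitivity matrix, index of the parameter group {0, 2, 3}.
S = np.random.default_rng(1).normal(size=(100, 5))
print(collinearity_index(S, (0, 2, 3)))
```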
6. Rieger TR, Allen RJ, Bystricky L, Chen Y, Colopy GW, Cui Y, Gonzalez A, Liu Y, White RD, Everett RA, Banks HT, Musante CJ. Improving the generation and selection of virtual populations in quantitative systems pharmacology models. Progress in Biophysics and Molecular Biology 2018;139:15-22. PMID: 29902482. DOI: 10.1016/j.pbiomolbio.2018.06.002.
Abstract
Quantitative systems pharmacology (QSP) models aim to describe mechanistically the pathophysiology of disease and predict the effects of therapies on that disease. For most drug development applications, it is important to predict not only the mean response to an intervention but also the distribution of responses, due to inter-patient variability. Given the necessary complexity of QSP models, and the sparsity of relevant human data, the parameters of QSP models are often not well determined. One approach to overcome these limitations is to develop alternative virtual patients (VPs) and virtual populations (Vpops), which allow for the exploration of parametric uncertainty and reproduce inter-patient variability in response to perturbation. Here we evaluated approaches to improve the efficiency of generating Vpops. We aimed to generate Vpops without sacrificing diversity of the VPs' pathophysiologies and phenotypes. To do this, we built upon a previously published approach (Allen et al., 2016) by (a) incorporating alternative optimization algorithms (genetic algorithm and Metropolis-Hastings) or alternatively (b) augmenting the optimized objective function. Each method improved the baseline algorithm by requiring significantly fewer plausible patients (precursors to VPs) to create a reasonable Vpop.
7. Qian ZM, Wang SH, Cheng XE, Chen YQ. An effective and robust method for tracking multiple fish in video image based on fish head detection. BMC Bioinformatics 2016;17:251. PMID: 27338122. PMCID: PMC4917973. DOI: 10.1186/s12859-016-1138-y.
Abstract
Background Fish tracking is an important step for video based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequence is a highly challenging problem. The current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. Results The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, global optimization method can be applied to associate the target between consecutive frames. Results show that our method can accurately detect the position and direction information of fish head, and has a good tracking performance for dozens of fish. Conclusion The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1138-y) contains supplementary material, which is available to authorized users.
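The association step described above (a cost built from head position and direction, minimised globally between consecutive frames) can be illustrated with a standard linear-assignment solver. This is a hedged sketch, not the authors' implementation: the cost weights and the angle handling are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev, curr, w_pos=1.0, w_dir=0.5):
    """Match fish-head detections between two frames.
    prev, curr: arrays of shape (n, 3) with columns (x, y, heading_rad)."""
    dx = prev[:, None, 0] - curr[None, :, 0]
    dy = prev[:, None, 1] - curr[None, :, 1]
    pos_cost = np.hypot(dx, dy)
    dang = np.abs(prev[:, None, 2] - curr[None, :, 2])
    dir_cost = np.minimum(dang, 2 * np.pi - dang)        # wrap angle difference
    cost = w_pos * pos_cost + w_dir * dir_cost
    rows, cols = linear_sum_assignment(cost)              # globally optimal matching
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

prev = np.array([[10.0, 5.0, 0.1], [40.0, 22.0, 1.5]])
curr = np.array([[41.0, 21.0, 1.4], [11.0, 6.0, 0.2]])
print(associate(prev, curr))    # [(0, 1), (1, 0)]
```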
8. Charbonnier C, Chagué S, Kolo FC, Chow JCK, Lädermann A. A patient-specific measurement technique to model shoulder joint kinematics. Orthop Traumatol Surg Res 2014;100:715-9. PMID: 25281547. DOI: 10.1016/j.otsr.2014.06.015.
Abstract
BACKGROUND Measuring dynamic in vivo shoulder kinematics is crucial to better understanding numerous pathologies. Motion capture systems using skin-mounted markers offer good solutions for non-invasive assessment of shoulder kinematics during dynamic movement. However, none of the current motion capture techniques have been used to study translation values at the joint, which is crucial to assess shoulder instability. The aim of the present study was to develop a dedicated patient-specific measurement technique based on motion capture and magnetic resonance imaging (MRI) to determine shoulder kinematics accurately. HYPOTHESIS Estimation of both rotations and translations at the shoulder joint using motion capture is feasible thanks to a patient-specific kinematic chain of the shoulder complex reconstructed from MRI data. MATERIALS AND METHODS We implemented a patient-specific kinematic chain model of the shoulder complex with loose constraints on joint translation. To assess the effectiveness of the technique, six subjects underwent data acquisition simultaneously with fluoroscopy and motion capture during flexion and empty-can abduction. The reference 3D shoulder kinematics was reconstructed from fluoroscopy and compared to that obtained from the new technique using skin markers. RESULTS Root mean square errors (RMSE) for shoulder orientation were within 4° (mean range: 2.0°-3.4°) for each anatomical axis and each motion. For glenohumeral translations, maximum RMSE for flexion was 3.7mm and 3.5mm for empty-can abduction (mean range: 1.9-3.3mm). Although the translation errors were significant, the computed patterns of humeral translation showed good agreement with published data. DISCUSSION To our knowledge, this study is the first attempt to calculate both rotations and translations at the shoulder joint based on skin-mounted markers. Results were encouraging and can serve as reference for future developments. The proposed technique could provide valuable kinematic data for the study of shoulder pathologies. LEVEL OF EVIDENCE Basic Science Study.
9. Emam MM, Houssein EH, Ghoniem RM. A modified reptile search algorithm for global optimization and image segmentation: Case study brain MRI images. Comput Biol Med 2023;152:106404. PMID: 36521356. DOI: 10.1016/j.compbiomed.2022.106404.
Abstract
In this paper, we propose an enhanced reptile search algorithm (RSA) for global optimization and for selecting optimal thresholding values in multilevel image segmentation. RSA is a recent metaheuristic optimization algorithm based on the hunting behavior of crocodiles. Like other metaheuristic algorithms, RSA is prone to inadequate diversity, entrapment in local optima, and unbalanced exploitation. The RUNge Kutta optimizer (RUN) is a novel metaheuristic algorithm that has demonstrated effectiveness in solving real-world optimization problems. The enhanced solution quality (ESQ) mechanism in RUN utilizes the thus-far best solution to promote the quality of solutions, improve the convergence speed, and effectively balance the exploration and exploitation steps. The scale factor (SF) in RUN has a randomized adaptive nature, which further improves exploration and exploitation; this parameter ensures a smooth transition from exploration to exploitation. In order to mitigate the drawbacks of RSA, this paper proposes a modified RSA (mRSA) that combines RSA with RUN. The ESQ mechanism and the scale factor boost the original RSA's performance, enhance convergence speed, help bypass local optima, and improve the balance between exploitation and exploration. The validity of mRSA was verified using two experimental sequences. First, we applied mRSA to the CEC'2020 benchmark functions of various types and dimensions, showing that mRSA has more robust search capabilities than the original RSA and popular counterpart algorithms in terms of statistical, convergence, and diversity measurements. The second experiment evaluated mRSA on a real-world application: multilevel thresholding segmentation of magnetic resonance imaging (MRI) brain images. Overall, the experimental results confirm that mRSA has strong optimization ability, is more successful at multilevel thresholding segmentation, and outperforms the comparison methods according to different performance measures.
10. Task based synthesis of serial manipulators. J Adv Res 2015;6:479-92. PMID: 26257946. PMCID: PMC4522582. DOI: 10.1016/j.jare.2014.12.006.
Abstract
Computing the optimal geometric structure of manipulators is one of the most intricate problems in contemporary robot kinematics. Robotic manipulators are designed and built to perform certain predetermined tasks, and there is a very close relationship between the structure of the manipulator and its kinematic performance. It is therefore important to incorporate such task requirements during the design and synthesis of robotic manipulators. These task requirements and performance constraints can be specified in terms of the required end-effector positions, orientations and velocities along the task trajectory. In this work, we present a comprehensive method to develop the optimal geometric structure (DH parameters) of a non-redundant six-degree-of-freedom serial manipulator from task descriptions, and we define, develop and test a methodology to design optimal manipulator configurations based on those descriptions. The methodology is devised to investigate all possible manipulator configurations that can satisfy the task performance requirements under the imposed joint constraints. Out of all the possible structures, those that can reach all the task points with the required orientations are selected. Next, these candidate structures are tested to see whether they can attain end-effector velocities in arbitrary directions within the user-defined joint constraints, so that they can deliver the best kinematic performance. Additionally, the least power-consuming configurations are identified.
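To make the role of the DH parameters in such a synthesis loop concrete, here is a generic forward-kinematics sketch in the standard Denavit-Hartenberg convention (not code from the paper): each candidate structure plus a joint configuration yields an end-effector pose, which is the quantity checked against the task points.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows, joint_angles):
    """Compose the link transforms of a revolute serial arm.
    dh_rows: per-joint (d, a, alpha); joint_angles: per-joint theta."""
    T = np.eye(4)
    for (d, a, alpha), q in zip(dh_rows, joint_angles):
        T = T @ dh_transform(q, d, a, alpha)
    return T

# Toy 3R candidate structure evaluated at one joint configuration.
dh_rows = [(0.3, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
pose = forward_kinematics(dh_rows, [0.2, -0.5, 1.0])
print(pose[:3, 3])    # end-effector position to compare with a task point
```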
11. Sulimov VB, Kutov DC, Taschilova AS, Ilin IS, Tyrtyshnikov EE, Sulimov AV. Docking Paradigm in Drug Design. Curr Top Med Chem 2021;21:507-546. PMID: 33292135. DOI: 10.2174/1568026620666201207095626.
Abstract
Docking is in demand for rational, computer-aided, structure-based drug design. A review of docking methods and programs is presented. Different types of docking programs are described, including docking of non-covalent small ligands, protein-protein docking, supercomputer docking, quantum docking, the new generation of docking programs, and the application of docking to the discovery of covalent inhibitors. Taking into account the threat of COVID-19, we also present a short review of docking applications to the discovery of inhibitors of SARS-CoV and SARS-CoV-2 target proteins, including our own results of the search for inhibitors of the SARS-CoV-2 main protease using docking and quantum chemical post-processing. We conclude that docking is extremely important in the fight against COVID-19 for the development of antiviral drugs acting directly on SARS-CoV-2 target proteins.
12. Dos Santos Rocha MSR, Pratto B, de Sousa R, Almeida RMRG, Cruz AJGD. A kinetic model for hydrothermal pretreatment of sugarcane straw. Bioresource Technology 2017;228:176-185. PMID: 28063360. DOI: 10.1016/j.biortech.2016.12.087.
Abstract
This work presents kinetic models of cellulose and hemicellulose extraction during hydrothermal pretreatment of sugarcane straw. Biomass was treated at 180, 195, and 210°C, using a solid/liquid ratio of 1:10 (w/v). Cellobiose, glucose, formic acid and hydroxymethylfurfural (from the cellulosic fraction) and xylose, arabinose, acetic acid, glucuronic acid and furfural (from the hemicellulosic fraction) were taken into account in the determination of the kinetic parameters. The global search algorithm Simulated Annealing was used to fit the models. At 195°C/15 min, 85% hemicellulose removal and 21% cellulose removal were reached. The confidence regions were observed to be broad, which is consistent with the parameters being highly correlated. The kinetic models proposed for the degradation of both the cellulosic and hemicellulosic fractions fitted the experimental data well.
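As a hedged illustration of this kind of fit (not the authors' model or data), the sketch below estimates a single first-order rate constant for a toy solubilisation model by minimising the sum of squared errors with SciPy's dual annealing, a close relative of the Simulated Annealing search used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import dual_annealing

t_data = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # min (illustrative)
x_data = np.array([100.0, 62.0, 38.0, 24.0, 15.0])     # residual hemicellulose (illustrative)

def simulate(k):
    # Toy first-order solubilisation: dX/dt = -k * X
    sol = solve_ivp(lambda t, x: -k * x, (t_data[0], t_data[-1]),
                    [x_data[0]], t_eval=t_data)
    return sol.y[0]

def sse(params):
    # Sum of squared errors between simulated and measured concentrations.
    return float(np.sum((simulate(params[0]) - x_data) ** 2))

result = dual_annealing(sse, bounds=[(1e-4, 1.0)], seed=0)
print(result.x, result.fun)    # fitted rate constant and residual
```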
13. Gros C, De Leener B, Dupont SM, Martin AR, Fehlings MG, Bakshi R, Tummala S, Auclair V, McLaren DG, Callot V, Cohen-Adad J, Sdika M. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization. Med Image Anal 2017;44:215-227. PMID: 29288983. DOI: 10.1016/j.media.2017.12.001.
Abstract
During the last two decades, MRI has been increasingly used for providing valuable quantitative information about spinal cord morphometry, such as quantification of the spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to be able to propose quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, field of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that attempts to strike a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature, which aims to automatically split brain and spine regions is introduced, to be able to detect a consistent spinal cord centerline, independently from the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T2-weighted n = 287, T1-weighted n = 120, T2∗-weighted n = 307, diffusion-weighted n = 90), 501 subjects including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a mean square error of 1.02 mm. OptiC achieved superior results compared to a state-of-the-art spinal cord localization technique based on the Hough transform, especially on pathological cases with an averaged mean square error of 1.08 mm vs. 13.16 mm (Wilcoxon signed-rank test p-value < .01). Images containing brain regions were identified with a 99% precision, on which brain and spine regions were separated with a distance error of 9.37 mm compared to ground-truth. Validation results on a challenging dataset suggest that OptiC could reliably be used for subsequent quantitative analyses tasks, opening the door to more robust analysis on pathological cases.
14. Galvão BRL, Viegas LP, Salahub DR, Lourenço MP. Reliability of semiempirical and DFTB methods for the global optimization of the structures of nanoclusters. J Mol Model 2020;26:303. PMID: 33064203. DOI: 10.1007/s00894-020-04484-4.
Abstract
In this work, we explore the possibility of using computationally inexpensive electronic structure methods, such as semiempirical and DFTB calculations, for the search of the global minimum (GM) structure of chemical systems. The basic prerequisite that these inexpensive methods will need to fulfill is that their lowest energy structures can be used as starting point for a subsequent local optimization at a benchmark level that will yield its GM. If this is possible, one could bypass the global optimization at the expensive method, which is currently impossible except for very small molecules. Specifically, we test our methods with clusters of second row elements including systems of several bonding types, such as alkali, metal, and covalent clusters. The results reveal that the DFTB3 method yields reasonable results and is a potential candidate for this type of applications. Even though the DFTB2 approach using standard parameters is proven to yield poor results, we show that a re-parametrization of only its repulsive part is enough to achieve excellent results, even when applied to larger systems outside the training set.
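The two-step workflow argued for above — global search on a cheap energy surface, then a single local relaxation at the benchmark level — can be sketched generically. The energies below are throwaway analytic stand-ins, not semiempirical, DFTB or DFT calls.

```python
import numpy as np
from scipy.optimize import basinhopping, minimize

def cheap_energy(x):
    # Stand-in for a fast semiempirical/DFTB-level energy of a cluster geometry.
    return float(np.sum((x**2 - 1.0) ** 2) + 0.1 * np.sum(np.sin(3 * x)))

def benchmark_energy(x):
    # Stand-in for the expensive reference-level energy used only for refinement.
    return cheap_energy(x) + 0.05 * float(np.sum(x))

x0 = np.random.default_rng(0).uniform(-2.0, 2.0, size=6)
# Step 1: global minimum search on the cheap surface.
gm_cheap = basinhopping(cheap_energy, x0, niter=50, seed=0)
# Step 2: one local optimization of that structure at the benchmark level.
refined = minimize(benchmark_energy, gm_cheap.x)
print(refined.fun, refined.x)
```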
15. Duarte BPM, Wong WK, Atkinson AC. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination. J Multivariate Anal 2015;135:11-24. PMID: 27330230. DOI: 10.1016/j.jmva.2014.11.006.
Abstract
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.
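For reference, the two-layer criterion behind T-optimality can be written in its standard form (generic notation, e.g. Atkinson-Fedorov; not copied from the paper): the design maximises the lack of fit of the rival model under the least favourable rival parameters,

```latex
\Delta(\xi) \;=\; \min_{\theta_2 \in \Theta_2} \sum_{i=1}^{n} w_i
\bigl[\eta_1(x_i) - \eta_2(x_i,\theta_2)\bigr]^2,
\qquad
\xi^{*} \;=\; \arg\max_{\xi}\, \Delta(\xi),
```

where η1 is the model assumed true, η2 the rival model, and ξ = {(x_i, w_i)} the design with support points x_i and weights w_i; the inner minimisation is what the SIP reformulation handles in its lower-level program.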
16. Computational study for protein-protein docking using global optimization and empirical potentials. Int J Mol Sci 2008;9:65-77. PMID: 19325720. PMCID: PMC2635596. DOI: 10.3390/ijms9010065.
Abstract
Protein-protein interactions are important for biochemical processes in biological systems. The 3D structure of the macromolecular complex resulting from the protein-protein association is a very useful source to understand its specific functions. This work focuses on computational study for protein-protein docking, where the individually crystallized structures of interacting proteins are treated as rigid, and the conformational space generated by the two interacting proteins is explored extensively. The energy function consists of intermolecular electrostatic potential, desolvation free energy represented by empirical contact potential, and simple repulsive energy terms. The conformational space is six dimensional, represented by translational vectors and rotational angles formed between two interacting proteins. The conformational sampling is carried out by the search algorithms such as simulated annealing (SA), conformational space annealing (CSA), and CSA combined with SA simulations (combined CSA/SA). Benchmark tests are performed on a set of 18 protein-protein complexes selected from various protein families to examine feasibility of these search methods coupled with the energy function above for protein docking study.
17. Beykal B, Avraamidou S, Pistikopoulos IPE, Onel M, Pistikopoulos EN. DOMINO: Data-driven Optimization of bi-level Mixed-Integer NOnlinear Problems. Journal of Global Optimization 2020;78:1-36. PMID: 32753792. PMCID: PMC7402589. DOI: 10.1007/s10898-020-00890-3.
Abstract
The Data-driven Optimization of bi-level Mixed-Integer NOnlinear problems (DOMINO) framework is presented for addressing the optimization of bi-level mixed-integer nonlinear programming problems. In this framework, bi-level optimization problems are approximated as single-level optimization problems by collecting samples of the upper-level objective and solving the lower-level problem to global optimality at those sampling points. This process is done through the integration of the DOMINO framework with a grey-box optimization solver to perform design of experiments on the upper-level objective, and to consecutively approximate and optimize bi-level mixed-integer nonlinear programming problems that are challenging to solve using exact methods. The performance of DOMINO is assessed through solving numerous bi-level benchmark problems, a land allocation problem in Food-Energy-Water Nexus, and through employing different data-driven optimization methodologies, including both local and global methods. Although this data-driven approach cannot provide a theoretical guarantee to global optimality, we present an algorithmic advancement that can guarantee feasibility to large-scale bi-level optimization problems when the lower-level problem is solved to global optimality at convergence.
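The single-level reduction described above can be illustrated on a toy continuous bi-level problem (this is not the DOMINO grey-box solver: the models, bounds and the use of differential evolution are placeholders): every evaluation of the leader's objective first solves the follower's problem to near-global optimality.

```python
import numpy as np
from scipy.optimize import differential_evolution

def follower(x):
    # Lower level: for a fixed leader decision x, solve the follower "globally".
    res = differential_evolution(lambda y: (y[0] - x) ** 2 + np.sin(5.0 * y[0]),
                                 bounds=[(-2.0, 2.0)], seed=0, tol=1e-8)
    return res.x[0]

def leader_objective(x):
    # Upper level evaluated on the follower's rational reaction.
    y = follower(x[0])
    return (x[0] - 1.0) ** 2 + (y + 0.5) ** 2

# Data-driven outer search over the leader variable (kept small to stay cheap).
res = differential_evolution(leader_objective, bounds=[(-2.0, 2.0)],
                             seed=0, maxiter=30, popsize=10)
print(res.x, res.fun)
```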
18. Horsak B, Pobatschnig B, Schwab C, Baca A, Kranzl A, Kainz H. Reliability of joint kinematic calculations based on direct kinematic and inverse kinematic models in obese children. Gait Posture 2018;66:201-207. PMID: 30199779. DOI: 10.1016/j.gaitpost.2018.08.027.
Abstract
BACKGROUND In recent years, the reliability of inverse (IK) and direct kinematic (DK) models in gait analysis have been assessed intensively, but mainly for lean populations. However, obesity is a growing issue. So far, the sparse results available for the reliability of clinical gait analysis in obese populations are limited to direct kinematic models. Reliability error-margins for inverse kinematic models in obese populations have not been reported yet. RESEARCH QUESTIONS Is there a difference in the reliability of IK models compared with a DK model in obese children? Are there any differences in the joint kinematic output between IK and DK models? METHODS A test-retest study was conducted using three-dimensional gait analysis data from two obese female and eight obese male participants from an earlier study. Data were analyzed using a DK model and two OpenSim-based IK models. Test-retest reliability was compared by calculating the Standard Error of Measurement (SEM) along with similar absolute reliability measures. A Friedman Test was used to assess whether there were any significant differences in the reliability between the models. Kinematic output of the models was compared by using Statistical Parametric Mapping (SPM). RESULTS No significant differences were found in the reliability between the DK and IK models. The SPM analysis indicated several significant differences between both IK models and the DK approach. Most of these differences were continuous offsets. SIGNIFICANCE Reliability values showed clinically acceptable error-margins and were comparable between all models. Therefore, our results support the careful use of IK models in overweight or obese populations, e.g. for musculoskeletal modelling studies. The inconsistent kinematic output can mainly be explained by different model conventions and anatomical segment coordinate frame definitions.
19. Manheim DC, Detwiler RL. Accurate and reliable estimation of kinetic parameters for environmental engineering applications: A global, multi objective, Bayesian optimization approach. MethodsX 2019;6:1398-1414. PMID: 31245280. PMCID: PMC6582191. DOI: 10.1016/j.mex.2019.05.035.
Abstract
Accurate and reliable predictions of bacterial growth and metabolism from unstructured kinetic models are critical to the proper operation and design of engineered biological treatment and remediation systems. As such, parameter estimation has become a routine challenge in the field of Environmental Engineering. Among the main issues identified with parameter estimation, the model-data calibration approach is a crucial, yet often overlooked and difficult, optimization problem. Here, a novel and rigorous global, multi-objective, and fully Bayesian optimization approach is presented that overcomes challenges associated with multivariate, sparse and noisy data, as well as the highly non-linear model structures commonly encountered in Environmental Engineering practice. This approach allows an improved definition and targeting of the compromise solution space for all multivariate problems, enabling efficient convergence, and includes a Bayesian component to thoroughly explore parameter and model prediction uncertainty. The global optimization approach outperformed standard local non-linear regression routines in terms of parameter accuracy and precision, overcomes issues associated with premature convergence, and addresses overfitting of different variables in the calibration process.
• A sequential single-objective, multi-objective, and Bayesian optimization workflow was developed to accurately and reliably estimate unstructured kinetic model parameters.
• The global single-objective approach defines the global optimum (the best compromise solution) and "extreme" parameter solutions for each variable, while the global multi-objective approach confirms the "best" compromise solution space for the Bayesian search to target; convergence is assessed using the single-objective results.
• The Approximate Bayesian Computation approach fully explores parameter and model prediction uncertainty, targeting the compromise solution space previously identified.
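A minimal Approximate Bayesian Computation rejection sketch of the kind of Bayesian stage described above (the prior, forward model, distance and tolerance are all placeholders, not the published workflow):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([2.1, 3.9, 6.2, 7.8])          # illustrative measurements

def simulate(theta):
    # Placeholder forward model: y = theta * t plus observation noise.
    t = np.array([1.0, 2.0, 3.0, 4.0])
    return theta * t + rng.normal(0.0, 0.2, size=t.size)

def abc_rejection(n_draws=20000, tol=0.8):
    """Keep prior draws whose simulated output lies within `tol` of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 5.0)              # prior draw
        if np.linalg.norm(simulate(theta) - observed) < tol:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
print(posterior.mean(), posterior.std())           # approximate posterior summary
```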
20. Zhao Y, Liu S. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints. SpringerPlus 2016;5:1302. PMID: 27547676. PMCID: PMC4978663. DOI: 10.1186/s40064-016-2984-9.
Abstract
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is constructed using a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a sequence of linear relaxation problems. Global convergence is proved, and results on several sample examples and a small randomized experiment show that the proposed algorithm is feasible and efficient.
21. Baltean-Lugojan R, Misener R. Piecewise parametric structure in the pooling problem: from sparse strongly-polynomial solutions to NP-hardness. Journal of Global Optimization 2017;71:655-690. PMID: 30956395. PMCID: PMC6417401. DOI: 10.1007/s10898-017-0577-y.
Abstract
The standard pooling problem is a NP-hard subclass of non-convex quadratically-constrained optimization problems that commonly arises in process systems engineering applications. We take a parametric approach to uncovering topological structure and sparsity, focusing on the single quality standard pooling problem in its p-formulation. The structure uncovered in this approach validates Professor Christodoulos A. Floudas' intuition that pooling problems are rooted in piecewise-defined functions. We introduce dominant active topologies under relaxed flow availability to explicitly identify pooling problem sparsity and show that the sparse patterns of active topological structure are associated with a piecewise objective function. Finally, the paper explains the conditions under which sparsity vanishes and where the combinatorial complexity emerges to cross over the P / NP boundary. We formally present the results obtained and their derivations for various specialized single quality pooling problem subclasses.
22. An efficient rotational direction heap-based optimization with orthogonal structure for medical diagnosis. Comput Biol Med 2022;146:105563. PMID: 35551010. DOI: 10.1016/j.compbiomed.2022.105563.
Abstract
The heap-based optimizer (HBO) is a recently proposed optimization method that may face local stagnation and slow convergence because it lacks a detailed analysis of promising solutions and a comprehensive search. Therefore, to mitigate these drawbacks and strengthen the algorithm's performance in the field of medical diagnosis, a new MGOHBO method is proposed by introducing the modified Rosenbrock rotational direction method (MRM), an operator from the grey wolf optimizer (GWM), and an orthogonal learning strategy (OL). MGOHBO is compared with eleven well-known and improved optimizers on the IEEE CEC 2017 suite. The results on these benchmark functions indicate that the boosted MGOHBO has significant advantages in both convergence accuracy and convergence speed. Additionally, this article analyzes the diversity and balance of MGOHBO in detail. Finally, the proposed MGOHBO algorithm is used to optimize kernel extreme learning machines (KELM), and a new MGOHBO-KELM model is proposed. To validate the performance of MGOHBO-KELM, seven disease diagnosis problems were used for testing. Compared with advanced models such as HBO-KELM and BP, the MGOHBO-KELM model achieves the best results, which also demonstrates its practical significance for solving medical diagnosis problems.
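Because the diagnostic stage above builds on kernel extreme learning machines, a compact KELM sketch may help: training is a single regularised linear solve in kernel space. This is the standard KELM formulation with an RBF kernel, not the MGOHBO-KELM code; the regularisation constant C and kernel width gamma below are illustrative (these are the kind of hyperparameters an outer optimizer would tune).

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian kernel from pairwise squared distances.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: beta = (I/C + K)^-1 * targets."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X_new):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.beta

# Toy binary diagnosis data.
X = np.random.default_rng(0).random((80, 4))
y = (X.sum(axis=1) > 2.0).astype(float).reshape(-1, 1)
model = KELM(C=10.0, gamma=0.5).fit(X, y)
acc = ((model.predict(X) > 0.5) == (y > 0.5)).mean()
print(acc)    # training accuracy of the sketch model
```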
23. Corbera S, Olazagoitia JL, Lozano JA. Multi-objective global optimization of a butterfly valve using genetic algorithms. ISA Transactions 2016;63:401-412. PMID: 27056745. DOI: 10.1016/j.isatra.2016.03.008.
Abstract
A butterfly valve is a type of valve typically used for isolating or regulating flow, in which the closing mechanism takes the form of a disc. For a long time, the attention of many researchers has focused on carrying out structural (FEM) and computational fluid dynamics (CFD) analyses in order to increase the performance of this type of flow-control device. This paper proposes a novel multi-objective approach for the design optimization of a butterfly valve using advanced genetic algorithms based on Pareto dominance. Firstly, after defining the need for this study and analyzing previous papers on the subject, the initial butterfly valve is presented and the initial fluid and structural analyses are carried out. Secondly, the optimization problem is defined and the optimization strategy is presented; the design variables are identified and a parameterization model of the valve is built. Thirdly, initial design candidates are generated by DOE and design optimization using genetic algorithms is performed; in this part of the process, structural and CFD analyses are computed for each candidate simultaneously. The optimization process involves several types of software, and Python scripts are needed to connect them and link all the steps. Finally, a set of optimal solutions is obtained, and the optimum design, which provides a 65.4% stress reduction, a 5% mass reduction and an 11.3% flow increase, is selected in accordance with manufacturer preferences. Validation of the results is provided by comparing experimental test results with the values obtained for the initial design. The results demonstrate the capability and potential of the proposed methodology.
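The selection step in this kind of Pareto-based genetic algorithm rests on a simple dominance test; the sketch below shows that test and a brute-force non-dominated filter in minimisation form (generic code, not the authors' implementation; the numbers are made up).

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors."""
    points = np.asarray(points)
    keep = [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return points[keep]

# Objectives per candidate valve design: (stress, mass, -flow), all minimised.
designs = [(120.0, 3.1, -9.5), (100.0, 3.0, -10.2), (130.0, 2.8, -9.0)]
print(pareto_front(designs))    # the first design is dominated and filtered out
```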
24. Bowyer JE, Lc de Los Santos E, Styles KM, Fullwood A, Corre C, Bates DG. Modeling the architecture of the regulatory system controlling methylenomycin production in Streptomyces coelicolor. J Biol Eng 2017;11:30. PMID: 29026441. PMCID: PMC5625687. DOI: 10.1186/s13036-017-0071-6.
Abstract
Background The antibiotic methylenomycin A is produced naturally by Streptomyces coelicolor A3(2), a model organism for streptomycetes. This compound is of particular interest to synthetic biologists because all of the associated biosynthetic, regulatory and resistance genes are located on a single cluster on the SCP1 plasmid, making the entire module easily transferable between different bacterial strains. Understanding further the regulation and biosynthesis of the methylenomycin producing gene cluster could assist in the identification of motifs that can be exploited in synthetic regulatory systems for the rational engineering of novel natural products and antibiotics. Results We identify and validate a plausible architecture for the regulatory system controlling methylenomycin production in S. coelicolor using mathematical modeling approaches. Model selection via an approximate Bayesian computation (ABC) approach identifies three candidate model architectures that are most likely to produce the available experimental data, from a set of 48 possible candidates. Subsequent global optimization of the parameters of these model architectures identifies a single model that most accurately reproduces the dynamical response of the system, as captured by time series data on methylenomycin production. Further analyses of variants of this model architecture that capture the effects of gene knockouts also reproduce qualitative experimental results observed in mutant S. coelicolor strains. Conclusions The mechanistic mathematical model developed in this study recapitulates current biological knowledge of the regulation and biosynthesis of the methylenomycin producing gene cluster, and can be used in future studies to make testable predictions and formulate experiments to further improve our understanding of this complex regulatory system.
25. Akyol S. A new hybrid method based on Aquila optimizer and tangent search algorithm for global optimization. Journal of Ambient Intelligence and Humanized Computing 2022;14:8045-8065. PMID: 35968266. PMCID: PMC9358922. DOI: 10.1007/s12652-022-04347-1.
Abstract
Since no single algorithm can provide optimal solutions for all problems, new metaheuristic methods are continually being proposed, often by combining existing algorithms or creating adaptive versions of them. Metaheuristic methods should have balanced exploitation and exploration stages; in some methods one of these two abilities is sufficient while the other is not, and by integrating and hybridizing the strengths of two algorithms a more efficient method can be formed. In this paper, the Aquila optimizer-tangent search algorithm (AO-TSA) is proposed as a new hybrid approach that uses the intensification stage of the tangent search algorithm (TSA) in place of the Aquila optimizer's (AO) limited exploration stage, improving its exploitation capabilities. In addition, the local minimum escape stage of TSA is applied in AO-TSA to avoid local minimum stagnation. The performance of AO-TSA is compared with other current metaheuristic algorithms using a total of twenty-one benchmark functions, consisting of six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions, according to different metrics. Furthermore, two real engineering design problems are also used for performance comparison, and sensitivity analysis and statistical tests are performed. Experimental results show that the hybrid AO-TSA gives promising results and appears to be an effective method for global search and optimization problems.