1. Neural network emulation of the human ventricular cardiomyocyte action potential for more efficient computations in pharmacological studies. eLife 2024; 12:RP91911. PMID: 38598284; PMCID: PMC11006416; DOI: 10.7554/elife.91911.
Abstract
Computer models of the human ventricular cardiomyocyte action potential (AP) have reached a level of detail and maturity that has led to an increasing number of applications in the pharmaceutical sector. However, interfacing the models with experimental data can become a significant computational burden. To mitigate this burden, the present study introduces a neural network (NN) that emulates the AP for given maximum conductances of selected ion channels, pumps, and exchangers. Its applicability in pharmacological studies was tested on synthetic and experimental data. The NN emulator potentially enables massive speed-ups compared to regular simulations, and the forward problem (find the drugged AP for pharmacological parameters defined as scaling factors of control maximum conductances) on synthetic data could be solved with average root-mean-square errors (RMSE) of 0.47 mV in normal APs and of 14.5 mV in abnormal APs exhibiting early afterdepolarizations (72.5% of the emulated APs aligned with the abnormality, and the substantial majority of the remaining APs came close to it). This demonstrates not only very fast and mostly very accurate AP emulations but also the capability of accounting for discontinuities, a major advantage over existing emulation strategies. Furthermore, the inverse problem (find the pharmacological parameters for control and drugged APs through optimization) on synthetic data could be solved with high accuracy, shown by a maximum RMSE of 0.22 in the estimated pharmacological parameters. However, notable mismatches were observed between pharmacological parameters estimated from experimental data and distributions obtained from the Comprehensive in vitro Proarrhythmia Assay initiative. These larger inaccuracies can be attributed particularly to the fact that small tissue preparations were studied while the emulator was trained on single-cardiomyocyte data.
Overall, our study highlights the potential of NN emulators as a powerful tool for increased efficiency in future quantitative systems pharmacology studies.
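The forward and inverse problems described above can be sketched with a toy stand-in; everything below (basis traces, parameter values) is illustrative, assuming only that the emulator maps scaling factors to an AP trace cheaply:

```python
import numpy as np

# Illustrative stand-in for the NN emulator: maps 4 conductance scaling
# factors to a 100-sample AP trace (the paper's emulator is a trained
# neural network; a linear map over assumed basis traces is used here).
t = np.linspace(0.0, 1.0, 100)
basis = np.stack([np.exp(-t / tau) for tau in (0.05, 0.2, 0.5, 1.0)])
emulate = lambda theta: theta @ basis

# Forward problem: emulate the drugged AP for known scaling factors.
theta_true = np.array([0.9, 0.5, 1.2, 0.8])
ap_drugged = emulate(theta_true)

# Inverse problem: recover the scaling factors from the AP. Because this
# stand-in is linear, least squares suffices; with the real nonlinear NN
# emulator, a generic optimizer would minimize the same mismatch.
theta_est, *_ = np.linalg.lstsq(basis.T, ap_drugged, rcond=None)
rmse = np.sqrt(np.mean((theta_est - theta_true) ** 2))
print(f"parameter RMSE: {rmse:.2e}")
```

The speed-up comes entirely from the emulator call being cheap, so the inverse optimization can afford many evaluations.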
2. Efficient sensitivity analysis for biomechanical models with correlated inputs. International Journal for Numerical Methods in Biomedical Engineering 2024; 40:e3797. PMID: 38116742; DOI: 10.1002/cnm.3797.
Abstract
In most variance-based sensitivity analysis (SA) approaches applied to biomechanical models, statistical independence of the model inputs is assumed. However, the model inputs are often correlated. This might alter the interpretation of the SA results, which may severely impact the guidance provided during model development and personalization. Potential reasons for the infrequent use of SA techniques that account for input correlation are the associated high computational costs, especially for models with many parameters, and the fact that the input correlation structure is often unknown. The aim of this study was to propose an efficient correlated global sensitivity analysis method by applying a surrogate model-based approach. Furthermore, this article demonstrates how correlated SA should be interpreted and how the applied method can guide the modeler during model development and personalization, even when the correlation structure is not entirely known beforehand. The proposed methodology was applied to a typical example of a pulse wave propagation model and resulted in accurate SA results that could be obtained at a theoretically 27,000× lower computational cost compared to the correlated SA approach without a surrogate model. Furthermore, our results demonstrate that input correlations can significantly affect SA results, which emphasizes the need to thoroughly investigate the effect of input correlations during model development. We conclude that our proposed surrogate-based SA approach allows modelers to efficiently perform correlated SA on complex biomechanical models and to focus on input prioritization, input fixing and model reduction, or on assessing the dependency structure between parameters.
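As a sketch of the surrogate-based correlated SA idea: the toy model, correlation value, and sample sizes below are illustrative assumptions (the paper uses a pulse wave propagation model); only the estimator structure is shown, i.e., conditional resampling on a cheap least-squares surrogate instead of the expensive model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for an expensive biomechanical model with two
# correlated Gaussian inputs (correlation rho = 0.6).
def model(x):
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

rho = 0.6
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# A small set of expensive runs fits a cheap polynomial surrogate.
X_train = rng.standard_normal((300, 2)) @ L.T
y_train = model(X_train)
feats = lambda x: np.column_stack(
    [np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(feats(X_train), y_train, rcond=None)
surrogate = lambda x: feats(x) @ coef

# Correlated first-order index S0 = Var(E[Y | X0]) / Var(Y), estimated on
# the surrogate: for each X0 value, resample X1 from its conditional
# distribution X1 | X0 = rho*X0 + sqrt(1 - rho^2)*eps.
N, inner = 4000, 200
x0 = rng.standard_normal(N)
x1 = rho * x0[:, None] + np.sqrt(1 - rho**2) * rng.standard_normal((N, inner))
xs = np.column_stack([np.repeat(x0, inner), x1.ravel()])
cond_mean = surrogate(xs).reshape(N, inner).mean(axis=1)

y = surrogate(rng.standard_normal((200000, 2)) @ L.T)
S0 = cond_mean.var() / y.var()
print(f"correlated first-order index of input 0: {S0:.2f}")
```

All inner-loop evaluations hit the surrogate, which is where the claimed cost reduction originates.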
3. Instantaneous Generation of Subject-Specific Finite Element Models of the Hip Capsule. Bioengineering (Basel) 2023; 11:37. PMID: 38247914; PMCID: PMC10813259; DOI: 10.3390/bioengineering11010037.
Abstract
Subject-specific hip capsule models could offer insights into impingement and dislocation risk when coupled with computer-aided surgery, but model calibration is time-consuming using traditional techniques. This study developed a framework for instantaneously generating subject-specific finite element (FE) capsule representations from regression models trained with a probabilistic approach. A validated FE model of the implanted hip capsule was evaluated probabilistically to generate a training dataset relating capsule geometry and material properties to hip laxity. Multivariate regression models were trained using 90% of trials to predict capsule properties based on hip laxity and attachment site information. The regression models were validated using the remaining 10% of the training set by comparing differences in hip laxity between the original trials and the regression-derived capsules. Root mean square errors (RMSEs) in laxity predictions ranged from 1.8° to 2.3°, depending on the type of laxity used in the training set. The RMSE, when predicting the laxity measured from five cadaveric specimens with total hip arthroplasty, was 4.5°. Model generation time was reduced from days to milliseconds. The results demonstrated the potential of regression-based training to instantaneously generate subject-specific FE models and have implications for integrating subject-specific capsule models into surgical planning software.
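The train/validate workflow in this abstract can be illustrated with synthetic numbers; the linear ground truth, dimensions, and noise level below are invented placeholders, not the study's FE data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: 500 probabilistic FE trials relating
# 6 laxity measurements (inputs) to 3 capsule properties (outputs).
n_trials, n_laxity, n_props = 500, 6, 3
true_W = rng.normal(size=(n_laxity, n_props))
laxity = rng.normal(size=(n_trials, n_laxity))
props = laxity @ true_W + 0.05 * rng.normal(size=(n_trials, n_props))

# 90/10 split, as in the abstract.
split = int(0.9 * n_trials)
X_tr, X_te = laxity[:split], laxity[split:]
Y_tr, Y_te = props[:split], props[split:]

# Multivariate least-squares regression: once fitted, prediction is a
# single matrix product, which is why model generation becomes instant.
W, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)
rmse = np.sqrt(np.mean((X_te @ W - Y_te) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```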
4. Influence of the mechanical and geometrical parameters on the cellular uptake of nanoparticles: A stochastic approach. International Journal for Numerical Methods in Biomedical Engineering 2022; 38:e3598. PMID: 35343089; DOI: 10.1002/cnm.3598.
Abstract
Nanoparticles (NPs) are used for drug delivery with enhanced selectivity and reduced side-effect toxicity in cancer treatments. In the literature, the influence of the mechanical and geometrical properties of NPs on their cellular uptake has been studied through experimental investigations. However, because the parameters of such a complex system are difficult to vary independently, it remains hard to draw conclusions about the influence of each one on the cellular internalization of an NP. In this context, different mechanical/mathematical models for the cellular uptake of NPs have been developed. In this paper, we numerically investigate the influence of the NP's aspect ratio, the membrane tension, and the cell-NP adhesion on the uptake of the NP using the model introduced in [1], coupled with a numerical stochastic scheme to measure the weight of each of the aforementioned parameters. The results reveal that the aspect ratio of the particle is the most influential parameter on the wrapping of the particle by the cell membrane, and that adhesion contributes twice as much as the membrane tension. Our numerical results match previous experimental observations.
5. Continuous-Time Surrogate Models for Data-Driven Dynamic Optimization. European Symposium on Computer Aided Process Engineering (ESCAPE) 2022; 51:205-210. PMID: 36622647; PMCID: PMC9823268; DOI: 10.1016/b978-0-323-95879-0.50035-7.
Abstract
This work addresses the control optimization of time-varying systems without the full discretization of the underlying high-fidelity models and derives optimal control trajectories using surrogate modeling and data-driven optimization. Time-varying systems are ubiquitous in the chemical process industry, and their systematic control is essential for ensuring that each system operates at the desired settings. To this end, we postulate nonlinear continuous-time control action trajectories using time-varying surrogate models and derive the parameters of these functional forms using data-driven optimization. Data-driven optimization allows us to collect data from the high-fidelity model without pursuing any discretization and to fine-tune candidate control trajectories based on the retrieved input-output information from the nonlinear system. We test exponential and polynomial surrogate forms for the control trajectories and explore various data-driven optimization strategies (local vs. global and sample-based vs. model-based) to test the consistency of each approach for controlling dynamic systems. The applicability of our approach is demonstrated on a motivating example and a CSTR control case study with favorable results.
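A minimal sketch of the idea — a parameterized continuous-time control trajectory tuned by a sample-based data-driven strategy against simulator responses — using an invented first-order system and setpoint (none of the numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# "High-fidelity" stand-in: first-order system dx/dt = -x + u(t),
# integrated directly; the control itself is never discretized.
def simulate(control, x0=0.0, T=5.0, n=500):
    dt = T / n
    x = x0
    for k in range(n):
        x += dt * (-x + control(k * dt))
    return x  # final state returned to the optimizer

target = 1.0  # illustrative setpoint for the final state

# Continuous-time surrogate control form u(t) = a*exp(-b*t); its two
# parameters are tuned by a simple sample-based (random search) strategy
# using only the simulator's input-output responses.
def objective(p):
    a, b = p
    return (simulate(lambda t: a * np.exp(-b * t)) - target) ** 2

best_p, best_f = np.array([1.0, 1.0]), objective([1.0, 1.0])
step = 1.0
for it in range(600):
    cand = best_p + step * rng.standard_normal(2)
    f = objective(cand)
    if f < best_f:
        best_p, best_f = cand, f
    step *= 0.99  # gradually focus the search

print(f"best tracking error: {best_f:.2e}")
```

The functional control form is what removes the need to discretize the control trajectory itself.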
6. Projecting Climate Dependent Coastal Flood Risk With a Hybrid Statistical Dynamical Model. Earth's Future 2021; 9:e2021EF002285. PMID: 35864860; PMCID: PMC9286665; DOI: 10.1029/2021ef002285.
Abstract
Numerical models for tides, storm surge, and wave runup have demonstrated ability to accurately define spatially varying flood surfaces. However, these models are typically too computationally expensive to dynamically simulate the full parameter space of future oceanographic, atmospheric, and hydrologic conditions that will constructively compound in the nearshore to cause both extreme event and nuisance flooding during the 21st century. A surrogate modeling framework of waves, winds, and tides is developed in this study to efficiently predict spatially varying nearshore and estuarine water levels contingent on any combination of offshore forcing conditions. The surrogate models are coupled with a time-dependent stochastic climate emulator that provides efficient downscaling for hypothetical iterations of offshore conditions. Together, the hybrid statistical-dynamical framework can assess present day and future coastal flood risk, including the chronological characteristics of individual flood and wave-induced dune overtopping events and their changes into the future. The framework is demonstrated at Naval Base Coronado in San Diego, CA, utilizing the regional Coastal Storm Modeling System (CoSMoS; composed of Delft3D and XBeach) as the dynamic simulator and Gaussian process regression as the surrogate modeling tool. Validation of the framework uses both in-situ tide gauge observations within San Diego Bay, and a nearshore cross-shore array deployment of pressure sensors in the open beach surf zone. The framework reveals the relative influence of large-scale climate variability on future coastal flood resilience metrics relevant to the management of an open coast artificial berm, as well as the stochastic nature of future total water levels.
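A minimal sketch of the Gaussian process regression step, assuming a one-dimensional offshore forcing variable and a synthetic stand-in for the dynamic simulator (the kernel choice and length scale are illustrative, not CoSMoS settings):

```python
import numpy as np

# Synthetic stand-in for the expensive dynamic simulator: water level as a
# smooth function of one offshore forcing variable (e.g., wave height).
simulator = lambda h: np.sin(2.0 * h) + 0.5 * h

# A handful of expensive simulator runs serve as training data.
H_train = np.linspace(0.0, 3.0, 12)
y_train = simulator(H_train)

# Squared-exponential kernel; the 0.5 length scale is an assumed value.
def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# Gaussian process regression: posterior mean at new forcing conditions
# (a small jitter keeps the kernel matrix well conditioned).
K = rbf(H_train, H_train) + 1e-8 * np.eye(len(H_train))
alpha = np.linalg.solve(K, y_train)

H_new = np.linspace(0.0, 3.0, 200)
y_pred = rbf(H_new, H_train) @ alpha

err = np.max(np.abs(y_pred - simulator(H_new)))
print(f"max emulation error: {err:.4f}")
```

Once trained, the surrogate can be queried for any hypothetical offshore condition the climate emulator generates, at negligible cost.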
7. Bridging implementation gaps to connect large ecological datasets and complex models. Ecol Evol 2021; 11:18271-18287. PMID: 35003672; PMCID: PMC8717344; DOI: 10.1002/ece3.8420.
Abstract
Merging robust statistical methods with complex simulation models is a frontier for improving ecological inference and forecasting. However, bringing these tools together is not always straightforward. Matching data with model output, determining starting conditions, and addressing high dimensionality are some of the complexities that arise when attempting to incorporate ecological field data with mechanistic models directly using sophisticated statistical methods. To illustrate these complexities and pragmatic paths forward, we present an analysis using tree-ring basal area reconstructions in Denali National Park (DNPP) to constrain successional trajectories of two spruce species (Picea mariana and Picea glauca) simulated by a forest gap model, the University of Virginia Forest Model Enhanced (UVAFME). Through this process, we provide preliminary ecological inference about the long-term competitive dynamics between slow-growing P. mariana and relatively faster-growing P. glauca. Incorporating tree-ring data into UVAFME allowed us to estimate a bias correction for stand age with improved parameter estimates. We found that higher parameter values for P. mariana minimum growth under stress and P. glauca maximum growth rate were key to improving simulations of coexistence, agreeing with recent research that faster-growing P. glauca may outcompete P. mariana under climate change scenarios. The implementation challenges we highlight are a crucial part of the conversation for how to bring models together with data to improve ecological inference and forecasting.
8. Efficient Surrogate Modeling and Design Optimization of Compact Integrated On-Chip Inductors Based on Multi-Fidelity EM Simulation Models. Micromachines 2021; 12:mi12111341. PMID: 34832753; PMCID: PMC8624611; DOI: 10.3390/mi12111341.
Abstract
High-performance and small-size on-chip inductors play a critical role in contemporary radio-frequency integrated circuits. This work presents a reliable surrogate modeling technique combining low-fidelity EM simulation models, response surface approximations based on kriging interpolation, and space mapping technology. The reported method is useful for the development of broadband and highly accurate data-driven models of integrated inductors within a practical timeframe, especially in terms of the computational expense of training data acquisition. Application of the constructed surrogate model for rapid design optimization of a compact on-chip inductor is demonstrated. The optimized EM-validated design solution can be reached at a low computational cost, which is a considerable improvement over existing approaches. In addition, this work provides a description and illustrates the usefulness of a multi-fidelity design optimization method incorporating EM computational models of graduated complexity and local polynomial approximations managed by an output space mapping optimization framework. As shown by the application example, the final design solution is obtained at the cost of a few high-fidelity EM simulations of a small-size integrated coil. A supplementary description of variable-fidelity EM computational models and a trade-off between model accuracy and its processing time complements the work.
9. From Prediction to Prescription: Evolutionary Optimization of Nonpharmaceutical Interventions in the COVID-19 Pandemic. IEEE Transactions on Evolutionary Computation 2021; 25:386-401. PMID: 36694708; PMCID: PMC8545006; DOI: 10.1109/tevc.2021.3063217.
Abstract
Several models have been developed to predict how the COVID-19 pandemic spreads, and how it could be contained with nonpharmaceutical interventions, such as social distancing restrictions and school and business closures. This article demonstrates how evolutionary AI can be used to facilitate the next step, i.e., determining the most effective intervention strategies automatically. Through evolutionary surrogate-assisted prescription, it is possible to generate a large number of candidate strategies and evaluate them with predictive models. In principle, strategies can be customized for different countries and locales, and can balance the need to contain the pandemic with the need to minimize economic impact. Early experiments suggest that workplace and school restrictions are the most important and need to be designed carefully. They also demonstrate that the results of lifting restrictions can be unreliable, and suggest creative ways in which restrictions can be implemented softly, e.g., by alternating them over time. As more data becomes available, the approach can be increasingly useful in dealing with COVID-19 as well as possible future pandemics.
10. Improved surrogates in inertial confinement fusion with manifold and cycle consistencies. Proc Natl Acad Sci U S A 2020; 117:9741-9746. PMID: 32312816; PMCID: PMC7211929; DOI: 10.1073/pnas.1916634117.
Abstract
Neural networks have demonstrated remarkable success in predictive modeling. However, when applied to surrogate modeling, they 1) are often nonrobust, 2) require large amounts of data, and 3) are inadequate for estimating the inversion process; i.e., they do not capture parameter sensitivities well. We propose a different form of self-consistency regularization by incorporating an inverse surrogate into the learning process and show that it leads to highly robust, self-consistent surrogate models for complex scientific applications. Neural networks have become the method of choice in surrogate modeling because of their ability to characterize arbitrary, high-dimensional functions in a data-driven fashion. This paper advocates for the training of surrogates that are 1) consistent with the physical manifold, resulting in physically meaningful predictions, and 2) cyclically consistent with a jointly trained inverse model; i.e., backmapping predictions through the inverse results in the original input parameters. We find that these two consistencies lead to surrogates that are superior in terms of predictive performance, are more resilient to sampling artifacts, and tend to be more data efficient. Using inertial confinement fusion (ICF) as a test-bed problem, we model a one-dimensional semianalytic numerical simulator and demonstrate the effectiveness of our approach.
11. Surfing on Fitness Landscapes: A Boost on Optimization by Fourier Surrogate Modeling. Entropy 2020; 22:e22030285. PMID: 33286059; PMCID: PMC7516743; DOI: 10.3390/e22030285.
Abstract
Surfing in rough waters is not always as fun as wave riding the “big one”. Similarly, in optimization problems, fitness landscapes with a huge number of local optima make the search for the global optimum a hard and generally annoying game. Computational Intelligence optimization metaheuristics use a set of individuals that “surf” across the fitness landscape, sharing and exploiting pieces of information about local fitness values in a joint effort to find the global optimum. In this context, we designed surF, a novel surrogate modeling technique that leverages the discrete Fourier transform to generate a smoother, and possibly easier to explore, fitness landscape. The rationale behind this idea is that filtering out the high frequencies of the fitness function and keeping only its partial information (i.e., the low frequencies) can actually be beneficial in the optimization process. We prove our theory by combining surF with a settings-free variant of Particle Swarm Optimization (PSO) based on Fuzzy Logic, called Fuzzy Self-Tuning PSO. Specifically, we introduce a new algorithm, named F3ST-PSO, which performs a preliminary exploration on the surrogate model followed by a second optimization using the actual fitness function. We show that F3ST-PSO can lead to improved performances, notably using the same budget of fitness evaluations.
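The low-pass-filtering idea behind surF can be sketched in one dimension; the rugged test fitness and the cutoff of 8 Fourier coefficients are assumptions for illustration, not the paper's settings:

```python
import numpy as np

# Rugged 1-D fitness: a smooth bowl plus high-frequency ripple
# (an invented stand-in for a multimodal landscape).
x = np.linspace(-5.0, 5.0, 512)
fitness = x**2 + 2.0 * np.sin(15.0 * x)

# Low-pass filter: keep only the lowest Fourier frequencies.
spectrum = np.fft.rfft(fitness)
cutoff = 8  # number of retained low-frequency coefficients (assumed)
spectrum[cutoff:] = 0.0
smoothed = np.fft.irfft(spectrum, n=len(x))

def count_local_minima(y):
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))

print(count_local_minima(fitness), count_local_minima(smoothed))
print(abs(x[np.argmin(smoothed)]))  # surrogate minimum near the true x = 0
```

The filtered landscape has far fewer local minima while its minimizer stays close to the true global optimum, which is exactly what makes the preliminary exploration on the surrogate useful.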
12. Single and Multi-Objective Optimization of a Three-Dimensional Unbalanced Split-and-Recombine Micromixer. Micromachines 2019; 10:mi10100711. PMID: 31640175; PMCID: PMC6843656; DOI: 10.3390/mi10100711.
Abstract
The three-dimensional geometry of a micromixer with an asymmetrical split-and-recombine mechanism was optimized to enhance the fluid-mixing capability at a Reynolds number of 20. Single and multi-objective optimizations were carried out by using particle swarm optimization and a genetic algorithm on a modeled surrogate surface. Surrogate modeling was performed using the computational results for the mixing. Mixing and flow analyses were carried out by solving the convection–diffusion equation in combination with the three-dimensional continuity and momentum equations. The optimization was carried out with two design variables related to dimensionless geometric parameters. The mixing effectiveness was chosen as the objective function for the single-objective optimization, and the pressure drop and mixing index at the outlet were chosen for the multi-objective optimization. The sampling points in the design space were determined using a design of experiment technique called Latin hypercube sampling. The surrogates for the objective functions were developed using a Kriging model. The single-objective optimization resulted in 58.9% enhancement of the mixing effectiveness compared to the reference design. The multi-objective optimization provided Pareto-optimal solutions that showed a maximum increase of 48.5% in the mixing index and a maximum decrease of 55.0% in the pressure drop in comparison to the reference design.
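Latin hypercube sampling, used above to pick the design points, can be sketched as follows (20 points in the two design variables; the point count and seed are arbitrary choices, not the study's):

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified sample per row: each dimension's [0,1) range is
    split into n_samples equal bins and each bin is used exactly once."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

# Two design variables, as in the abstract; 20 sampling points.
X = latin_hypercube(20, 2, rng)

# Every one of the 20 bins in each dimension contains exactly one point.
bins = np.floor(X * 20).astype(int)
print(sorted(bins[:, 0]) == list(range(20)), sorted(bins[:, 1]) == list(range(20)))
```

The stratification guarantees coverage of each variable's full range with few samples, which is why LHS is a common design-of-experiment choice before fitting a Kriging surrogate.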
13. Parameter Estimation with Data-Driven Nonparametric Likelihood Functions. Entropy 2019; 21:e21060559. PMID: 33267273; PMCID: PMC7515048; DOI: 10.3390/e21060559.
Abstract
In this paper, we consider a surrogate modeling approach using a data-driven nonparametric likelihood function constructed on a manifold on which the data lie (or to which they are close). The proposed method represents the likelihood function using a spectral expansion formulation known as the kernel embedding of the conditional distribution. To respect the geometry of the data, we employ this spectral expansion using a set of data-driven basis functions obtained from the diffusion maps algorithm. The theoretical error estimate suggests that the error bound of the approximate data-driven likelihood function is independent of the variance of the basis functions, which allows us to determine the amount of training data for accurate likelihood function estimations. Supporting numerical results to demonstrate the robustness of the data-driven likelihood functions for parameter estimation are given on instructive examples involving stochastic and deterministic differential equations. When the dimension of the data manifold is strictly less than the dimension of the ambient space, we found that the proposed approach (which does not require the knowledge of the data manifold) is superior compared to likelihood functions constructed using standard parametric basis functions defined on the ambient coordinates. In an example where the data manifold is not smooth and unknown, the proposed method is more robust compared to an existing polynomial chaos surrogate model which assumes a parametric likelihood, the non-intrusive spectral projection. In fact, the estimation accuracy is comparable to direct MCMC estimates with only eight likelihood function evaluations that can be done offline as opposed to 4000 sequential function evaluations, whenever direct MCMC can be performed. A robust accurate estimation is also found using a likelihood function trained on statistical averages of the chaotic 40-dimensional Lorenz-96 model on a wide parameter domain.
14. Johnson Cook Material and Failure Model Parameters Estimation of AISI-1045 Medium Carbon Steel for Metal Forming Applications. Materials 2019; 12:ma12040609. PMID: 30781637; PMCID: PMC6416717; DOI: 10.3390/ma12040609.
Abstract
Consistent and reasonable characterization of the material behavior under the coupled effects of strain, strain rate, and temperature on the material flow stress is crucial in order to design and optimize the process parameters in metal forming industrial practice. The objective of this work was to formulate an appropriate flow stress model to characterize the flow behavior of AISI-1045 medium carbon steel over a practical range of deformation temperatures (650–950 °C) and strain rates (0.05–1.0 s⁻¹). Subsequently, the Johnson-Cook flow stress model was adopted for modeling and predicting the material flow behavior at elevated temperatures. Furthermore, surrogate models were developed based on the constitutive relations, and the model constants were estimated using the experimental results. As a result, the constitutive flow stress model was formed, and the constructed model was examined systematically against experimental data by both numerical and graphical validations. In addition, to predict the material damage behavior, the failure model proposed by Johnson and Cook was used, and to determine the model parameters, seven different specimens, including flat, smooth round bar, and pre-notched specimens, were tested at room temperature under quasi-static strain rate conditions. The results show that the developed model overpredicts the material behavior at low temperature for all strain rates. Overall, however, the developed model produces a fairly accurate and precise estimation of the flow behavior, with good correlation to the experimental data under high-temperature conditions. Furthermore, the damage model parameters estimated in this research can be used in metal forming simulations, and valuable prediction results for the work material can be achieved.
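The Johnson-Cook flow stress relation referred to above has the standard form σ = (A + Bεⁿ)(1 + C ln ε̇*)(1 − T*ᵐ); the sketch below evaluates it with placeholder constants (illustrative values, not the parameters fitted in this work):

```python
import numpy as np

# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
# with T* = (T - T_room) / (T_melt - T_room).
# All constants below are illustrative placeholders, not this paper's fit.
def johnson_cook(eps, rate, T, A=553.0, B=601.0, n=0.234, C=0.013,
                 m=1.0, rate0=1.0, T_room=25.0, T_melt=1460.0):
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps**n) * (1 + C * np.log(rate / rate0)) * (1 - T_star**m)

# Strain hardening and thermal softening, the two trends the model encodes:
print(johnson_cook(0.2, 0.5, 800.0))  # mid-range conditions
```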
15.
Abstract
This article deals with Gaussian process surrogate models for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES): several already existing models and two recently proposed by the authors are presented. The work discusses different variants of surrogate model exploitation and focuses on the benefits of employing the Gaussian process uncertainty prediction, especially during the selection of points for evaluation with the surrogate model. The experimental part of the article thoroughly compares and evaluates the five presented Gaussian process surrogates and six other state-of-the-art optimizers on the COCO benchmarks. The algorithm presented in most detail, DTS-CMA-ES, which combines cheap surrogate-model predictions with objective function evaluations in every iteration, is shown to approach the function optimum at least as fast as, and often faster than, the state-of-the-art black-box optimizers for budgets of roughly 25-100 function evaluations per dimension; in 10- and lower-dimensional spaces this holds even for 25-250 evaluations per dimension.
16. Cost–Benefit Optimization of Structural Health Monitoring Sensor Networks. Sensors 2018; 18:s18072174. PMID: 29986433; PMCID: PMC6068495; DOI: 10.3390/s18072174.
Abstract
Structural health monitoring (SHM) allows the acquisition of information on the structural integrity of any mechanical system by processing data, measured through a set of sensors, in order to estimate relevant mechanical parameters and indicators of performance. Herein we present a method to perform the cost–benefit optimization of a sensor network by defining the density, type, and positioning of the sensors to be deployed. The effectiveness (benefit) of an SHM system may be quantified by means of information theory, namely through the expected Shannon information gain provided by the measured data, which allows the inherent uncertainties of the experimental process (i.e., those associated with the prediction error and the parameters to be estimated) to be accounted for. In order to evaluate the computationally expensive Monte Carlo estimator of the objective function, a framework comprising surrogate models (polynomial chaos expansion), model order reduction methods (principal component analysis), and stochastic optimization methods is introduced. Two optimization strategies are proposed: the maximization of the information provided by the measured data, given the technological, identifiability, and budgetary constraints; and the maximization of the information–cost ratio. The application of the framework to a large-scale structural problem, the Pirelli tower in Milan, is presented, and the two comprehensive optimization methods are compared.
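The polynomial chaos expansion step can be sketched in one dimension; the test function, polynomial order, and sample count below are illustrative, showing how statistical moments follow directly from the expansion coefficients without further sampling:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative expensive model of one standard-normal input
# (an invented stand-in for the structural model).
g = lambda x: np.exp(0.3 * x)
xs = rng.standard_normal(500)
ys = g(xs)

# Degree-3 PCE in probabilists' Hermite polynomials He_0..He_3,
# fitted by least-squares regression on the 500 model runs.
def hermite(x):
    return np.column_stack([np.ones_like(x), x, x**2 - 1.0, x**3 - 3.0 * x])

c, *_ = np.linalg.lstsq(hermite(xs), ys, rcond=None)

# Moments come directly from the coefficients: E[He_k He_j] = k! * delta_kj.
mean_pce = c[0]
var_pce = c[1]**2 * 1.0 + c[2]**2 * 2.0 + c[3]**2 * 6.0
print(f"PCE mean {mean_pce:.4f}, variance {var_pce:.4f}")
```

Replacing the model with such an expansion is what makes the otherwise expensive Monte Carlo estimator of the objective function affordable inside an optimization loop.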
17. Data-Efficient Design Exploration through Surrogate-Assisted Illumination. Evolutionary Computation 2018; 26:381-410. PMID: 29883202; DOI: 10.1162/evco_a_00231.
Abstract
Design optimization techniques are often used at the beginning of the design process to explore the space of possible designs. In these domains illumination algorithms, such as MAP-Elites, are promising alternatives to classic optimization algorithms because they produce diverse, high-quality solutions in a single run, instead of only a single near-optimal solution. Unfortunately, these algorithms currently require a large number of function evaluations, limiting their applicability. In this article, we introduce a new illumination algorithm, Surrogate-Assisted Illumination (SAIL), that leverages surrogate modeling techniques to create a map of the design space according to user-defined features while minimizing the number of fitness evaluations. On a two-dimensional airfoil optimization problem, SAIL produces hundreds of diverse but high-performing designs with several orders of magnitude fewer evaluations than MAP-Elites or CMA-ES. We demonstrate that SAIL is also capable of producing maps of high-performing designs in realistic three-dimensional aerodynamic tasks with an accurate flow simulation. Data-efficient design exploration with SAIL can help designers understand what is possible, beyond what is optimal, by considering more than pure objective-based optimization.
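A minimal MAP-Elites loop (without the surrogate layer that SAIL adds on top) might look like the sketch below; the fitness, the single feature dimension, and the evaluation budget are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented 2-D toy problem: a fitness to maximize plus one user-defined
# feature (here simply x[0]) that spans the map's behavioral dimension.
fitness = lambda x: -np.sum((x - 0.5) ** 2)
feature = lambda x: x[0]

n_bins = 10
archive_x = [None] * n_bins      # best solution found per feature bin
archive_f = [-np.inf] * n_bins   # and its fitness

# MAP-Elites: bootstrap randomly, then mutate random elites and keep
# the best solution discovered in each feature bin.
for it in range(2000):
    if it < 100:
        x = rng.random(2)
    else:
        parents = [v for v in archive_x if v is not None]
        x = np.clip(parents[rng.integers(len(parents))]
                    + 0.1 * rng.standard_normal(2), 0.0, 1.0)
    b = min(int(feature(x) * n_bins), n_bins - 1)
    if fitness(x) > archive_f[b]:
        archive_x[b], archive_f[b] = x, fitness(x)

filled = sum(v is not None for v in archive_x)
print(filled, f"{max(archive_f):.4f}")
```

The result is a map of diverse elites rather than a single optimum; SAIL's contribution is to route most of these evaluations through a surrogate so the real fitness is queried far less often.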
18. Multi-Objective Optimizations of a Serpentine Micromixer with Crossing Channels at Low and High Reynolds Numbers. Micromachines 2018; 9:mi9030110. PMID: 30424044; PMCID: PMC6187566; DOI: 10.3390/mi9030110.
Abstract
In order to maximize the mixing performance of a micromixer with an integrated three-dimensional serpentine and split-and-recombination configuration, multi-objective optimizations were performed at two different Reynolds numbers, 1 and 120, based on numerical simulation. Numerical analyses of fluid flow and mixing in the micromixer were performed using the three-dimensional Navier-Stokes equations and the convection-diffusion equation. Three dimensionless design variables related to the geometry of the micromixer were selected as design variables for optimization. A parametric study was carried out to explore the effects of the design variables on the objective functions: the mixing index at the exit and the pressure drop through the micromixer. The Latin hypercube sampling method was used as a design-of-experiment technique to select design points in the design space. Surrogate modeling of the objective functions was performed using a radial basis neural network. Concave Pareto-optimal curves comprising Pareto-optimal solutions that represent the trade-off between the objective functions were obtained using a multi-objective genetic algorithm at Re = 1 and 120. Through the optimizations, maximum enhancements of 18.8% and 6.0% in the mixing index were achieved at Re = 1 and 120, respectively.