176. Bhattacharya P, Altai Z, Qasim M, Viceconti M. A multiscale model to predict current absolute risk of femoral fracture in a postmenopausal population. Biomech Model Mechanobiol 2019;18:301-318. PMID: 30276488; PMCID: PMC6418062; DOI: 10.1007/s10237-018-1081-0.
Abstract
Osteoporotic hip fractures are a major healthcare problem. Fall severity and bone strength are important risk factors for hip fracture. This study aims to obtain a mechanistic explanation for fracture risk as a function of these risk factors. A novel modelling approach is developed that combines models at different scales to overcome the challenge of a large space-time domain of interest, and that accounts for the variability of impact forces between potential falls in a subject. The multiscale model and its component models are verified with respect to the numerical approximations made therein, the propagation of measurement uncertainties in model inputs is quantified, and model predictions are validated against experimental and clinical data. The main results are as follows: the model-predicted absolute risk of current fracture (ARF0) ranged from 1.93% to 81.6% (median 36.1%) for subjects in a retrospective cohort of 98 postmenopausal British women (49 fracture cases and 49 controls); ARF0 was computed to a precision of 1.92 percentage points (pp) with respect to the numerical approximations made in the model; ARF0 carried an uncertainty of 4.00 pp due to uncertainties in measuring model inputs; and ARF0 classified observed fracture status in the above cohort with AUC = 0.852 (95% CI 0.753-0.918), 77.6% specificity (95% CI 63.4-86.5%) and 81.6% sensitivity (95% CI 68.3-91.1%). These results demonstrate that ARF0 can be computed with sufficient precision to distinguish between subjects, and that the novel mechanism of fracture risk determination based on fall dynamics, hip impact and bone strength can be considered validated.
177. Gadd C, Xing W, Nezhad MM, Shah AA. A Surrogate Modelling Approach Based on Nonlinear Dimension Reduction for Uncertainty Quantification in Groundwater Flow Models. Transp Porous Media 2019;126:39-77. PMID: 30872876; PMCID: PMC6390720; DOI: 10.1007/s11242-018-1065-7.
Abstract
In this paper, we develop a surrogate modelling approach for capturing the output field (e.g. the pressure head) from groundwater flow models involving a stochastic input field (e.g. the hydraulic conductivity). We use a Karhunen–Loève expansion for a log-normally distributed input field and apply manifold learning (local tangent space alignment) to perform Gaussian process Bayesian inference using Hamiltonian Monte Carlo in an abstract feature space, yielding outputs for arbitrary unseen inputs. We also develop a framework for forward uncertainty quantification in such problems, including analytical approximations of the mean of the marginalized distribution (with respect to the inputs). To sample from the distribution, we present a Monte Carlo approach. Two examples demonstrate the accuracy of our approach: a Darcy flow model with contaminant transport in 2-d and a Richards equation model in 3-d.
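The input-sampling step of such approaches can be illustrated with a short sketch. The following is a minimal, generic example (not the authors' implementation) of drawing a log-normally distributed conductivity field on a 1-D grid via a truncated Karhunen–Loève expansion; the grid, correlation length, and exponential covariance are illustrative assumptions:

```python
import numpy as np

def sample_log_normal_field(x, corr_len=0.2, sigma=1.0, n_terms=20, seed=None):
    """Draw exp(g(x)), where g is a zero-mean Gaussian field with an
    exponential covariance, via a truncated Karhunen-Loeve expansion."""
    rng = np.random.default_rng(seed)
    # Covariance matrix of the underlying Gaussian field on the grid
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Discrete KL modes are the eigenpairs of the covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1][:n_terms]   # keep the largest modes
    lam, phi = eigvals[order], eigvecs[:, order]
    xi = rng.standard_normal(n_terms)             # iid N(0,1) KL coefficients
    g = phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
    return np.exp(g)                              # strictly positive field

x = np.linspace(0.0, 1.0, 100)
k = sample_log_normal_field(x, seed=0)            # one realization
```

Surrogate approaches of this kind then treat the KL coefficients `xi` as the low-dimensional stochastic input that the emulator maps to the output field.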
178. Sahli Costabal F, Choy JS, Sack KL, Guccione JM, Kassab GS, Kuhl E. Multiscale characterization of heart failure. Acta Biomater 2019;86:66-76. PMID: 30630123; DOI: 10.1016/j.actbio.2018.12.053.
Abstract
Dilated cardiomyopathy is a progressive, irreversible disease associated with contractile dysfunction and heart failure. During dilated cardiomyopathy, elevated diastolic wall strains trigger mechanotransduction pathways that initiate the addition of sarcomeres in series and an overall increase in myocyte length. At the whole-organ level, this results in chronic dilation of the ventricles, an increase in end-diastolic and end-systolic volumes, and a decrease in ejection fraction. However, how exactly changes in sarcomere number translate into changes in myocyte morphology, and how these cellular changes translate into ventricular dilation, remains incompletely understood. Here we combined a chronic animal study, continuum growth modeling, and machine learning to quantify correlations between sarcomere dynamics, myocyte morphology, and ventricular dilation. In an eight-week-long volume overload study of six pigs, we found that the average sarcomere number increased by +3.8%/week, from 47 to 62, resulting in myocyte lengthening of +3.3%/week, from 85 to 108 μm, while the sarcomere length and myocyte width remained unchanged. At the same time, the average end-diastolic volume increased by +6.0%/week. Using continuum growth modeling and Bayesian inference, we correlated alterations on the subcellular, cellular, and organ scales and found that the serial sarcomere number explained 88% of myocyte lengthening, which, in turn, explained 54% of cardiac dilation. Our results demonstrate that sarcomere number and myocyte length are closely correlated and constitute the major determinants of dilated heart failure. We anticipate our study to be a starting point for more sophisticated multiscale models of heart failure. Our study suggests that altering sarcomere turnover, and with it myocyte morphology and ventricular dimensions, could be a potential therapeutic target to attenuate or reverse the progression of heart failure.
STATEMENT OF SIGNIFICANCE: Heart failure is a significant global health problem that affects more than 25 million people worldwide and increases in prevalence as the population ages. Heart failure has been studied extensively at various scales; yet, there is no compelling concept to connect knowledge from the subcellular, cellular, and organ levels across the scales. Here we combined a chronic animal study, continuum growth modeling, and machine learning to quantify correlations between sarcomere dynamics, myocyte morphology, and ventricular dilation. We found that the serial sarcomere number explained 88% of myocyte lengthening, which, in turn, explained 54% of cardiac dilation. Our results show that sarcomere number and myocyte length are closely correlated and constitute the major determinants of dilated heart failure. This suggests that altering sarcomere turnover, and with it myocyte morphology and ventricular dimensions, could be a potential therapeutic target to attenuate or reverse heart failure.
179. Tran JS, Schiavazzi DE, Kahn AM, Marsden AL. Uncertainty quantification of simulated biomechanical stimuli in coronary artery bypass grafts. Comput Methods Appl Mech Eng 2019;345:402-428. PMID: 31223175; PMCID: PMC6586227; DOI: 10.1016/j.cma.2018.10.024.
Abstract
Coronary artery bypass graft surgery (CABG) is performed on more than 400,000 patients annually in the U.S. However, saphenous vein grafts (SVGs) implanted during CABG exhibit poor patency compared to arterial grafts, with failure rates up to 40% within 10 years after surgery. Differences in mechanical stimuli are known to play a role in driving maladaptation and have been correlated with endothelial damage and thrombus formation. As these quantities are difficult to measure in vivo, multi-scale coronary models offer a way to quantify them while accounting for complex coronary physiology. However, prior studies have primarily focused on deterministic evaluations, without reporting the variability in model predictions due to parameter uncertainty. This study aims to assess confidence in multi-scale predictions of wall shear stress and wall strain while accounting for uncertainty in peripheral hemodynamics and material properties. Boundary condition distributions are computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness are obtained through approximation by a random field. We developed a stochastic submodeling approach that mitigates the computational burden of repeated multi-scale model evaluations by focusing exclusively on the bypass grafts. This produces a two-level decomposition of quantities of interest into submodel contributions and full-model/submodel discrepancies. We leverage these two levels in the context of forward uncertainty propagation using a previously proposed multi-resolution approach. The time- and space-averaged wall shear stress is well estimated, with a coefficient of variation of <35%, but ignorance about the spatial distribution of the wall elastic modulus and thickness leads to large variations in an objective measure of wall strain, with coefficients of variation up to 100%. Sensitivity analysis reveals how interactions between the flow and material parameters contribute to output variability.
180. Roosa K, Chowell G. Assessing parameter identifiability in compartmental dynamic models using a computational approach: application to infectious disease transmission models. Theor Biol Med Model 2019;16:1. PMID: 30642334; PMCID: PMC6332839; DOI: 10.1186/s12976-018-0097-6.
Abstract
Background: Mathematical modeling is now frequently used in outbreak investigations to understand the underlying mechanisms of infectious disease dynamics, assess patterns in epidemiological data, and forecast the trajectory of epidemics. However, the successful application of mathematical models to guide public health interventions depends on the ability to reliably estimate model parameters and their corresponding uncertainty. Here, we present and illustrate a simple computational method for assessing parameter identifiability in compartmental epidemic models. Methods: We describe a parametric bootstrap approach that generates simulated data from dynamical systems to quantify parameter uncertainty and identifiability. We calculate confidence intervals and the mean squared error of the estimated parameter distributions to assess identifiability. To demonstrate this approach, we begin with a low-complexity SEIR model and work through examples of increasingly complex compartmental models that correspond to applications to pandemic influenza, Ebola, and Zika. Results: Overall, parameter identifiability issues are more likely to arise with more complex models (based on the number of equations/states and parameters). As the number of parameters being jointly estimated increases, the uncertainty surrounding the estimated parameters tends to increase as well. We found that R0 is often robust to parameter identifiability issues affecting individual parameters in the model: despite large confidence intervals and higher mean squared errors for other individual model parameters, R0 can still be estimated with precision and accuracy. Conclusions: Because public health policies can be influenced by the results of mathematical modeling studies, it is important to conduct parameter identifiability analyses prior to fitting models to available data and to report parameter estimates with quantified uncertainty.
The method described is helpful in these regards and enhances the essential toolkit for conducting model-based inferences using compartmental dynamic models.
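The parametric bootstrap described above can be sketched in a few lines. This toy example uses an SIR rather than SEIR model and an illustrative grid-search fit (all parameter values, grids, and the reporting scale are assumptions, not taken from the paper): the model is refit to Poisson-perturbed synthetic incidence data, and percentile confidence intervals are read off the refitted estimates.

```python
import numpy as np

def sir_incidence(beta, gamma, n_days, s0=0.99, i0=0.01, dt=0.5):
    """Daily new-case fractions from a simple SIR model (Euler stepping)."""
    s, i, daily = s0, i0, []
    for _ in range(n_days):
        new = 0.0
        for _ in range(int(1 / dt)):
            inf = beta * s * i * dt
            s -= inf
            i += inf - gamma * i * dt
            new += inf
        daily.append(new)
    return np.array(daily)

def fit(data, betas, gammas, n_days):
    """Least-squares fit of (beta, gamma) over a coarse parameter grid."""
    grid = [(b, g) for b in betas for g in gammas]
    errs = [np.sum((sir_incidence(b, g, n_days) - data) ** 2) for b, g in grid]
    return grid[int(np.argmin(errs))]

rng = np.random.default_rng(1)
n_days, scale = 30, 1000.0                 # scale turns fractions into counts
truth = sir_incidence(0.5, 0.2, n_days)    # "best-fit" trajectory
betas = np.linspace(0.3, 0.7, 9)
gammas = np.linspace(0.1, 0.3, 5)
# Parametric bootstrap: refit to Poisson-perturbed synthetic datasets
boot = [fit(rng.poisson(truth * scale) / scale, betas, gammas, n_days)
        for _ in range(20)]
beta_ci = np.percentile([b for b, _ in boot], [2.5, 97.5])
```

The spread of the refitted estimates around the generating values is the identifiability diagnostic: wide or boundary-hitting intervals flag weakly identified parameters.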
181. Gashi K, Bosboom EMH, van de Vosse FN. The influence of model order reduction on the computed fractional flow reserve using parameterized coronary geometries. J Biomech 2019;82:313-323. PMID: 30471791; DOI: 10.1016/j.jbiomech.2018.11.008.
Abstract
Computational fluid dynamics (CFD) models combined with patient-specific imaging data are used to non-invasively predict the functional significance of coronary lesions. This approach to predicting the fractional flow reserve (FFR) has been shown to have a high diagnostic accuracy when compared against invasively measured FFR. However, one of its main drawbacks is the high computational effort needed for preprocessing and computation, so that uncertainty quantification may become infeasible. Reduction of complexity is therefore desirable: computationally inexpensive models with high diagnostic accuracy are preferred. We present a parametric comparison study of three types of CFD models (2D axisymmetric, semi-3D and 3D) in which we examine the impact of model reduction on the predicted FFR. In total, 200 coronary geometries were generated by varying seven geometrical characteristics, e.g. stenosis severity, stenosis length and vessel curvature. The effect of time-averaging the flow was investigated using unsteady, mean steady and root-mean-square (RMS) steady flows, with the 3D unsteady model serving as the reference. Results show that with an unsteady or RMS flow, the predicted FFR hardly varies between models, contrary to using average flows. The 2D model with RMS flow has a high diagnostic accuracy (0.99), reduces computational time by a factor of 162,000, and introduces a model error well below clinically relevant differences. Stenosis severity, length, curvature and tapering cause most discrepancies when using a lower-order model. An uncertainty analysis showed that this can be explained by the low variability caused by variations in stenosis asymmetry.
182. Nemmers T, Narayan A, Banerjee S. Bayesian modeling and uncertainty quantification for descriptive social networks. Stat Interface 2019;12:181-191. PMID: 30662582; PMCID: PMC6335039; DOI: 10.4310/sii.2019.v12.n1.a15.
Abstract
This article presents a simple and easily implementable Bayesian approach to model and quantify uncertainty in small descriptive social networks. While statistical methods for analyzing networks have seen burgeoning activity over the last decade or so, ranging from the social sciences to genetics, such methods usually involve sophisticated stochastic models whose estimation requires substantial structure and information in the networks. At the other end of the analytic spectrum, there are purely descriptive methods based upon quantities and axioms in computational graph theory. In social networks, popular descriptive measures include, but are not limited to, the so-called Krackhardt axioms. Another approach, recently gaining attention, is the use of PageRank algorithms. While these descriptive approaches provide insight into networks with limited information, including small networks, there is, as yet, little research detailing a statistical approach for small networks. This article aims to contribute at the interface of Bayesian statistical inference and social network analysis by offering practicing social scientists a relatively straightforward Bayesian approach to account for uncertainty while conducting descriptive social network analysis. The emphasis is on computational feasibility and easy implementation using existing R packages, such as sna and rjags, available from the Comprehensive R Archive Network (https://cran.r-project.org/). We analyze a network comprising 18 websites from the US and UK to discern transnational identities, previously analyzed using descriptive graph theory with no uncertainty quantification, using fully Bayesian model-based inference.
183. Chen Y, Chang H, Meng J, Zhang D. Ensemble Neural Networks (ENN): A gradient-free stochastic method. Neural Netw 2018;110:170-185. PMID: 30562650; DOI: 10.1016/j.neunet.2018.11.009.
Abstract
In this study, an efficient stochastic gradient-free method, ensemble neural networks (ENN), is developed. In the ENN, the optimization process relies on covariance matrices rather than derivatives. The covariance matrices are calculated by the ensemble randomized maximum likelihood algorithm (EnRML), an inverse modeling method. The ENN is able to simultaneously provide estimates and perform uncertainty quantification, since it is built within a Bayesian framework. The ENN is also robust to small training data sizes, because the ensemble of stochastic realizations essentially enlarges the training dataset. This is a desirable characteristic, especially for real-world engineering applications. In addition, the ENN does not require the calculation of gradients, which enables the use of complicated neuron models and loss functions in neural networks. We experimentally demonstrate the benefits of the proposed model, in particular showing that the ENN performs much better than traditional Bayesian neural networks (BNN). The EnRML in ENN is a substitute for gradient-based optimization algorithms, which means that it can be directly combined with the feed-forward process in other existing (deep) neural networks, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), broadening future applications of the ENN.
184. Fossan FE, Sturdy J, Müller LO, Strand A, Bråten AT, Jørgensen A, Wiseth R, Hellevik LR. Uncertainty Quantification and Sensitivity Analysis for Computational FFR Estimation in Stable Coronary Artery Disease. Cardiovasc Eng Technol 2018;9:597-622. PMID: 30382522; DOI: 10.1007/s13239-018-00388-w.
Abstract
PURPOSE The main objectives of this study are to validate a reduced-order model for the estimation of the fractional flow reserve (FFR) index based on blood flow simulations that incorporate clinical imaging and patient-specific characteristics, and to assess the uncertainty of FFR predictions with respect to input data on a per patient basis. METHODS We consider 13 patients with symptoms of stable coronary artery disease for which 24 invasive FFR measurements are available. We perform an extensive sensitivity analysis on the parameters related to the construction of a reduced-order (hybrid 1D-0D) model for FFR predictions. Next we define an optimal setting by comparing reduced-order model predictions with solutions based on the 3D incompressible Navier-Stokes equations. Finally, we characterize prediction uncertainty with respect to input data and identify the most influential inputs by means of sensitivity analysis. RESULTS Agreement between FFR computed by the reduced-order model and by the full 3D model was satisfactory, with a bias ([Formula: see text]) of [Formula: see text] at the 24 measured locations. Moreover, the uncertainty related to the factor by which peripheral resistance is reduced from baseline to hyperemic conditions proved to be the most influential parameter for FFR predictions, whereas uncertainty in stenosis geometry had greater effect in cases with low FFR. CONCLUSION Model errors related to solving a simplified reduced-order model rather than a full 3D problem were small compared with uncertainty related to input data. Improved measurement of coronary blood flow has the potential to reduce uncertainty in computational FFR predictions significantly.
185. Real-World Variability in the Prediction of Intracranial Aneurysm Wall Shear Stress: The 2015 International Aneurysm CFD Challenge. Cardiovasc Eng Technol 2018;9:544-564. PMID: 30203115; PMCID: PMC6290689; DOI: 10.1007/s13239-018-00374-2.
Abstract
Purpose Image-based computational fluid dynamics (CFD) is widely used to predict intracranial aneurysm wall shear stress (WSS), particularly with the goal of improving rupture risk assessment. Nevertheless, concern has been expressed over the variability of predicted WSS and inconsistent associations with rupture. Previous challenges, and studies from individual groups, have focused on individual aspects of the image-based CFD pipeline. The aim of this Challenge was to quantify the total variability of the whole pipeline. Methods 3D rotational angiography image volumes of five middle cerebral artery aneurysms were provided to participants, who were free to choose their segmentation methods, boundary conditions, and CFD solver and settings. Participants were asked to fill out a questionnaire about their solution strategies and experience with aneurysm CFD, and provide surface distributions of WSS magnitude, from which we objectively derived a variety of hemodynamic parameters. Results A total of 28 datasets were submitted, from 26 teams with varying levels of self-assessed experience. Wide variability of segmentations, CFD model extents, and inflow rates resulted in interquartile ranges of sac average WSS up to 56%, which reduced to < 30% after normalizing by parent artery WSS. Sac-maximum WSS and low shear area were more variable, while rank-ordering of cases by low or high shear showed only modest consensus among teams. Experience was not a significant predictor of variability. Conclusions Wide variability exists in the prediction of intracranial aneurysm WSS. While segmentation and CFD solver techniques may be difficult to standardize across groups, our findings suggest that some of the variability in image-based CFD could be reduced by establishing guidelines for model extents, inflow rates, and blood properties, and by encouraging the reporting of normalized hemodynamic parameters.
186. Nikishova A, Veen L, Zun P, Hoekstra AG. Uncertainty Quantification of a Multiscale Model for In-Stent Restenosis. Cardiovasc Eng Technol 2018;9:761-774. PMID: 30136082; PMCID: PMC6290695; DOI: 10.1007/s13239-018-00372-4.
Abstract
Purpose Coronary artery stenosis, or abnormal narrowing, is a widespread and potentially fatal cardiac disease. After treatment by balloon angioplasty and stenting, restenosis may occur inside the stent due to excessive neointima formation. Simulations of in-stent restenosis can provide new insight into this process. However, uncertainties due to variability in patient-specific parameters must be taken into account. Methods We performed an uncertainty quantification (UQ) study on a complex two-dimensional in-stent restenosis model. We used a quasi-Monte Carlo method for UQ of the neointimal area, and the Sobol sensitivity analysis (SA) to estimate the proportions of aleatory and epistemic uncertainties and to determine the most important input parameters. Results We observe approximately 30% uncertainty in the mean neointimal area as simulated by the model. Depending on whether a fast initial endothelium recovery occurs, the proportion of the model variance due to natural variability ranges from 15 to 35%. The endothelium regeneration time is identified as the most influential model parameter. Conclusion The model output contains a moderate quantity of uncertainty, and the model precision can be increased by obtaining a more certain value on the endothelium regeneration time. We conclude that the quasi-Monte Carlo UQ and the Sobol SA are reliable methods for estimating uncertainties in the response of complicated multiscale cardiovascular models.
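First-order Sobol indices of the kind used in such sensitivity analyses can be estimated with a pick-freeze Monte Carlo scheme. A minimal, self-contained sketch (the toy model and sample size are illustrative assumptions, not the paper's restenosis model):

```python
import numpy as np

def sobol_first_order(model, n_dim, n_samp=4096, seed=None):
    """Saltelli-style pick-freeze estimates of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samp, n_dim))          # two independent input matrices
    B = rng.random((n_samp, n_dim))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(n_dim)
    for i in range(n_dim):
        AB = A.copy()
        AB[:, i] = B[:, i]                   # freeze every input except x_i
        # E[f_B (f_AB - f_A)] estimates the variance explained by x_i alone
        S[i] = np.mean(fB * (model(AB) - fA)) / var
    return S

# Toy model: x0 dominates, x1 is minor, x2 is inert
f = lambda x: 4.0 * x[:, 0] + x[:, 1]
S = sobol_first_order(f, 3, seed=0)          # roughly [0.94, 0.06, 0.0]
```

Grouping inputs by origin and comparing their summed indices is one way to split output variance into aleatory and epistemic parts, in the spirit of the analysis above.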
187. Marquis AD, Arnold A, Dean-Bernhoft C, Carlson BE, Olufsen MS. Practical identifiability and uncertainty quantification of a pulsatile cardiovascular model. Math Biosci 2018;304:9-24. PMID: 30017910; DOI: 10.1016/j.mbs.2018.07.001.
Abstract
Mathematical models are essential tools for studying how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model output). In this study we present a novel workflow to calibrate a lumped-parameter model to left ventricular pressure and volume time-series data. Key steps include (1) using literature and available data to determine nominal parameter values; (2) using sensitivity analysis and subset selection to determine a set of identifiable parameters; (3) optimizing to find a point estimate for the identifiable parameters; and (4) performing frequentist and Bayesian UQ calculations to assess the predictive capability of the model. Our results show that it is possible to determine five identifiable model parameters that can be estimated from our experimental data from three rats, and that the computed UQ intervals capture the measurement and model error.
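Step (2) of such workflows often inspects the columns of a sensitivity matrix: parameters whose sensitivities are (nearly) linearly dependent cannot be estimated jointly. A small illustrative sketch with a deliberately non-identifiable pair (only the product a*b affects the output; the toy model and step size are assumptions, not the paper's cardiovascular model):

```python
import numpy as np

def sensitivity_matrix(model, theta, t, h=1e-6):
    """Finite-difference relative sensitivities, d y / d log(theta_j)."""
    y0 = model(theta, t)
    S = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        tp = theta.copy()
        tp[j] *= 1.0 + h                  # relative perturbation of theta_j
        S[:, j] = (model(tp, t) - y0) / h
    return S

# Toy output: only the product a*b and the rate c are identifiable
model = lambda th, t: th[0] * th[1] * np.exp(-th[2] * t)
t = np.linspace(0.0, 5.0, 50)
S = sensitivity_matrix(model, np.array([2.0, 3.0, 0.5]), t)
# Perfectly correlated sensitivity columns flag the dependent pair (a, b)
corr = np.corrcoef(S.T)
```

Here `corr[0, 1]` equals 1 to machine precision, so subset selection would retain only one of the pair (or their product) before optimization.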
188. Dhamala J, Arevalo HJ, Sapp J, Horácek BM, Wu KC, Trayanova NA, Wang L. Quantifying the uncertainty in model parameters using Gaussian process-based Markov chain Monte Carlo in cardiac electrophysiology. Med Image Anal 2018;48:43-57. PMID: 29843078; DOI: 10.1016/j.media.2018.05.007.
Abstract
Model personalization requires the estimation of patient-specific tissue properties in the form of model parameters from indirect and sparse measurement data. Moreover, a low-dimensional representation of the parameter space is needed, which often has a limited ability to reveal the underlying tissue heterogeneity. As a result, significant uncertainty can be associated with the estimated values of the model parameters which, if left unquantified, will lead to unknown variability in model outputs that will hinder their reliable clinical adoption. Probabilistic estimation of model parameters, however, remains an unresolved challenge. Direct Markov Chain Monte Carlo (MCMC) sampling of the posterior distribution function (pdf) of the parameters is infeasible because it involves repeated evaluations of the computationally expensive simulation model. To accelerate this inference, one popular approach is to construct a computationally efficient surrogate and sample from this approximation. However, by sampling from an approximation, efficiency is gained at the expense of sampling accuracy. In this paper, we address this issue by integrating surrogate modeling of the posterior pdf into accelerating the Metropolis-Hastings (MH) sampling of the exact posterior pdf. It is achieved by two main components: (1) construction of a Gaussian process (GP) surrogate of the exact posterior pdf by actively selecting training points that allow for a good global approximation accuracy with a focus on the regions of high posterior probability; and (2) use of the GP surrogate to improve the proposal distribution in MH sampling, in order to improve the acceptance rate. The presented framework is evaluated in its estimation of the local tissue excitability of a cardiac electrophysiological model in both synthetic data experiments and real data experiments. 
In addition, the obtained posterior distributions of model parameters are interpreted in relation to the factors contributing to parameter uncertainty, including different low-dimensional representations of the parameter space, parameter non-identifiability, and parameter correlations.
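The exactness-preserving use of a surrogate inside MH can be illustrated with a delayed-acceptance variant, a close cousin of the scheme above: a cheap interpolant screens proposals, and a second stage corrects with the exact (expensive) posterior so the chain still targets it. Everything here is an illustrative assumption: the 1-D Gaussian target stands in for an expensive simulation posterior, and a kernel interpolant stands in for a GP mean.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(x):                    # stand-in for an expensive posterior
    return -0.5 * x**2              # log-density of N(0, 1), up to a constant

# Cheap surrogate of log_post built from a handful of exact evaluations
xs = np.linspace(-3.0, 3.0, 7)
ys = log_post(xs)
def surrogate(x):                   # kernel interpolant in place of a GP mean
    w = np.exp(-0.5 * (x - xs)**2 / 0.8**2)
    return np.sum(w * ys) / np.sum(w)

x, chain, n_exact = 0.0, [], 0
for _ in range(5000):
    prop = x + rng.normal(0.0, 1.0)
    # Stage 1: screen the proposal with the surrogate (no expensive call)
    if np.log(rng.random()) < surrogate(prop) - surrogate(x):
        n_exact += 1                # only survivors pay for exact evaluations
        # Stage 2: correct for surrogate error; the chain targets log_post
        a = (log_post(prop) - log_post(x)) - (surrogate(prop) - surrogate(x))
        if np.log(rng.random()) < a:
            x = prop
    chain.append(x)
chain = np.array(chain)
```

The second-stage ratio cancels the surrogate exactly, so a poor surrogate costs efficiency (more rejections) but never biases the posterior samples.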
189. Warner JE, Norman RB, Blattnig SR. HZETRN radiation transport validation using balloon-based experimental data. Life Sci Space Res 2018;17:23-31. PMID: 29753410; DOI: 10.1016/j.lssr.2018.02.003.
Abstract
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. 
Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
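The scalar summaries reported in this abstract (median relative difference, share of large errors, share of under-predictions) can be sketched as follows; the paired flux values here are hypothetical, not the balloon data:

```python
# Sketch of simple validation summaries for paired model/measurement arrays:
# median |relative difference|, fraction of errors exceeding +/-40%, and
# fraction of signed differences that are negative (model under-predicts).

def relative_differences(model, measured):
    """Signed relative model-measurement differences, (model - data) / data."""
    return [(m - d) / d for m, d in zip(model, measured)]

def summarize(rel):
    """Return (median absolute diff, fraction |diff| > 40%, fraction negative)."""
    n = len(rel)
    abs_sorted = sorted(abs(r) for r in rel)
    if n % 2:
        median_abs = abs_sorted[n // 2]
    else:
        median_abs = 0.5 * (abs_sorted[n // 2 - 1] + abs_sorted[n // 2])
    frac_large = sum(1 for r in rel if abs(r) > 0.40) / n
    frac_under = sum(1 for r in rel if r < 0) / n  # model below measurement
    return median_abs, frac_large, frac_under
```

The paper's interval-based metric additionally accounts for uncertainty in the energy axis; this sketch covers only the flux-axis comparison.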
|
190
|
Breustedt B, Blanchardon E, Castellani CM, Etherington G, Franck D, Giussani A, Hofmann W, Lebacq AL, Li WB, Noßke D, Lopez MA. EURADOS work on internal dosimetry. Ann ICRP 2018; 47:75-82. [PMID: 29664321 DOI: 10.1177/0146645318756232] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
European Radiation Dosimetry Group (EURADOS) Working Group 7 is a network on internal dosimetry that brings together researchers from more than 60 institutions in 21 countries. The work of the group is organised into task groups that focus on different aspects, such as development and implementation of biokinetic models (e.g. for diethylenetriamine penta-acetic acid decorporation therapy), individual monitoring and the dose assessment process, Monte Carlo simulations for internal dosimetry, uncertainties in internal dosimetry, and internal microdosimetry. Several intercomparison exercises and training courses have been organised. The IDEAS guidelines, which describe - based on the International Commission on Radiological Protection's (ICRP) biokinetic models and dose coefficients - a structured approach to the assessment of internal doses from monitoring data, are maintained and updated by the group. In addition, Technical Recommendations for Monitoring Individuals for Occupational Intakes of Radionuclides have been elaborated on behalf of the European Commission, DG-ENER (TECHREC Project, 2014-2016, coordinated by EURADOS). Quality assurance of the ICRP biokinetic models by calculation of retention and excretion functions for different scenarios has been performed and feedback was provided to ICRP. An uncertainty study of the recent caesium biokinetic model quantified the overall uncertainties, and identified the sensitive parameters of the model. A report with guidance on the application of ICRP biokinetic models and dose coefficients is being drafted at present. These and other examples of the group's activities, which complement the work of ICRP, are presented.
|
191
|
Lagergren J, Reeder A, Hamilton F, Smith RC, Flores KB. Forecasting and Uncertainty Quantification Using a Hybrid of Mechanistic and Non-mechanistic Models for an Age-Structured Population Model. Bull Math Biol 2018; 80:1578-1595. [PMID: 29611108 DOI: 10.1007/s11538-018-0421-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 03/23/2018] [Indexed: 11/30/2022]
Abstract
In this paper, we present a new method for the prediction and uncertainty quantification of data-driven multivariate systems. Traditionally, either mechanistic or non-mechanistic modeling methodologies have been used for prediction; however, it is uncommon for the two to be incorporated together. We compare the forecast accuracy of mechanistic modeling, using Bayesian inference, a non-mechanistic modeling approach based on state space reconstruction, and a novel hybrid methodology composed of the two for an age-structured population data set. The data come from cannibalistic flour beetles, in which adults preying on the eggs and pupae produces non-equilibrium population dynamics. Uncertainty quantification methods for the hybrid models are outlined and illustrated for these data. We perform an analysis of the results from Bayesian inference for the mechanistic model and hybrid models to suggest reasons why hybrid modeling methodology may enable more accurate forecasts of multivariate systems than traditional approaches.
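The hybrid idea can be caricatured as blending a mechanistic forecast with a data-driven one; the convex-weight scheme below is an illustrative assumption for this sketch, not the authors' (Bayesian) construction:

```python
# Minimal sketch of combining a mechanistic forecast with a data-driven one
# via a convex blend, with the weight chosen on a held-out validation window.
# The weighting scheme is an assumption of this sketch.

def blend(mechanistic, data_driven, w):
    """Convex combination of two forecasts for the same horizon."""
    return [w * m + (1.0 - w) * d for m, d in zip(mechanistic, data_driven)]

def pick_weight(mechanistic, data_driven, observed, grid=None):
    """Choose the blend weight minimizing squared error against observations."""
    grid = grid if grid is not None else [i / 10 for i in range(11)]
    def sse(w):
        forecast = blend(mechanistic, data_driven, w)
        return sum((f - o) ** 2 for f, o in zip(forecast, observed))
    return min(grid, key=sse)
```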
|
192
|
Sjölund J, Eklund A, Özarslan E, Herberthson M, Bånkestad M, Knutsson H. Bayesian uncertainty quantification in linear models for diffusion MRI. Neuroimage 2018; 175:272-285. [PMID: 29604453 DOI: 10.1016/j.neuroimage.2018.03.059] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2017] [Revised: 02/16/2018] [Accepted: 03/25/2018] [Indexed: 01/22/2023] Open
Abstract
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification.
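The closed-form posterior for affine quantities described in this abstract can be sketched with a conjugate Gaussian linear model; the prior precision `alpha` and noise variance `sigma2` below are illustrative assumptions:

```python
import numpy as np

# Sketch of Bayesian linear least squares: for y = A w + noise with prior
# w ~ N(0, I/alpha) and noise variance sigma2, the coefficient posterior is
# Gaussian, and any affine quantity q = c.w + b has a closed-form posterior
# mean c.m + b and variance c.S.c.

def posterior(A, y, sigma2, alpha):
    """Posterior mean m and covariance S of the coefficients w."""
    S = np.linalg.inv(A.T @ A / sigma2 + alpha * np.eye(A.shape[1]))
    m = S @ A.T @ y / sigma2
    return m, S

def affine_quantity(c, b, m, S):
    """Posterior mean and variance of q = c.w + b."""
    return c @ m + b, c @ S @ c
```

This is the generic textbook construction; the paper applies it to dMRI basis expansions such as DTI, MAP-MRI and CSD, where derived features are affine (or approximately affine) in the coefficients.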
|
193
|
Felsberg A, Ross JO, Schlosser C, Kirchner G. Simulating the mesoscale transport of krypton-85. JOURNAL OF ENVIRONMENTAL RADIOACTIVITY 2018; 181:85-93. [PMID: 29128689 DOI: 10.1016/j.jenvrad.2017.10.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Revised: 10/18/2017] [Accepted: 10/19/2017] [Indexed: 06/07/2023]
Abstract
Due to its half-life, chemical inertness and low solubility in water, radioactive 85Kr is a valuable tracer for testing the performance of atmospheric dispersion models in simulating long-range transport of pollutants. This paper evaluates the capability of such models to simulate the dispersion of radiokrypton emitted by a nuclear fuel reprocessing plant in north-west France. Three time periods during which elevated activity concentrations of 85Kr in ground level air were detected in south-west Germany are chosen. Simulations have been performed using the HYSPLIT code and the European Centre for Medium-Range Weather Forecasts (ECMWF) database. Although their results show a slight trend of underestimating the measured 85Kr concentrations, there is a significant correlation and moderate scatter between observations and simulations, with about 50% of the results being within a factor of two of the measured concentrations. The simulated travel time distributions provided additional insight into the dispersion of the tracer radionuclides and helped identify potential causes of deviations between measured and calculated concentrations.
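The "within a factor of two" score quoted in this abstract is a standard dispersion-model skill metric (often called FAC2); a minimal sketch, with hypothetical concentration pairs:

```python
# Sketch of the FAC2 skill metric: the fraction of simulated concentrations
# lying between half and twice the paired observed concentration.
# Zero observations are excluded to avoid division by zero.

def fac2(observed, simulated):
    pairs = [(o, s) for o, s in zip(observed, simulated) if o > 0]
    hits = sum(1 for o, s in pairs if 0.5 <= s / o <= 2.0)
    return hits / len(pairs)
```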
|
194
|
Uncertainty quantification and sensitivity analysis of an arterial wall mechanics model for evaluation of vascular drug therapies. Biomech Model Mechanobiol 2017; 17:55-69. [PMID: 28755237 PMCID: PMC5807551 DOI: 10.1007/s10237-017-0944-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2017] [Accepted: 07/17/2017] [Indexed: 02/07/2023]
Abstract
Quantification of the uncertainty in constitutive model predictions describing arterial wall mechanics is vital towards non-invasive assessment of vascular drug therapies. Therefore, we perform uncertainty quantification to determine uncertainty in mechanical characteristics describing the vessel wall response upon loading. Furthermore, a global variance-based sensitivity analysis is performed to pinpoint measurements that are most rewarding to be measured more precisely. We used previously published carotid diameter-pressure and intima-media thickness (IMT) data (measured in triplicate), and Holzapfel-Gasser-Ogden models. A virtual data set containing 5000 diastolic and systolic diameter-pressure points, and IMT values was generated by adding measurement error to the average of the measured data. The model was fitted to single-exponential curves calculated from the data, obtaining distributions of constitutive parameters and constituent load bearing parameters. Additionally, we (1) simulated vascular drug treatment to assess the relevance of model uncertainty and (2) evaluated how increasing the number of measurement repetitions influences model uncertainty. We found substantial uncertainty in constitutive parameters. Simulating vascular drug treatment predicted a 6 percentage point reduction in collagen load bearing (L_coll), approximately 50% of its uncertainty. Sensitivity analysis indicated that the uncertainty in L_coll was primarily caused by noise in distension and IMT measurements. Spread in L_coll could be decreased by 50% when increasing the number of measurement repetitions from 3 to 10. Model uncertainty, notably that in L_coll, could conceal effects of vascular drug therapy. However, this uncertainty could be reduced by increasing the number of measurement repetitions of distension and wall thickness measurements used for model parameterisation.
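The reported ~50% reduction in parameter spread when going from 3 to 10 repetitions is consistent with simple standard-error scaling; a back-of-envelope sketch, where the proportionality of parameter spread to the standard error of the mean is an assumption of this sketch:

```python
import math

# Back-of-envelope check: if the spread of a fitted parameter scales like the
# standard error of the mean of the repeated measurements feeding the fit,
# going from 3 to 10 repetitions shrinks the spread by 1 - sqrt(3/10),
# i.e. roughly the ~50% reduction reported in the abstract.

def standard_error(sigma, n):
    """Standard error of the mean of n repetitions with per-repeat noise sigma."""
    return sigma / math.sqrt(n)

reduction = 1.0 - standard_error(1.0, 10) / standard_error(1.0, 3)
```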
|
195
|
Zaffora B, Magistris M, Saporta G, Chevalier JP. Uncertainty quantification applied to the radiological characterization of radioactive waste. Appl Radiat Isot 2017; 127:142-149. [PMID: 28605702 DOI: 10.1016/j.apradiso.2017.06.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 05/03/2017] [Accepted: 06/01/2017] [Indexed: 10/19/2022]
Abstract
This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared with the limits set by the national authority responsible for the waste repository. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to assess the acceptability of the waste in the final repository, and also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators.
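The Monte Carlo flavor of this propagation can be sketched as follows; the simplified activation formula, the distribution choices, and all numerical parameters are illustrative assumptions, not CERN's actual model:

```python
import random

# Sketch of Monte Carlo uncertainty propagation for a specific activity that
# depends on an uncertain trace-element mass fraction (lognormal, reflecting
# an unknown chemical composition) and an uncertain specific production rate
# (normal). All distributions and parameters here are hypothetical.

def simulate_activity(n_samples, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        fraction = rng.lognormvariate(-9.0, 1.0)        # trace mass fraction
        rate = max(rng.gauss(5.0e6, 5.0e5), 0.0)        # Bq per kg of trace element
        samples.append(fraction * rate)                 # Bq per kg of waste
    return sorted(samples)

def percentile(sorted_samples, p):
    """Crude empirical percentile of a pre-sorted sample."""
    idx = min(int(p / 100 * len(sorted_samples)), len(sorted_samples) - 1)
    return sorted_samples[idx]
```

The resulting percentiles of the activity distribution are what would be compared against the repository's acceptance limits.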
|
196
|
Gorguluarslan RM, Choi SK, Saldana CJ. Uncertainty quantification and validation of 3D lattice scaffolds for computer-aided biomedical applications. J Mech Behav Biomed Mater 2017; 71:428-440. [PMID: 28431361 DOI: 10.1016/j.jmbbm.2017.04.011] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2016] [Revised: 03/07/2017] [Accepted: 04/10/2017] [Indexed: 11/30/2022]
Abstract
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process.
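The Bayesian Information Criterion step described in this abstract, selecting a parametric distribution for sample statistics, can be sketched as follows; the two candidate families and the synthetic data are illustrative assumptions:

```python
import math
import random

# Sketch of BIC-based selection of a parametric distribution for a sample:
# fit normal and lognormal models (k = 2 parameters each) by maximum
# likelihood and prefer the lower BIC = k*ln(n) - 2*lnL.

def normal_loglik(data):
    """Maximized Gaussian log-likelihood (MLE mean and variance)."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def lognormal_loglik(data):
    """Lognormal log-likelihood = Gaussian fit of log-data minus Jacobian term."""
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)

def bic(loglik, k, n):
    return k * math.log(n) - 2.0 * loglik

# Synthetic positive-valued data drawn from a (skewed) lognormal.
rng = random.Random(42)
sample = [rng.lognormvariate(0.0, 0.7) for _ in range(1000)]
bic_normal = bic(normal_loglik(sample), 2, len(sample))
bic_lognormal = bic(lognormal_loglik(sample), 2, len(sample))
```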
|
197
|
Hou X, Hodges BR, Feng D, Liu Q. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system. MARINE POLLUTION BULLETIN 2017; 116:420-433. [PMID: 28126397 DOI: 10.1016/j.marpolbul.2017.01.038] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Revised: 01/14/2017] [Accepted: 01/17/2017] [Indexed: 06/06/2023]
Abstract
As oil transport increases in the Texas bays, the risk of ship collisions, and of the oil spills they can cause, grows with it. To minimize ecological damage and support rapid response, emergency managers need to know, as soon as possible after a spill, how fast and where the oil will spread. State-of-the-art operational oil spill forecast modeling systems have advanced oil spill response considerably. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast, potentially misdirecting contingency planning, so understanding forecast uncertainty and reliability becomes essential. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is then quantified by comparing the forecast probability map with the associated hindcast simulation. A HyosPy-based statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, enabling emergency managers to improve real-time operational oil spill response and impact assessment.
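The probability-map construction can be sketched as counting which grid cells each Monte Carlo ensemble member reaches; the one-dimensional drift-plus-noise trajectory model below is an illustrative assumption, not the HyosPy system:

```python
import random

# Sketch of a forecast probability map from a Monte Carlo ensemble: each
# realization is a perturbed trajectory endpoint, and a grid cell's
# probability is the fraction of ensemble members landing in it.

def ensemble_endpoints(n_members, drift, noise, seed=1):
    """Hypothetical 1-D endpoints: deterministic drift plus Gaussian spread."""
    rng = random.Random(seed)
    return [drift + rng.gauss(0.0, noise) for _ in range(n_members)]

def probability_map(endpoints, edges):
    """Fraction of members in each cell [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for x in endpoints:
        for i in range(len(counts)):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(endpoints) for c in counts]
```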
|
198
|
Su NQ, Pernot P, Xu X, Savin A. When does a functional correctly describe both the structure and the energy of the transition state? J Mol Model 2017; 23:65. [PMID: 28185112 DOI: 10.1007/s00894-017-3229-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2016] [Accepted: 01/12/2017] [Indexed: 11/30/2022]
Abstract
Requiring that several properties are well reproduced is a severe test on density functional approximations. This can be assessed through the estimation of joint and conditional success probabilities. An example is provided for a small set of molecules, for properties characterizing the transition states (geometries and energies).
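The joint and conditional success probabilities mentioned in this abstract reduce to simple counting over per-molecule success indicators; a minimal sketch with hypothetical boolean data:

```python
# Sketch of joint and conditional success probabilities: given per-molecule
# booleans for "geometry reproduced" and "energy reproduced", estimate
# P(both) and P(energy ok | geometry ok) by counting.

def joint_probability(a, b):
    return sum(1 for x, y in zip(a, b) if x and y) / len(a)

def conditional_probability(a, b):
    """P(b | a); assumes at least one success in a."""
    hits_a = sum(1 for x in a if x)
    return sum(1 for x, y in zip(a, b) if x and y) / hits_a
```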
|
199
|
Schiavazzi D, Doostan A, Iaccarino G, Marsden A. A generalized multi-resolution expansion for uncertainty propagation with application to cardiovascular modeling. COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING 2017; 314:196-221. [PMID: 28845061 PMCID: PMC5568857 DOI: 10.1016/j.cma.2016.09.024] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Computational models are used in a variety of fields to improve our understanding of complex physical phenomena. Recently, the realism of model predictions has been greatly enhanced by transitioning from deterministic to stochastic frameworks, where the effects of the intrinsic variability in parameters, loads, constitutive properties, model geometry and other quantities can be more naturally included. A general stochastic system may be characterized by a large number of arbitrarily distributed and correlated random inputs, and a limited support response with sharp gradients or even discontinuities. This motivates continued research into novel adaptive algorithms for uncertainty propagation, particularly those handling high dimensional, arbitrarily distributed random inputs and non-smooth stochastic responses. In this work, we generalize a previously proposed multi-resolution approach to uncertainty propagation to develop a method that improves computational efficiency, can handle arbitrarily distributed random inputs and non-smooth stochastic responses, and naturally facilitates adaptivity, i.e., the expansion coefficients encode information on solution refinement. Our approach relies on partitioning the stochastic space into elements that are subdivided along a single dimension, or, in other words, progressive refinements exhibiting a binary tree representation. We also show how these binary refinements are particularly effective in avoiding the exponential increase in the multi-resolution basis cardinality and significantly reduce the regression complexity for moderate to high dimensional random inputs. The performance of the approach is demonstrated through previously proposed uncertainty propagation benchmarks and stochastic multi-scale finite element simulations in cardiovascular flow.
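The binary-tree refinement idea, subdividing along a single dimension wherever the response is non-smooth, can be sketched in one dimension; the midpoint-deviation splitting criterion below is an illustrative assumption, not the paper's coefficient-based indicator:

```python
# Sketch of adaptive binary refinement of a stochastic element: bisect an
# interval whenever the midpoint response deviates from the linear
# interpolant of its endpoints, so leaves concentrate near sharp features.

def refine(f, a, b, tol, depth):
    """Return the leaf intervals of an adaptive binary partition of [a, b]."""
    mid = 0.5 * (a + b)
    linear = 0.5 * (f(a) + f(b))
    if depth == 0 or abs(f(mid) - linear) <= tol:
        return [(a, b)]
    return refine(f, a, mid, tol, depth - 1) + refine(f, mid, b, tol, depth - 1)

# A response with a discontinuity at x = 0.3 forces deep local refinement
# there, while smooth regions remain coarse.
leaves = refine(lambda x: 1.0 if x >= 0.3 else 0.0, 0.0, 1.0, 0.1, 5)
```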
|
200
|
Moslehi M, de Barros FPJ. Uncertainty quantification of environmental performance metrics in heterogeneous aquifers with long-range correlations. JOURNAL OF CONTAMINANT HYDROLOGY 2017; 196:21-29. [PMID: 27989551 DOI: 10.1016/j.jconhyd.2016.12.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 12/04/2016] [Accepted: 12/05/2016] [Indexed: 06/06/2023]
Abstract
We investigate how the uncertainty stemming from disordered porous media that display long-range correlation in the hydraulic conductivity (K) field propagates to predictions of environmental performance metrics (EPMs). In this study, the EPMs are quantities that are of relevance to risk analysis and remediation, such as peak flux-averaged concentration, early and late arrival times among others. By using stochastic simulations, we quantify the uncertainty associated with the EPMs for a given disordered spatial structure of the K-field and identify the probability distribution function (PDF) model that best captures the statistics of the EPMs of interest. Results indicate that the probabilistic distribution of the EPMs considered in this study follows a lognormal PDF. Finally, through the use of information theory, we reveal how the persistent/anti-persistent correlation structure of the K-field influences the EPMs and corresponding uncertainties.
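Fitting the lognormal PDF model identified in this abstract amounts to estimating the mean and standard deviation of the log-transformed metric; a minimal sketch with a tiny hypothetical EPM sample:

```python
import math

# Sketch of fitting a lognormal model to a positive-valued metric: estimate
# (mu, sigma) of log(EPM) by maximum likelihood and evaluate the lognormal
# density.

def fit_lognormal(data):
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma

def lognormal_pdf(x, mu, sigma):
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2.0 * math.pi))
```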
|