151. Uncertainty quantification of parenchymal tracer distribution using random diffusion and convective velocity fields. Fluids Barriers CNS 2019; 16:32. PMID: 31564250; PMCID: PMC6767654; DOI: 10.1186/s12987-019-0152-7.
Abstract
Background Influx and clearance of substances in the brain parenchyma occur by a combination of diffusion and convection, but the relative importance of these mechanisms is unclear. Accurate modeling of tracer distributions in the brain relies on parameters that are partially unknown and whose literature values vary by several orders of magnitude. In this work, we rigorously quantified the variability of tracer distribution in the brain resulting from uncertainty in diffusion and convection model parameters. Methods Using the convection–diffusion–reaction equation, we simulated tracer distribution in the brain parenchyma after intrathecal injection. Several models were tested to assess the uncertainty both in the type of diffusion and velocity fields and in the importance of their magnitudes. Our results were compared with experimental MRI results of tracer enhancement. Results In models of pure diffusion, the expected amount of tracer in the gray matter reached its peak value after 15 h, while with high likelihood the white matter did not reach its peak within 24 h. Models of the glymphatic system were qualitatively similar to the models of pure diffusion with respect to expected time to peak but displayed less variability. However, the expected time to peak was reduced to 11 h when an additional directionality was prescribed for the glymphatic circulation. In a model including drainage directly from the brain parenchyma, time to peak occurred after 6–8 h for the gray matter. Conclusion Even when uncertainties are taken into account, we find that diffusion alone is not sufficient to explain transport of tracer deep into the white matter as seen in experimental data. A glymphatic velocity field may increase transport if a large-scale directional structure is included in the glymphatic circulation.
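The forward-propagation idea in this abstract can be sketched with a toy 1D problem: sample an uncertain diffusion coefficient, solve a pure-diffusion model for each draw, and average the tracer profiles. This is a minimal illustration, not the authors' setup; the domain, boundary conditions, and log-normal prior on D are all invented for the example.

```python
import numpy as np

def diffuse_1d(D, T=24.0, L=1.0, nx=51, dt=1e-3):
    """Explicit finite-difference solve of u_t = D u_xx on [0, L] with a
    constant tracer concentration at x = 0 and zero at x = L."""
    dx = L / (nx - 1)
    r = D * dt / dx**2
    assert r < 0.5, "explicit scheme stability condition"
    u = np.zeros(nx)
    u[0] = 1.0                              # tracer source at the boundary
    for _ in range(int(T / dt)):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Monte Carlo propagation of a hypothetical log-normal uncertainty in D
rng = np.random.default_rng(0)
D_samples = rng.lognormal(mean=np.log(1e-2), sigma=0.3, size=20)
profiles = np.array([diffuse_1d(D) for D in D_samples])
mean_profile = profiles.mean(axis=0)        # expected tracer profile
```

The spread of `profiles` around `mean_profile` is the parameter-induced variability the paper quantifies (there with far richer velocity-field models).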

152. Ye Y, Ruiz-Martinez A, Wang P, Tartakovsky DM. Quantification of Predictive Uncertainty in Models of FtsZ ring assembly in Escherichia coli. J Theor Biol 2019; 484:110006. PMID: 31539529; DOI: 10.1016/j.jtbi.2019.110006.
Abstract
Quantitative predictions of FtsZ protein polymerization are essential for understanding the self-regulating mechanisms in biochemical systems. Due to structural complexity and parametric uncertainty, existing kinetic models remain incomplete and their predictions error-prone. To address such challenges, we perform probabilistic uncertainty quantification and global sensitivity analysis of the concentrations of various protein species predicted with a recent FtsZ protein polymerization model. Our results yield a ranked list of modeling shortcomings that can be improved in order to develop more accurate predictions and more realistic representations of key mechanisms of such biochemical systems and their response to changes in internal or external conditions. Our conclusions and improvement recommendations can be extended to other kinetics models.
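Variance-based global sensitivity analysis of the kind invoked here can be illustrated on a toy two-parameter kinetic fraction. The Jansen pick-and-freeze estimator below is a standard Monte Carlo choice; the model `f` and the uniform parameter ranges are purely illustrative, not the paper's FtsZ model.

```python
import numpy as np

def f(k1, k2):
    # toy steady-state polymerized fraction (illustrative only)
    return k1 / (k1 + k2)

rng = np.random.default_rng(1)
N = 100_000
A = rng.uniform(0.5, 1.5, size=(N, 2))      # base sample
B = rng.uniform(0.5, 1.5, size=(N, 2))      # independent resample

yA = f(A[:, 0], A[:, 1])
var = yA.var()

S = []
for i in range(2):
    AB = A.copy()
    AB[:, i] = B[:, i]                      # resample only parameter i
    yAB = f(AB[:, 0], AB[:, 1])
    # Jansen estimator for the total-effect index ST_i
    S.append(0.5 * np.mean((yA - yAB) ** 2) / var)
```

Ranking parameters by `S` is what produces the "ranked list of modeling shortcomings" the abstract describes.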

153. Stobb MT, Monroe DM, Leiderman K, Sindi SS. Assessing the impact of product inhibition in a chromogenic assay. Anal Biochem 2019; 580:62-71. PMID: 31091429; DOI: 10.1016/j.ab.2019.05.001.
Abstract
Chromogenic substrates (CS) are synthetic substrates used to monitor the activity of a target enzyme. It has been reported that some CSs display competitive product inhibition with their target enzyme. Thus, in assays where enzyme activity is continuously monitored over long periods of time, the product inhibition may significantly interfere with the reactions being monitored. Despite this knowledge, it is rare for CSs to be directly incorporated into mathematical models that simulate these assays. This devalues the predictive power of the models. In this study, we examined the interactions between a single enzyme, coagulation factor Xa, and its chromogenic substrate. We developed, and experimentally validated, a mathematical model of a chromogenic assay for factor Xa that explicitly included product inhibition from the CS. We employed Bayesian inference, in the form of Markov-Chain Monte Carlo, to estimate the strength of the product inhibition and other sources of uncertainty such as pipetting error and kinetic rate constants. Our model, together with carefully calibrated biochemistry experiments, allowed for full characterization of the strength and impact of product inhibition in the assay. The effect of CS product inhibition in more complex reaction mixtures was further explored using mathematical models.
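The Bayesian workflow described (Markov chain Monte Carlo over kinetic constants) can be sketched with a random-walk Metropolis sampler fitting a single inhibition constant Ki to synthetic Michaelis–Menten data with competitive product inhibition. All rate values, noise levels, and the flat positivity prior are hypothetical, and only Ki is inferred (Vmax, Km, and P are held fixed).

```python
import numpy as np

rng = np.random.default_rng(2)

def rate(S, Ki, Km=50.0, Vmax=1.0, P=20.0):
    """Michaelis-Menten rate with competitive product inhibition."""
    return Vmax * S / (Km * (1.0 + P / Ki) + S)

# synthetic observations generated at the "true" Ki = 30
S_obs = np.linspace(5, 200, 12)
y_obs = rate(S_obs, 30.0) + rng.normal(0, 0.01, S_obs.size)

def log_post(Ki, sigma=0.01):
    if Ki <= 0:
        return -np.inf                      # positivity prior
    resid = y_obs - rate(S_obs, Ki)
    return -0.5 * np.sum((resid / sigma) ** 2)

# random-walk Metropolis
chain, Ki = [], 10.0
lp = log_post(Ki)
for _ in range(20_000):
    prop = Ki + rng.normal(0, 2.0)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        Ki, lp = prop, lp_prop
    chain.append(Ki)
post = np.array(chain[5_000:])              # discard burn-in
```

The posterior spread of `post` plays the role of the "strength of the product inhibition" uncertainty the authors estimate.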

154. Jensen MA, Wang YY, Lai SK, Forest MG, McKinley SA. Antibody-Mediated Immobilization of Virions in Mucus. Bull Math Biol 2019; 81:4069-4099. PMID: 31468263; PMCID: PMC6764938; DOI: 10.1007/s11538-019-00653-6.
Abstract
Antibodies have been shown to hinder the movement of herpes simplex virus virions in cervicovaginal mucus, as well as other viruses in other mucus secretions. However, it has not been possible to directly observe the mechanisms underlying this phenomenon, so the nature of virion-antibody-mucin interactions remains poorly understood. In this work, we analyzed thousands of virion traces from single particle tracking experiments to explicate how antibodies must cooperate to immobilize virions for relatively long time periods. First, using a clustering analysis, we observed a clear separation between two classes of virion behavior: freely diffusing and immobilized. While the proportion of freely diffusing virions decreased with antibody concentration, the magnitude of their diffusivity did not, implying an all-or-nothing dichotomy in the pathwise effect of the antibodies. Proceeding under the assumption that all binding events are reversible, we used a novel switch-point detection method to conclude that there are very few, if any, state switches on the experimental timescale of 20 s. To understand this slow state switching, we analyzed a recently proposed continuous-time Markov chain model for binding kinetics and virion movement. Model analysis implied that virion immobilization requires cooperation by multiple antibodies that are simultaneously bound to the virion and mucin matrix and that there is an entanglement phenomenon that accelerates antibody-mucin binding when a virion is immobilized. In addition to developing a widely applicable framework for analyzing multistate particle behavior, this work substantially enhances our mechanistic understanding of how antibodies can reinforce a mucus barrier against passive invasive species.
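The continuous-time Markov chain picture (a virion switching between free and immobilized states) can be illustrated with a minimal Gillespie-style simulation that counts state switches within the 20 s experimental window. The rates below are hypothetical, chosen only to show how slow kinetics produce mostly switch-free trajectories, as in the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

def count_switches(k_on, k_off, T=20.0):
    """Count state switches of a free (0) / immobilized (1) virion over a
    window of T seconds; rates are per second and hypothetical."""
    t, state, switches = 0.0, 0, 0
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)    # exponential waiting time
        if t > T:
            return switches
        state ^= 1                          # flip free <-> immobilized
        switches += 1

# slow kinetics: expect few or no switches within the 20 s window
counts = [count_switches(k_on=0.01, k_off=0.005) for _ in range(2000)]
frac_no_switch = np.mean([c == 0 for c in counts])
```

With a mean waiting time of 100 s in the free state, roughly exp(-0.2) ≈ 82% of 20 s trajectories show no switch at all, mirroring the "very few, if any, state switches" finding.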

155. Peirlinck M, Sahli Costabal F, Sack KL, Choy JS, Kassab GS, Guccione JM, De Beule M, Segers P, Kuhl E. Using machine learning to characterize heart failure across the scales. Biomech Model Mechanobiol 2019; 18:1987-2001. PMID: 31240511; DOI: 10.1007/s10237-019-01190-w.
Abstract
Heart failure is a progressive chronic condition in which the heart undergoes detrimental changes in structure and function across multiple scales in time and space. Multiscale models of cardiac growth can provide a patient-specific window into the progression of heart failure and guide personalized treatment planning. Yet, the predictive potential of cardiac growth models remains poorly understood. Here, we quantify the predictive power of a stretch-driven growth model using a chronic porcine heart failure model, subject-specific multiscale simulation, and machine learning techniques. We combine hierarchical modeling, Bayesian inference, and Gaussian process regression to quantify the uncertainty of our experimental measurements during an 8-week long study of volume overload in six pigs. We then propagate the experimental uncertainties from the organ scale through our computational growth model and quantify the agreement between experimentally measured and computationally predicted alterations on the cellular scale. Our study suggests that stretch is the major stimulus for myocyte lengthening and demonstrates that a stretch-driven growth model alone can explain [Formula: see text] of the observed changes in myocyte morphology. We anticipate that our approach will allow us to design, calibrate, and validate a new generation of multiscale cardiac growth models to explore the interplay of various subcellular-, cellular-, and organ-level contributors to heart failure. Using machine learning in heart failure research has the potential to combine information from different sources, subjects, and scales to provide a more holistic picture of the failing heart and point toward new treatment strategies.
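Gaussian process regression, one of the tools combined in this pipeline, can be sketched in a few lines of NumPy: fit noisy growth-curve-like measurements over an 8-week window with an RBF kernel and recover a posterior mean with pointwise uncertainty. The data, kernel hyperparameters, and noise level are invented for illustration.

```python
import numpy as np

def rbf(a, b, ell=1.0, s2=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# noisy training data from a hypothetical saturating growth curve
rng = np.random.default_rng(7)
xt = np.linspace(0, 8, 9)                   # weeks
yt = np.tanh(0.4 * xt) + rng.normal(0, 0.02, xt.size)

sn2 = 0.02**2                               # observation noise variance
K = rbf(xt, xt) + sn2 * np.eye(xt.size)
xs = np.linspace(0, 8, 81)                  # prediction grid
Ks = rbf(xs, xt)

mean = Ks @ np.linalg.solve(K, yt)          # posterior mean
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The `std` band is the kind of measurement-uncertainty envelope the authors propagate through their growth model.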

156. Costabal FS, Matsuno K, Yao J, Perdikaris P, Kuhl E. Machine learning in drug development: Characterizing the effect of 30 drugs on the QT interval using Gaussian process regression, sensitivity analysis, and uncertainty quantification. Comput Methods Appl Mech Eng 2019; 348:313-333. PMID: 32863454; PMCID: PMC7454226; DOI: 10.1016/j.cma.2019.01.033.
Abstract
Prolonged QT intervals are a major risk factor for ventricular arrhythmias and a leading cause of sudden cardiac death. Various drugs are known to trigger QT interval prolongation and increase the proarrhythmic potential. Yet, how precisely the action of drugs on the cellular level translates into QT interval prolongation on the whole organ level remains insufficiently understood. Here we use machine learning techniques to systematically characterize the effect of 30 common drugs on the QT interval. We combine information from high fidelity three-dimensional human heart simulations with low fidelity one-dimensional cable simulations to build a surrogate model for the QT interval using multi-fidelity Gaussian process regression. Once trained and cross-validated, we apply our surrogate model to perform sensitivity analysis and uncertainty quantification. Our sensitivity analysis suggests that compounds that block the rapid delayed rectifier potassium current IKr have the greatest prolonging effect on the QT interval, and that blocking the L-type calcium current ICaL and late sodium current INaL shortens the QT interval. Our uncertainty quantification allows us to propagate the experimental variability from individual block-concentration measurements into the QT interval and reveals that QT interval uncertainty is mainly driven by the variability in IKr block. In a final validation study, we demonstrate an excellent agreement between our predicted QT interval changes and the changes observed in a randomized clinical trial for the drugs dofetilide, quinidine, ranolazine, and verapamil. We anticipate that both the machine learning methods and the results of this study will have great potential in the efficient development of safer drugs.

157. Bayesian inference of constitutive model parameters from uncertain uniaxial experiments on murine tendons. J Mech Behav Biomed Mater 2019; 96:285-300. PMID: 31078970; DOI: 10.1016/j.jmbbm.2019.04.037.
Abstract
Constitutive models for biological tissue are typically formulated as a mixture of constituents, and the overall response is then assembled by superposition or compatibility. This ensures that the stress response of the biological tissue lies within the range of a given constitutive relationship, guaranteeing that at least one parameter combination exists such that an experimental response can be captured sufficiently well. Another, perhaps more challenging, problem is to use constitutive models as a proxy to infer the structure/function of a biological tissue from experiments. In other words, we determine the optimal set of parameters by solving an inverse problem and use these parameters to infer the integrity of the tissue constituents. In previous studies, we focused on the mechanical stress-stretch response of the murine patellar tendon at various age and healing timepoints and solved the inverse problem using three constitutive models, i.e., the Freed-Rajagopal, Gasser-Ogden-Holzapfel and Shearer models, in order of increasing microstructural detail. Herein, we extend this work by adopting a Bayesian perspective on parameter estimation and implement the constitutive relations in the tulip library for uncertainty analysis, critically analyzing parameter marginals, correlations, identifiability and sensitivity. Our results show the importance of investigating the variability of parameter estimates and that results from optimization may be misleading, particularly for models with many parameters inferred from limited experimental evidence. In our study, we show that different age and healing conditions do not correspond to statistically significant separation among the Gasser-Ogden-Holzapfel and Shearer model parameters, while the phenomenological Freed-Rajagopal model is instead characterized by better identifiability and parameter learning. Use of the complete experimental observations rather than averaged stress-stretch responses appears to positively constrain inference, and results appear to be invariant with respect to the scaling of the experimental uncertainty.

158. Effects of left ventricle wall thickness uncertainties on cardiac mechanics. Biomech Model Mechanobiol 2019; 18:1415-1427. PMID: 31025130; DOI: 10.1007/s10237-019-01153-1.
Abstract
Computational models of the heart have reached a level of maturity that enables sophisticated patient-specific simulations and hold potential for important applications in diagnosis and therapy planning. However, such clinical use puts strict demands on the reliability and accuracy of the models and requires the sensitivity of the model predictions to errors and uncertainty in the model inputs to be quantified. The models typically contain a large number of parameters, which are difficult to measure and therefore associated with considerable uncertainty. Additionally, patient-specific geometries are usually constructed by semi-manual processing of medical images and must be assumed to be a potential source of model uncertainty. In this paper, we assess the model accuracy by considering the impact of geometrical uncertainties, which typically occur in image-based computational geometries. An approach based on the 17-segment AHA diagram is used to consider uncertainties in wall thickness as well as in the material properties and fiber orientation, and we perform a comprehensive uncertainty quantification and sensitivity analysis based on polynomial chaos expansions. The quantities considered include stress, strain and global deformation parameters of the left ventricle. The results indicate that important quantities of interest may be more affected by wall thickness than by the material properties and fiber orientation, and highlight the need for accurate geometry reconstructions in patient-specific cardiac mechanics models.
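In the non-intrusive setting, the polynomial chaos machinery reduces to Gauss-Hermite quadrature over the uncertain input: the quadrature rule gives the PCE coefficients, and hence the output mean and variance. A minimal sketch with a toy Laplace-law wall stress and a Gaussian wall thickness (all values illustrative and unrelated to the paper's model):

```python
import numpy as np

# Toy Laplace-law wall stress, sigma = p r / (2 h), with uncertain
# wall thickness h ~ Normal(mu_h, sd_h); p, r are fixed illustrative values
p, r = 10.0, 3.0
mu_h, sd_h = 1.0, 0.1

def stress(h):
    return p * r / (2.0 * h)

# Probabilists' Gauss-Hermite rule; normalizing the weights turns it into
# an expectation under the standard normal distribution
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
w = weights / weights.sum()
h_nodes = mu_h + sd_h * nodes               # map nodes to physical thickness
y = stress(h_nodes)

mean_pce = np.sum(w * y)                    # zeroth PCE coefficient
var_pce = np.sum(w * (y - mean_pce) ** 2)   # variance of the expansion
```

Eight nodes already resolve this smooth response; the variance would feed a Sobol-style sensitivity breakdown in a multi-input version.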

159. Gadd C, Xing W, Nezhad MM, Shah AA. A Surrogate Modelling Approach Based on Nonlinear Dimension Reduction for Uncertainty Quantification in Groundwater Flow Models. Transp Porous Media 2019; 126:39-77. PMID: 30872876; PMCID: PMC6390720; DOI: 10.1007/s11242-018-1065-7.
Abstract
In this paper, we develop a surrogate modelling approach for capturing the output field (e.g. the pressure head) from groundwater flow models involving a stochastic input field (e.g. the hydraulic conductivity). We use a Karhunen–Loève expansion for a log-normally distributed input field and apply manifold learning (local tangent space alignment) to perform Gaussian process Bayesian inference using Hamiltonian Monte Carlo in an abstract feature space, yielding outputs for arbitrary unseen inputs. We also develop a framework for forward uncertainty quantification in such problems, including analytical approximations of the mean of the marginalized distribution (with respect to the inputs). To sample from the distribution, we present a Monte Carlo approach. Two examples are presented to demonstrate the accuracy of our approach: a Darcy flow model with contaminant transport in 2-d and a Richards equation model in 3-d.
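A truncated Karhunen–Loève expansion of a log-normal field, as used here for the hydraulic conductivity, can be sketched by eigendecomposing a discretized covariance matrix. This grid-based approximation of the KL modes is a common shortcut; the squared-exponential kernel, length scale, and variance below are illustrative.

```python
import numpy as np

# Grid and a squared-exponential covariance for the log-conductivity
x = np.linspace(0.0, 1.0, 100)
ell, var = 0.2, 1.0
C = var * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Truncated Karhunen-Loeve expansion from the covariance eigenpairs
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1]                # sort eigenvalues descending
vals, vecs = vals[idx], vecs[:, idx]
m = 10                                      # retained modes

# One realization: standard normal coefficients weight the modes
rng = np.random.default_rng(4)
xi = rng.standard_normal(m)
log_k = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
k = np.exp(log_k)                           # log-normal conductivity field
```

Each draw of `xi` yields a new conductivity realization; the paper's surrogate maps such low-dimensional coefficient vectors to the full pressure-head field.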

160. Sahli Costabal F, Choy JS, Sack KL, Guccione JM, Kassab GS, Kuhl E. Multiscale characterization of heart failure. Acta Biomater 2019; 86:66-76. PMID: 30630123; DOI: 10.1016/j.actbio.2018.12.053.
Abstract
Dilated cardiomyopathy is a progressive irreversible disease associated with contractile dysfunction and heart failure. During dilated cardiomyopathy, elevated diastolic wall strains trigger mechanotransduction pathways that initiate the addition of sarcomeres in series and an overall increase in myocyte length. At the whole organ level, this results in a chronic dilation of the ventricles, an increase in end diastolic and end systolic volumes, and a decrease in ejection fraction. However, how exactly changes in sarcomere number translate into changes in myocyte morphology, and how these cellular changes translate into ventricular dilation remains incompletely understood. Here we combined a chronic animal study, continuum growth modeling, and machine learning to quantify correlations between sarcomere dynamics, myocyte morphology, and ventricular dilation. In an eight-week long volume overload study of six pigs, we found that the average sarcomere number increased by +3.8%/week, from 47 to 62, resulting in a myocyte lengthening of +3.3%/week, from 85 to 108 μm, while the sarcomere length and myocyte width remained unchanged. At the same time, the average end diastolic volume increased by +6.0%/week. Using continuum growth modeling and Bayesian inference, we correlated alterations on the subcellular, cellular, and organ scales and found that the serial sarcomere number explained 88% of myocyte lengthening, which, in turn, explained 54% of cardiac dilation. Our results demonstrate that sarcomere number and myocyte length are closely correlated and constitute the major determinants of dilated heart failure. We anticipate our study to be a starting point for more sophisticated multiscale models of heart failure. Our study suggests that altering sarcomere turnover-and with it myocyte morphology and ventricular dimensions-could be a potential therapeutic target to attenuate or reverse the progression of heart failure. 
STATEMENT OF SIGNIFICANCE: Heart failure is a significant global health problem that affects more than 25 million people worldwide and increases in prevalence as the population ages. Heart failure has been studied extensively at various scales; yet, there is no compelling concept to connect knowledge from the subcellular, cellular, and organ level across the scales. Here we combined a chronic animal study, continuum growth modeling, and machine learning to quantify correlations between sarcomere dynamics, myocyte morphology, and ventricular dilation. We found that the serial sarcomere number explained 88% of myocyte lengthening, which, in turn, explained 54% of cardiac dilation. Our results show that sarcomere number and myocyte length are closely correlated and constitute the major determinants of dilated heart failure. This suggests that altering the sarcomere turnover-and with it myocyte morphology and ventricular dimensions-could be a potential therapeutic target to attenuate or reverse heart failure.

161. Tran JS, Schiavazzi DE, Kahn AM, Marsden AL. Uncertainty quantification of simulated biomechanical stimuli in coronary artery bypass grafts. Comput Methods Appl Mech Eng 2019; 345:402-428. PMID: 31223175; PMCID: PMC6586227; DOI: 10.1016/j.cma.2018.10.024.
Abstract
Coronary artery bypass graft surgery (CABG) is performed on more than 400,000 patients annually in the U.S. However, saphenous vein grafts (SVGs) implanted during CABG exhibit poor patency compared to arterial grafts, with failure rates up to 40% within 10 years after surgery. Differences in mechanical stimuli are known to play a role in driving maladaptation and have been correlated with endothelial damage and thrombus formation. As these quantities are difficult to measure in vivo, multi-scale coronary models offer a way to quantify them, while accounting for complex coronary physiology. However, prior studies have primarily focused on deterministic evaluations, without reporting variability in the model parameters due to uncertainty. This study aims to assess confidence in multi-scale predictions of wall shear stress and wall strain while accounting for uncertainty in peripheral hemodynamics and material properties. Boundary condition distributions are computed by assimilating uncertain clinical data, while spatial variations of vessel wall stiffness are obtained through approximation by a random field. We developed a stochastic submodeling approach to mitigate the computational burden of repeated multi-scale model evaluations and to focus exclusively on the bypass grafts. This produces a two-level decomposition of quantities of interest into submodel contributions and full model/submodel discrepancies. We leverage these two levels in the context of forward uncertainty propagation using a previously proposed multi-resolution approach. The time- and space-averaged wall shear stress is well estimated with a coefficient of variation of <35%, but ignorance about the spatial distributions of the wall elastic modulus and thickness leads to large variations in an objective measure of wall strain, with coefficients of variation up to 100%. Sensitivity analysis reveals how the interactions between the flow and material parameters contribute to output variability.

162. Roosa K, Chowell G. Assessing parameter identifiability in compartmental dynamic models using a computational approach: application to infectious disease transmission models. Theor Biol Med Model 2019; 16:1. PMID: 30642334; PMCID: PMC6332839; DOI: 10.1186/s12976-018-0097-6.
Abstract
Background Mathematical modeling is now frequently used in outbreak investigations to understand underlying mechanisms of infectious disease dynamics, assess patterns in epidemiological data, and forecast the trajectory of epidemics. However, the successful application of mathematical models to guide public health interventions depends on the ability to reliably estimate model parameters and their corresponding uncertainty. Here, we present and illustrate a simple computational method for assessing parameter identifiability in compartmental epidemic models. Methods We describe a parametric bootstrap approach to generate simulated data from dynamical systems to quantify parameter uncertainty and identifiability. We calculate confidence intervals and mean squared error of estimated parameter distributions to assess parameter identifiability. To demonstrate this approach, we begin with a low-complexity SEIR model and work through examples of increasingly more complex compartmental models that correspond with applications to pandemic influenza, Ebola, and Zika. Results Overall, parameter identifiability issues are more likely to arise with more complex models (based on number of equations/states and parameters). As the number of parameters being jointly estimated increases, the uncertainty surrounding estimated parameters tends to increase, on average, as well. We found that, in most cases, R0 is often robust to parameter identifiability issues affecting individual parameters in the model. Despite large confidence intervals and higher mean squared error of other individual model parameters, R0 can still be estimated with precision and accuracy. Conclusions Because public health policies can be influenced by results of mathematical modeling studies, it is important to conduct parameter identifiability analyses prior to fitting the models to available data and to report parameter estimates with quantified uncertainty. The method described is helpful in these regards and enhances the essential toolkit for conducting model-based inferences using compartmental dynamic models.
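The parametric bootstrap described in Methods can be sketched end to end on a small SIR model: simulate, perturb with observation noise, refit, and read off the spread of the estimates. The Euler-stepped model, crude grid-search fit, and all parameter values below are illustrative simplifications of the authors' procedure.

```python
import numpy as np

def sir_incidence(beta, gamma, N=10_000, I0=10, days=60):
    """Daily new infections from a simple Euler-stepped SIR model."""
    S, I = N - I0, I0
    inc = []
    for _ in range(days):
        new = beta * S * I / N
        S -= new
        I += new - gamma * I
        inc.append(new)
    return np.array(inc)

true_beta, true_gamma = 0.4, 0.2            # true R0 = 2.0
truth = sir_incidence(true_beta, true_gamma)

def fit(data):
    # crude grid-search least squares in (beta, gamma)
    best, best_err = None, np.inf
    for b in np.linspace(0.2, 0.6, 21):
        for g in np.linspace(0.1, 0.3, 11):
            err = np.sum((sir_incidence(b, g) - data) ** 2)
            if err < best_err:
                best, best_err = (b, g), err
    return best

# parametric bootstrap: refit on Poisson-perturbed synthetic data
rng = np.random.default_rng(5)
R0_hat = []
for _ in range(30):
    b, g = fit(rng.poisson(truth))
    R0_hat.append(b / g)
ci = np.percentile(R0_hat, [2.5, 97.5])     # bootstrap CI for R0
```

Narrow `ci` around the true value illustrates the paper's central finding that R0 is often well identified even when individual parameters are not.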

163. Gashi K, Bosboom EMH, van de Vosse FN. The influence of model order reduction on the computed fractional flow reserve using parameterized coronary geometries. J Biomech 2019; 82:313-323. PMID: 30471791; DOI: 10.1016/j.jbiomech.2018.11.008.
Abstract
Computational fluid dynamics (CFD) models combined with patient-specific imaging data are used to non-invasively predict the functional significance of coronary lesions. This approach to predicting the fractional flow reserve (FFR) is shown to have a high diagnostic accuracy when compared against invasively measured FFR. However, one of the main drawbacks is the high computational effort needed for preprocessing and computations; hence, uncertainty quantification may become infeasible. Reduction of complexity is therefore desirable; computationally inexpensive models with high diagnostic accuracy are preferred. We present a parametric comparison study of three types of CFD models (2D axisymmetric, semi-3D and 3D) in which we study the impact of model order reduction on the predicted FFR. In total, 200 coronary geometries were generated for seven geometrical characteristics, e.g. stenosis severity, stenosis length and vessel curvature. The effect of time-averaged flow was investigated using unsteady, mean steady and root mean square (RMS) steady flow. The 3D unsteady model was regarded as the reference model. Results show that when using an unsteady or RMS flow, the predicted FFR hardly varies between models, contrary to using average flows. The 2D model with RMS flow has a high diagnostic accuracy (0.99), reduces computational time by a factor of 162,000, and the introduced model error is well below the clinically relevant differences. Stenosis severity, length, curvature and tapering cause most discrepancies when using a lower order model. An uncertainty analysis showed that this can be explained by the low variability that is caused by variations in stenosis asymmetry.

164. Nemmers T, Narayan A, Banerjee S. Bayesian modeling and uncertainty quantification for descriptive social networks. Stat Interface 2019; 12:181-191. PMID: 30662582; PMCID: PMC6335039; DOI: 10.4310/sii.2019.v12.n1.a15.
Abstract
This article presents a simple and easily implementable Bayesian approach to model and quantify uncertainty in small descriptive social networks. While statistical methods for analyzing networks have seen burgeoning activity over the last decade or so, ranging from social sciences to genetics, such methods usually involve sophisticated stochastic models whose estimation requires substantial structure and information in the networks. At the other end of the analytic spectrum, there are purely descriptive methods based upon quantities and axioms in computational graph theory. In social networks, popular descriptive measures include, but are not limited to, the so-called Krackhardt axioms. Another approach, recently gaining attention, is the use of PageRank algorithms. While these descriptive approaches provide insight into networks with limited information, including small networks, there is, as yet, little research detailing a statistical approach for small networks. This article aims to contribute at the interface of Bayesian statistical inference and social network analysis by offering practicing social scientists a relatively straightforward Bayesian approach to account for uncertainty while conducting descriptive social network analysis. The emphasis is on computational feasibility and easy implementation using existing R packages, such as sna and rjags, that are available from the Comprehensive R Archive Network (https://cran.r-project.org/). We analyze a network comprising 18 websites from the US and UK to discern transnational identities, previously analyzed using descriptive graph theory with no uncertainty quantification, using fully Bayesian model-based inference.
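PageRank, mentioned above as a descriptive measure, is a short power iteration on the link structure. A minimal sketch on a 3-node toy network, using the conventional damping factor 0.85 (the example graph is invented; the paper's network has 18 websites):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power-iteration PageRank on a dense adjacency matrix
    (rows = source, columns = target); dangling nodes get uniform links."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# tiny example: node 2 is linked to by both other nodes
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
r = pagerank(A)
```

A Bayesian treatment in the paper's spirit would place uncertainty on the edges and propagate it into a distribution over such rank vectors.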

165. Chen Y, Chang H, Meng J, Zhang D. Ensemble Neural Networks (ENN): A gradient-free stochastic method. Neural Netw 2018; 110:170-185. PMID: 30562650; DOI: 10.1016/j.neunet.2018.11.009.
Abstract
In this study, an efficient stochastic gradient-free method, the ensemble neural networks (ENN), is developed. In the ENN, the optimization process relies on covariance matrices rather than derivatives. The covariance matrices are calculated by the ensemble randomized maximum likelihood algorithm (EnRML), which is an inverse modeling method. The ENN is able to simultaneously provide estimations and perform uncertainty quantification since it is built under the Bayesian framework. The ENN is also robust to small training data size because the ensemble of stochastic realizations essentially enlarges the training dataset. This constitutes a desirable characteristic, especially for real-world engineering applications. In addition, the ENN does not require the calculation of gradients, which enables the use of complicated neuron models and loss functions in neural networks. We experimentally demonstrate the benefits of the proposed model, in particular showing that the ENN performs much better than traditional Bayesian neural networks (BNN). The EnRML in the ENN serves as a substitute for gradient-based optimization algorithms, which means that it can be directly combined with the feed-forward process in other existing (deep) neural networks, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), broadening future applications of the ENN.
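The covariance-based update can be illustrated with an ensemble-smoother sketch on a toy linear fit: parameters are updated through parameter-data and data-data covariances, with no gradients of the loss. This is a minimal sketch in the spirit of EnRML, not the paper's exact algorithm, and the data, noise level, and iteration count are all assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 2x + 1 plus noise. We fit w = (slope, intercept) without
# gradients, using covariance-based ensemble updates.
x = np.linspace(0, 1, 20)
y_obs = 2 * x + 1 + 0.05 * rng.standard_normal(20)

def forward(w):                                # model prediction for one member
    return w[0] * x + w[1]

n_ens, obs_var = 200, 0.05 ** 2
W = rng.normal(0, 2, size=(n_ens, 2))          # prior ensemble of parameters
for _ in range(10):                            # iterative ensemble update
    D = np.array([forward(w) for w in W])      # predicted data, (n_ens, 20)
    Wc, Dc = W - W.mean(0), D - D.mean(0)
    C_wd = Wc.T @ Dc / (n_ens - 1)             # parameter-data covariance
    C_dd = Dc.T @ Dc / (n_ens - 1)             # data-data covariance
    K = C_wd @ np.linalg.inv(C_dd + obs_var * np.eye(20))
    pert = y_obs + 0.05 * rng.standard_normal((n_ens, 20))  # perturbed obs
    W = W + (pert - D) @ K.T                   # derivative-free update

print("posterior mean parameters:", W.mean(0))   # should approach [2, 1]
```

The spread of the final ensemble provides the uncertainty quantification that a single point estimate would lack.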
|
166
|
Fossan FE, Sturdy J, Müller LO, Strand A, Bråten AT, Jørgensen A, Wiseth R, Hellevik LR. Uncertainty Quantification and Sensitivity Analysis for Computational FFR Estimation in Stable Coronary Artery Disease. Cardiovasc Eng Technol 2018; 9:597-622. [PMID: 30382522 DOI: 10.1007/s13239-018-00388-w] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Received: 06/27/2018] [Accepted: 10/12/2018] [Indexed: 12/21/2022]
Abstract
PURPOSE The main objectives of this study are to validate a reduced-order model for the estimation of the fractional flow reserve (FFR) index based on blood flow simulations that incorporate clinical imaging and patient-specific characteristics, and to assess the uncertainty of FFR predictions with respect to input data on a per patient basis. METHODS We consider 13 patients with symptoms of stable coronary artery disease for which 24 invasive FFR measurements are available. We perform an extensive sensitivity analysis on the parameters related to the construction of a reduced-order (hybrid 1D-0D) model for FFR predictions. Next, we define an optimal setting by comparing reduced-order model predictions with solutions based on the 3D incompressible Navier-Stokes equations. Finally, we characterize prediction uncertainty with respect to input data and identify the most influential inputs by means of sensitivity analysis. RESULTS Agreement between FFR computed by the reduced-order model and by the full 3D model was satisfactory, with a bias ([Formula: see text]) of [Formula: see text] at the 24 measured locations. Moreover, the uncertainty related to the factor by which peripheral resistance is reduced from baseline to hyperemic conditions proved to be the most influential parameter for FFR predictions, whereas uncertainty in stenosis geometry had a greater effect in cases with low FFR. CONCLUSION Model errors related to solving a simplified reduced-order model rather than a full 3D problem were small compared with uncertainty related to input data. Improved measurement of coronary blood flow has the potential to reduce uncertainty in computational FFR predictions significantly.
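The role of the hyperemic resistance-reduction factor can be illustrated with a deliberately crude 0D sketch: with venous pressure near zero and resistances in series, FFR = Pd/Pa = R_micro/(R_sten + R_micro). All numbers below (pressure, resistances, and the distribution of the reduction factor) are assumed for illustration; the paper uses a patient-specific hybrid 1D-0D model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal 0D series-resistance model of a stenosed coronary branch.
Pa = 90.0                         # aortic pressure, mmHg (assumed)
R_sten = 20.0                     # stenosis resistance (assumed units)
R_micro_rest = 80.0               # resting microvascular resistance (assumed)

# Propagate uncertainty in the hyperemic reduction factor, which the study
# identified as the most influential input for FFR predictions.
factor = rng.normal(0.25, 0.05, size=10_000).clip(0.1, 0.5)
R_micro_hyp = factor * R_micro_rest
ffr = R_micro_hyp / (R_sten + R_micro_hyp)    # FFR = Pd / Pa

print(f"FFR mean: {ffr.mean():.3f}")
print(f"FFR 95% interval: [{np.quantile(ffr, 0.025):.3f}, {np.quantile(ffr, 0.975):.3f}]")
print(f"mean hyperemic distal pressure: {Pa * ffr.mean():.1f} mmHg")
```

Even in this toy setting, the spread of the reduction factor translates into a wide FFR interval, mirroring the paper's finding that this input dominates prediction uncertainty.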
|
167
|
Bhattacharya P, Altai Z, Qasim M, Viceconti M. A multiscale model to predict current absolute risk of femoral fracture in a postmenopausal population. Biomech Model Mechanobiol 2018; 18:301-318. [PMID: 30276488 PMCID: PMC6418062 DOI: 10.1007/s10237-018-1081-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Received: 02/24/2018] [Accepted: 09/24/2018] [Indexed: 02/06/2023]
Abstract
Osteoporotic hip fractures are a major healthcare problem. Fall severity and bone strength are important risk factors of hip fracture. This study aims to obtain a mechanistic explanation for fracture risk as a function of these risk factors. A novel modelling approach is developed that combines models at different scales to overcome the challenge of a large space–time domain of interest and considers the variability of impact forces between potential falls in a subject. The multiscale model and its component models are verified with respect to numerical approximations made therein, the propagation of measurement uncertainties of model inputs is quantified, and model predictions are validated against experimental and clinical data. The main results are: the model-predicted absolute risk of current fracture (ARF0) ranged from 1.93 to 81.6% (median 36.1%) for subjects in a retrospective cohort of 98 postmenopausal British women (49 fracture cases and 49 controls); ARF0 was computed up to a precision of 1.92 percentage points (pp) due to numerical approximations made in the model; ARF0 possessed an uncertainty of 4.00 pp due to uncertainties in measuring model inputs; ARF0 classified observed fracture status in the above cohort with AUC = 0.852 (95% CI 0.753–0.918), 77.6% specificity (95% CI 63.4–86.5%) and 81.6% sensitivity (95% CI 68.3–91.1%). These results demonstrate that ARF0 can be computed using the model with sufficient precision to distinguish between subjects and that the novel mechanism of fracture risk determination based on fall dynamics, hip impact and bone strength can be considered validated.
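The load-versus-strength logic behind an ARF0-style risk can be sketched as a Monte Carlo probability that the hip impact force from a fall exceeds the subject's femoral strength, averaged over the variability of potential falls. Both the force distribution and the strength value below are assumed for illustration, not the paper's calibrated, subject-specific quantities.

```python
import numpy as np

rng = np.random.default_rng(3)

# Variability of impact forces between potential falls in one subject
# (lognormal parameters assumed for illustration).
n_falls = 100_000
impact_force = rng.lognormal(mean=np.log(3000.0), sigma=0.35, size=n_falls)  # N

# Subject-specific femoral strength, e.g. from a finite-element model
# (value assumed here).
bone_strength = 3500.0  # N

# Fracture risk = probability that a fall's impact exceeds bone strength.
risk = float(np.mean(impact_force > bone_strength))
print(f"estimated fracture risk for this subject: {100 * risk:.1f}%")
```

Repeating the calculation with perturbed inputs would quantify how measurement uncertainty (the paper's 4.00 pp figure) propagates into the risk estimate.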
|
168
|
Real-World Variability in the Prediction of Intracranial Aneurysm Wall Shear Stress: The 2015 International Aneurysm CFD Challenge. Cardiovasc Eng Technol 2018; 9:544-564. [PMID: 30203115 PMCID: PMC6290689 DOI: 10.1007/s13239-018-00374-2] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Received: 06/17/2018] [Accepted: 08/11/2018] [Indexed: 11/04/2022]
Abstract
Purpose Image-based computational fluid dynamics (CFD) is widely used to predict intracranial aneurysm wall shear stress (WSS), particularly with the goal of improving rupture risk assessment. Nevertheless, concern has been expressed over the variability of predicted WSS and inconsistent associations with rupture. Previous challenges, and studies from individual groups, have focused on individual aspects of the image-based CFD pipeline. The aim of this Challenge was to quantify the total variability of the whole pipeline. Methods 3D rotational angiography image volumes of five middle cerebral artery aneurysms were provided to participants, who were free to choose their segmentation methods, boundary conditions, and CFD solver and settings. Participants were asked to fill out a questionnaire about their solution strategies and experience with aneurysm CFD, and provide surface distributions of WSS magnitude, from which we objectively derived a variety of hemodynamic parameters. Results A total of 28 datasets were submitted, from 26 teams with varying levels of self-assessed experience. Wide variability of segmentations, CFD model extents, and inflow rates resulted in interquartile ranges of sac-average WSS up to 56%, which reduced to < 30% after normalizing by parent artery WSS. Sac-maximum WSS and low shear area were more variable, while rank-ordering of cases by low or high shear showed only modest consensus among teams. Experience was not a significant predictor of variability. Conclusions Wide variability exists in the prediction of intracranial aneurysm WSS. While segmentation and CFD solver techniques may be difficult to standardize across groups, our findings suggest that some of the variability in image-based CFD could be reduced by establishing guidelines for model extents, inflow rates, and blood properties, and by encouraging the reporting of normalized hemodynamic parameters.
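Why normalizing by parent-artery WSS shrinks between-team spread can be shown with synthetic numbers (not Challenge data): if teams differ mainly through inflow-rate choices, that choice scales WSS in both the sac and the parent artery, so the ratio is far less variable. All magnitudes below are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# 28 hypothetical team submissions; each team's inflow choice scales WSS
# everywhere, plus a small residual (segmentation/solver) variation.
n_teams = 28
inflow_scale = rng.lognormal(0, 0.3, size=n_teams)
sac_wss = 5.0 * inflow_scale * rng.lognormal(0, 0.05, n_teams)
parent_wss = 10.0 * inflow_scale * rng.lognormal(0, 0.05, n_teams)

def iqr_pct(x):
    """Interquartile range as a percentage of the median."""
    q1, q3 = np.percentile(x, [25, 75])
    return 100 * (q3 - q1) / np.median(x)

print(f"IQR of sac-average WSS:  {iqr_pct(sac_wss):.0f}% of median")
print(f"IQR after normalization: {iqr_pct(sac_wss / parent_wss):.0f}% of median")
```

The common inflow factor cancels in the ratio, which is the mechanism behind the Challenge's recommendation to report normalized hemodynamic parameters.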
|
169
|
Nikishova A, Veen L, Zun P, Hoekstra AG. Uncertainty Quantification of a Multiscale Model for In-Stent Restenosis. Cardiovasc Eng Technol 2018; 9:761-774. [PMID: 30136082 PMCID: PMC6290695 DOI: 10.1007/s13239-018-00372-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Received: 06/07/2018] [Accepted: 08/09/2018] [Indexed: 12/11/2022]
Abstract
Purpose Coronary artery stenosis, or abnormal narrowing, is a widespread and potentially fatal cardiac disease. After treatment by balloon angioplasty and stenting, restenosis may occur inside the stent due to excessive neointima formation. Simulations of in-stent restenosis can provide new insight into this process. However, uncertainties due to variability in patient-specific parameters must be taken into account. Methods We performed an uncertainty quantification (UQ) study on a complex two-dimensional in-stent restenosis model. We used a quasi-Monte Carlo method for UQ of the neointimal area, and the Sobol sensitivity analysis (SA) to estimate the proportions of aleatory and epistemic uncertainties and to determine the most important input parameters. Results We observe approximately 30% uncertainty in the mean neointimal area as simulated by the model. Depending on whether a fast initial endothelium recovery occurs, the proportion of the model variance due to natural variability ranges from 15 to 35%. The endothelium regeneration time is identified as the most influential model parameter. Conclusion The model output contains a moderate quantity of uncertainty, and the model precision can be increased by obtaining a more certain value on the endothelium regeneration time. We conclude that the quasi-Monte Carlo UQ and the Sobol SA are reliable methods for estimating uncertainties in the response of complicated multiscale cardiovascular models.
|
170
|
Marquis AD, Arnold A, Dean-Bernhoft C, Carlson BE, Olufsen MS. Practical identifiability and uncertainty quantification of a pulsatile cardiovascular model. Math Biosci 2018; 304:9-24. [PMID: 30017910 DOI: 10.1016/j.mbs.2018.07.001] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Received: 12/11/2017] [Revised: 05/01/2018] [Accepted: 07/02/2018] [Indexed: 11/17/2022]
Abstract
Mathematical models are essential tools to study how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (i.e., that a one-to-one mapping exists from the parameter space to the model output). In this study we present a novel workflow to calibrate a lumped-parameter model to left ventricular pressure and volume time series data. Key steps include using (1) literature and available data to determine nominal parameter values; (2) sensitivity analysis and subset selection to determine a set of identifiable parameters; (3) optimization to find a point estimate for identifiable parameters; and (4) frequentist and Bayesian UQ calculations to assess the predictive capability of the model. Our results show that it is possible to determine 5 identifiable model parameters that can be estimated from our experimental data from three rats, and that computed UQ intervals capture the measurement and model error.
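Step (2), sensitivity analysis and subset selection, can be sketched on a toy model y(t) = p0·exp(-p1·t) + p2 in which a fourth parameter p3 has no effect: scaled sensitivity columns are computed by finite differences, and parameters are greedily ranked by how much new information each column adds. This is a generic stand-in for the paper's workflow, with all nominal values assumed.

```python
import numpy as np

t = np.linspace(0, 5, 50)
p = np.array([2.0, 1.0, 0.5, 3.0])      # nominal parameter values (assumed)

def model(p):
    # p3 deliberately has no effect, so it is structurally unidentifiable.
    return p[0] * np.exp(-p[1] * t) + p[2] + 0.0 * p[3]

# Relative (scaled) sensitivities via central finite differences.
S = np.empty((t.size, p.size))
for j in range(p.size):
    h = 1e-6 * max(abs(p[j]), 1.0)
    pp, pm = p.copy(), p.copy()
    pp[j] += h
    pm[j] -= h
    S[:, j] = (model(pp) - model(pm)) / (2 * h) * p[j]

# Greedy subset selection: repeatedly pick the column with the largest
# residual norm after projecting out the columns already chosen.
chosen, R = [], S.copy()
for _ in range(p.size):
    norms = np.linalg.norm(R, axis=0)
    j = int(np.argmax(norms))
    if norms[j] < 1e-8:
        break                            # remaining parameters unidentifiable
    chosen.append(j)
    q = R[:, j] / norms[j]
    R = R - np.outer(q, q @ R)

print("identifiable parameters (ranked):", chosen)   # p3 is excluded
```

Only the selected subset would then be passed to the optimization and UQ stages.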
|
171
|
Dhamala J, Arevalo HJ, Sapp J, Horácek BM, Wu KC, Trayanova NA, Wang L. Quantifying the uncertainty in model parameters using Gaussian process-based Markov chain Monte Carlo in cardiac electrophysiology. Med Image Anal 2018; 48:43-57. [PMID: 29843078 DOI: 10.1016/j.media.2018.05.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Received: 10/05/2017] [Revised: 03/17/2018] [Accepted: 05/14/2018] [Indexed: 02/02/2023]
Abstract
Model personalization requires the estimation of patient-specific tissue properties in the form of model parameters from indirect and sparse measurement data. Moreover, a low-dimensional representation of the parameter space is needed, which often has a limited ability to reveal the underlying tissue heterogeneity. As a result, significant uncertainty can be associated with the estimated values of the model parameters which, if left unquantified, will lead to unknown variability in model outputs that will hinder their reliable clinical adoption. Probabilistic estimation of model parameters, however, remains an unresolved challenge. Direct Markov Chain Monte Carlo (MCMC) sampling of the posterior distribution function (pdf) of the parameters is infeasible because it involves repeated evaluations of the computationally expensive simulation model. To accelerate this inference, one popular approach is to construct a computationally efficient surrogate and sample from this approximation. However, by sampling from an approximation, efficiency is gained at the expense of sampling accuracy. In this paper, we address this issue by integrating surrogate modeling of the posterior pdf into accelerating the Metropolis-Hastings (MH) sampling of the exact posterior pdf. This is achieved by two main components: (1) construction of a Gaussian process (GP) surrogate of the exact posterior pdf by actively selecting training points that allow for a good global approximation accuracy with a focus on the regions of high posterior probability; and (2) use of the GP surrogate to improve the proposal distribution in MH sampling, in order to improve the acceptance rate. The presented framework is evaluated in its estimation of the local tissue excitability of a cardiac electrophysiological model in both synthetic data experiments and real data experiments.
In addition, the obtained posterior distributions of model parameters are interpreted in relation to the factors contributing to parameter uncertainty, including different low-dimensional representations of the parameter space, parameter non-identifiability, and parameter correlations.
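The key property, that a surrogate shapes the proposal while acceptance still evaluates the exact posterior, so the chain targets the correct distribution, can be shown in a minimal 1D sketch. The paper uses an actively trained Gaussian-process surrogate; here a simple kernel smoother stands in for it, and the target, design points, and proposal width are all assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(x):
    # "Expensive" exact log-posterior (a Gaussian toy target, N(1.5, 0.3)).
    return -0.5 * ((x - 1.5) / 0.3) ** 2

# Cheap surrogate: kernel smoother of log_post on a few design points.
xd = np.linspace(-2.0, 4.0, 15)
yd = log_post(xd)

def surrogate(x):
    w = np.exp(-0.5 * ((x - xd) / 0.4) ** 2)
    return float(np.sum(w * yd) / np.sum(w))

# Center an independence proposal on the surrogate's mode.
grid = np.linspace(-2.0, 4.0, 601)
mode = grid[int(np.argmax([surrogate(g) for g in grid]))]

def log_q(z):                           # proposal log-density (unnormalized)
    return -0.5 * ((z - mode) / 0.5) ** 2

# Metropolis-Hastings: propose from the surrogate-informed distribution,
# accept/reject against the exact posterior, so no bias is introduced.
x, chain = 0.0, []
for _ in range(5000):
    prop = rng.normal(mode, 0.5)
    log_alpha = (log_post(prop) - log_post(x)) - (log_q(prop) - log_q(x))
    if np.log(rng.random()) < log_alpha:
        x = prop
    chain.append(x)

print(f"posterior mean ~ {np.mean(chain[1000:]):.2f} (exact: 1.50)")
```

A better surrogate concentrates proposals where the posterior mass is, raising the acceptance rate without changing the stationary distribution.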
|
172
|
Warner JE, Norman RB, Blattnig SR. HZETRN radiation transport validation using balloon-based experimental data. Life Sci Space Res 2018; 17:23-31. [PMID: 29753410 DOI: 10.1016/j.lssr.2018.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/10/2018] [Revised: 02/12/2018] [Accepted: 02/21/2018] [Indexed: 06/08/2023]
Abstract
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. 
Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
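An interval-based validation metric of the kind described can be sketched as follows: the model-measurement difference is taken as zero when the prediction falls inside the measurement's uncertainty band, and otherwise as the signed distance to the nearest band edge, relative to the measured value. This is an illustrative simplification; the paper's metric also accounts for uncertainty in the energy at which the flux was measured.

```python
def interval_uncertainty(model, meas, meas_err):
    """Signed relative model difference against an uncertainty interval.

    Returns 0 if `model` lies within [meas - meas_err, meas + meas_err];
    otherwise the distance to the nearest edge divided by `meas`
    (negative values indicate under-prediction).
    """
    lo, hi = meas - meas_err, meas + meas_err
    if lo <= model <= hi:
        return 0.0
    edge = lo if model < lo else hi
    return (model - edge) / meas

print(interval_uncertainty(8.0, 10.0, 1.0))   # under-predicts: (8 - 9) / 10 = -0.1
print(interval_uncertainty(9.5, 10.0, 1.0))   # inside the band: 0.0
```

Averaging this quantity over a dataset, or inspecting the sign of its distribution, reveals exactly the kind of systematic under-prediction reported above.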
|
173
|
Breustedt B, Blanchardon E, Castellani CM, Etherington G, Franck D, Giussani A, Hofmann W, Lebacq AL, Li WB, Noßke D, Lopez MA. EURADOS work on internal dosimetry. Ann ICRP 2018; 47:75-82. [PMID: 29664321 DOI: 10.1177/0146645318756232] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Indexed: 11/17/2022]
Abstract
European Radiation Dosimetry Group (EURADOS) Working Group 7 is a network on internal dosimetry that brings together researchers from more than 60 institutions in 21 countries. The work of the group is organised into task groups that focus on different aspects, such as development and implementation of biokinetic models (e.g. for diethylenetriamine penta-acetic acid decorporation therapy), individual monitoring and the dose assessment process, Monte Carlo simulations for internal dosimetry, uncertainties in internal dosimetry, and internal microdosimetry. Several intercomparison exercises and training courses have been organised. The IDEAS guidelines, which describe - based on the International Commission on Radiological Protection's (ICRP) biokinetic models and dose coefficients - a structured approach to the assessment of internal doses from monitoring data, are maintained and updated by the group. In addition, Technical Recommendations for Monitoring Individuals for Occupational Intakes of Radionuclides have been elaborated on behalf of the European Commission, DG-ENER (TECHREC Project, 2014-2016, coordinated by EURADOS). Quality assurance of the ICRP biokinetic models by calculation of retention and excretion functions for different scenarios has been performed and feedback was provided to ICRP. An uncertainty study of the recent caesium biokinetic model quantified the overall uncertainties, and identified the sensitive parameters of the model. A report with guidance on the application of ICRP biokinetic models and dose coefficients is being drafted at present. These and other examples of the group's activities, which complement the work of ICRP, are presented.
|
174
|
Lagergren J, Reeder A, Hamilton F, Smith RC, Flores KB. Forecasting and Uncertainty Quantification Using a Hybrid of Mechanistic and Non-mechanistic Models for an Age-Structured Population Model. Bull Math Biol 2018; 80:1578-1595. [PMID: 29611108 DOI: 10.1007/s11538-018-0421-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 05/18/2017] [Accepted: 03/23/2018] [Indexed: 11/30/2022]
Abstract
In this paper, we present a new method for the prediction and uncertainty quantification of data-driven multivariate systems. Traditionally, either mechanistic or non-mechanistic modeling methodologies have been used for prediction; however, it is uncommon for the two to be incorporated together. We compare the forecast accuracy of mechanistic modeling, using Bayesian inference, a non-mechanistic modeling approach based on state space reconstruction, and a novel hybrid methodology composed of the two for an age-structured population data set. The data come from cannibalistic flour beetles, in which it is observed that the adults preying on the eggs and pupae result in non-equilibrium population dynamics. Uncertainty quantification methods for the hybrid models are outlined and illustrated for these data. We perform an analysis of the results from Bayesian inference for the mechanistic model and hybrid models to suggest reasons why hybrid modeling methodology may enable more accurate forecasts of multivariate systems than traditional approaches.
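The non-mechanistic half, forecasting by state-space reconstruction, can be sketched with a delay-embedding nearest-neighbour predictor whose output is then averaged with a mechanistic model's forecast to form a hybrid prediction. The sine series below is a stand-in for the flour-beetle data, and the "mechanistic model" here is simply the exact generating function; the embedding dimension and neighbour count are assumed.

```python
import numpy as np

# Toy observed time series (stand-in for the age-structured population data).
series = np.sin(0.3 * np.arange(200))
E = 3                                          # embedding dimension (assumed)

# Library of delay vectors and the value that followed each of them.
lib = np.array([series[i:i + E] for i in range(len(series) - E)])
succ = series[E:]

# Non-mechanistic forecast: average the successors of the nearest analogues
# of the most recent delay vector.
query = series[-E:]
dist = np.linalg.norm(lib - query, axis=1)
nn = np.argsort(dist)[:5]                      # 5 nearest analogues (assumed)
nonmech_forecast = succ[nn].mean()

# Hybrid: combine with the mechanistic model's one-step forecast.
mech_forecast = np.sin(0.3 * 200)              # exact model in this toy case
hybrid = 0.5 * (mech_forecast + nonmech_forecast)
print(f"hybrid one-step forecast: {hybrid:.3f} (truth {np.sin(0.3 * 200):.3f})")
```

In the paper, the weighting between the two components and its uncertainty are handled within the Bayesian framework rather than fixed at one half.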
|
175
|
Sjölund J, Eklund A, Özarslan E, Herberthson M, Bånkestad M, Knutsson H. Bayesian uncertainty quantification in linear models for diffusion MRI. Neuroimage 2018; 175:272-285. [PMID: 29604453 DOI: 10.1016/j.neuroimage.2018.03.059] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 11/16/2017] [Revised: 02/16/2018] [Accepted: 03/25/2018] [Indexed: 01/22/2023] Open
Abstract
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification.
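The closed-form result for affine quantities can be sketched for a generic linear signal model y = Xw + noise with a Gaussian prior on w: the posterior of w is Gaussian, so any derived quantity a·w has posterior mean a·m and variance aᵀVa. The toy basis matrix below is not an actual dMRI model, and the noise and prior scales are assumed.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic linear model y = X w + noise (X stands in for a dMRI basis).
n, d, sigma2, tau2 = 100, 4, 0.05 ** 2, 10.0 ** 2
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -0.5, 0.3, 0.0])
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Gaussian prior w ~ N(0, tau2 I) gives a closed-form Gaussian posterior
# N(m, V) for the coefficients.
V = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)
m = V @ X.T @ y / sigma2

# Any affine function a @ w of the coefficients is then Gaussian too,
# with mean a @ m and standard deviation sqrt(a @ V @ a).
a = np.array([1.0, 1.0, 0.0, 0.0])             # hypothetical derived quantity
q_mean, q_sd = float(a @ m), float(np.sqrt(a @ V @ a))
print(f"derived quantity: {q_mean:.3f} +/- {q_sd:.3f} (truth {a @ w_true:.1f})")
```

The per-subject standard deviation is exactly the kind of uncertainty the paper uses to downweight high-uncertainty subjects in a group analysis.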
|