76
Li H, Nan Y, Del Ser J, Yang G. Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation. Neural Comput Appl 2022; 35:22071-22085. [PMID: 37724130 PMCID: PMC10505106 DOI: 10.1007/s00521-022-08016-4]
Abstract
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that generates reliable uncertainty maps and accurate segmentation results and is robust to noise and image corruption. We use the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, the evidence is parameterized as a Dirichlet distribution, and the predicted probabilities are treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrate the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, the proposed framework maintains the advantages of low computational cost and easy implementation and shows potential for clinical application.
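The evidence-to-opinion mapping this abstract describes (evidence parameterized as a Dirichlet distribution under Subjective Logic) follows a standard recipe that fits in a few lines. The sketch below is a generic illustration of that recipe, not the authors' region-based segmentation implementation:

```python
import numpy as np

def edl_opinion(evidence):
    """Map non-negative per-class evidence to a subjective opinion.

    evidence: array of shape (..., K) with e_k >= 0 for each of K classes.
    Standard Subjective Logic / EDL parameterization:
      alpha_k = e_k + 1,  S = sum_k alpha_k,
      belief b_k = e_k / S,  uncertainty u = K / S,  p_k = alpha_k / S.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]
    alpha = evidence + 1.0                    # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)     # Dirichlet strength
    belief = evidence / S
    uncertainty = K / S[..., 0]
    prob = alpha / S                          # expected class probabilities
    return belief, uncertainty, prob

# No evidence -> maximal uncertainty (u = 1) and uniform probabilities.
b0, u0, p0 = edl_opinion([0.0, 0.0, 0.0, 0.0])
# Strong evidence for one class -> low uncertainty mass.
b1, u1, p1 = edl_opinion([96.0, 0.0, 0.0, 0.0])
```

By construction the belief masses and the uncertainty mass sum to one, so `u` can be read directly as a per-voxel uncertainty map.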
77
Liu G, Jin H, Li J, Hu X, Li J. A Bayesian deep learning method for freeway incident detection with uncertainty quantification. Accident Analysis and Prevention 2022; 176:106796. [PMID: 35985178 DOI: 10.1016/j.aap.2022.106796]
Abstract
Incident detection is fundamental for freeway management to reduce non-recurrent congestion and secondary incidents. Recently, machine learning technologies have made considerable progress in the incident detection field, but many still face challenges in uncertainty quantification due to the aleatoric uncertainty of traffic data and the epistemic uncertainty of model deviations. In this study, a Bayesian deep learning method was proposed for freeway incident detection with uncertainty quantification. A convolutional neural network variant was designed on a Bayesian framework, and Bayes by backpropagation and local reparameterization techniques were used to update the weights of the proposed model. The predictive uncertainty of the proposed method was modeled jointly by integrating the aleatoric and epistemic uncertainty. The proposed model was tested on the PORTAL dataset and compared with four benchmark models: standard normal deviate, wavelet neural network, long short-term memory neural network, and convolutional neural network. The results show that the proposed model outperforms the baseline methods in terms of accuracy, detection rate and false alarm rate. Perturbation experiments were used to test the robustness of the model against noise. The results indicated that the aleatoric uncertainty of the model remained almost constant under different noise levels. The proposed method may benefit future studies on uncertainty quantification when using machine learning methods in incident management and other fields in intelligent transportation systems.
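The local reparameterization technique mentioned in this abstract can be illustrated in miniature. The sketch below is generic (not the paper's CNN): for a Bayesian linear layer with a Gaussian weight posterior, it samples the pre-activations directly instead of sampling a full weight matrix, which is the point of the trick:

```python
import numpy as np

def local_reparam_linear(x, w_mu, w_logvar, rng):
    """Sample layer pre-activations via the local reparameterization trick.

    Instead of sampling W ~ N(w_mu, exp(w_logvar)) and computing x @ W,
    sample the pre-activations directly:
        a ~ N(x @ w_mu, (x**2) @ exp(w_logvar)),
    which has the same distribution but lower gradient variance.
    """
    act_mu = x @ w_mu
    act_var = (x ** 2) @ np.exp(w_logvar)
    eps = rng.standard_normal(act_mu.shape)
    return act_mu + np.sqrt(act_var) * eps

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16))          # minibatch of traffic features
w_mu = rng.standard_normal((16, 8)) * 0.1   # variational posterior mean
w_logvar = np.full((16, 8), -6.0)           # small variational variance

samples = np.stack([local_reparam_linear(x, w_mu, w_logvar, rng)
                    for _ in range(200)])
# Epistemic spread of the layer output across posterior samples:
epistemic_std = samples.std(axis=0)
```

Repeating such stochastic forward passes through the whole network is what yields the epistemic part of the predictive uncertainty the paper combines with the aleatoric part.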
78
Alifa M, Castruccio S, Bolster D, Bravo M, Crippa P. Information entropy tradeoffs for efficient uncertainty reduction in estimates of air pollution mortality. Environmental Research 2022; 212:113587. [PMID: 35654155 DOI: 10.1016/j.envres.2022.113587]
Abstract
Implementing effective policy to protect human health from the adverse effects of air pollution, such as premature mortality, requires reducing the uncertainty in health outcomes models. Here we present a novel method to reduce mortality uncertainty by increasing the amount of input data of air pollution and health outcomes, and then quantifying tradeoffs associated with the different data gained. We first present a study of long-term mortality from fine particulate matter (PM2.5) based on simulated data, followed by a real-world application of short-term PM2.5-related mortality in an urban area. We employ information yield curves to identify which variables more effectively reduce mortality uncertainty when increasing information. Our methodology can be used to explore how specific pollution scenarios will impact mortality and thus improve decision-making. The proposed framework is general and can be applied to any real case-scenario where knowledge in pollution, demographics, or health outcomes can be augmented through data acquisition or model improvements to generate more robust risk assessments.
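The idea of an information yield curve — diminishing entropy reduction per unit of added data — can be made concrete with the simplest possible case: estimating a Gaussian mean with known observation variance (an illustrative assumption, not the paper's mortality model). The posterior variance after n observations is sigma^2 / n, so each tenfold increase in data buys the same fixed entropy reduction:

```python
import math

def gaussian_entropy(var):
    """Differential entropy (in nats) of a Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

sigma2 = 4.0  # variance of a single (hypothetical) mortality-rate observation
# Entropy yield curve: H(n) = 0.5 * ln(2*pi*e*sigma2/n).
entropies = {n: gaussian_entropy(sigma2 / n) for n in (1, 10, 100)}

# Each tenfold increase in data yields the same entropy reduction,
# 0.5 * ln(10) nats, so the information gained per observation shrinks.
gain_1_to_10 = entropies[1] - entropies[10]
gain_10_to_100 = entropies[10] - entropies[100]
```

Comparing such curves across input variables (pollution, demographics, health outcomes) is what lets one ask which data acquisition reduces mortality uncertainty most efficiently.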
79
Zhang Z, Bagci U. Dynamic Linear Transformer for 3D Biomedical Image Segmentation. Machine Learning in Medical Imaging (MLMI Workshop) 2022; 13583:171-180. [PMID: 36780251 PMCID: PMC9911329 DOI: 10.1007/978-3-031-21014-3_18]
Abstract
Transformer-based neural networks have achieved promising performance on many biomedical image segmentation tasks due to the better global information modeling of the self-attention mechanism. However, most methods are still designed for 2D medical images and ignore the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we address these two research gaps, the lack of 3D methods and the computational complexity of Transformers, by proposing a novel Transformer architecture with an encoder-decoder style and linear complexity. Furthermore, we introduce a dynamic token concept to further reduce the number of tokens used in the self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmenter provides promising segmentation performance and accurate uncertainty quantification using a single annotation. Code is available at https://github.com/freshman97/LinTransUNet.
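The quadratic-to-linear reordering that linear attention relies on can be sketched generically. The version below uses the elu(x)+1 feature map of Katharopoulos et al.'s linear transformers, which may differ from this paper's exact formulation; the key point is that associativity replaces the n x n score matrix with a d x d summary:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention in O(n * d^2) instead of softmax's O(n^2 * d).

    With a positive feature map phi, attention becomes
        phi(Q) @ (phi(K).T @ V) / (phi(Q) @ phi(K).T @ 1),
    so phi(K).T @ V (a d x d_v matrix) is computed once instead of
    materializing the n x n attention matrix.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                        # (d, d_v): summarized keys/values
    z = Qf @ Kf.sum(axis=0)              # (n,): per-query normalizer
    return (Qf @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(1)
n, d = 512, 32                           # e.g. 512 tokens from a 3D volume
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
```

Because the weights are positive and normalized, each output row is a convex combination of value rows, just as in softmax attention.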
80
Vazquez J, Facelli JC. Conformal Prediction in Clinical Medical Sciences. Journal of Healthcare Informatics Research 2022; 6:241-252. [PMID: 35898853 PMCID: PMC9309105 DOI: 10.1007/s41666-021-00113-8]
Abstract
The use of machine learning (ML) and artificial intelligence (AI) applications in medicine has attracted a great deal of attention in the medical literature, but little is known about how to use Conformal Prediction (CP) to assess the accuracy of individual predictions in clinical applications. We performed a comprehensive search in SCOPUS® to find papers reporting the use of CP in clinical applications. We identified 14 such papers and briefly describe the methods and results reported in them. The literature reviewed shows that CP methods can be used in clinical applications to provide important insight into the accuracy of individual predictions. Unfortunately, the review also shows that most of the studies have been performed in isolation, without input from practicing clinicians, without comparisons among different approaches, and without attention to the socio-technical considerations that lead to clinical adoption.
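The core split-conformal construction that the reviewed papers build on fits in a few lines: calibrate a quantile of held-out residuals, then wrap it around each new prediction for a distribution-free coverage guarantee. The data here are synthetic placeholders, not from any of the reviewed studies:

```python
import numpy as np

def split_conformal_interval(cal_residuals, alpha=0.1):
    """Half-width of a split conformal prediction interval.

    cal_residuals: |y_i - yhat_i| on a held-out calibration set.
    Returns q such that [yhat - q, yhat + q] covers a fresh point with
    probability >= 1 - alpha (assuming exchangeability), using the
    ceil((n+1)(1-alpha))-th order statistic.
    """
    r = np.sort(np.asarray(cal_residuals, dtype=float))
    n = len(r)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return r[k - 1]

rng = np.random.default_rng(2)
# Hypothetical setup: a fitted model's absolute errors on 500 calibration
# patients and 2000 fresh test patients drawn from the same error law.
cal = np.abs(rng.normal(0.0, 1.0, 500))
test = np.abs(rng.normal(0.0, 1.0, 2000))
q = split_conformal_interval(cal, alpha=0.1)
coverage = float((test <= q).mean())   # should land near 0.90
```

The per-patient interval [yhat - q, yhat + q] is exactly the kind of individualized accuracy statement the review argues clinical ML applications need.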
81
De Meutter P, Delcloo AW. Uncertainty quantification of atmospheric transport and dispersion modelling using ensembles for CTBT verification applications. Journal of Environmental Radioactivity 2022; 250:106918. [PMID: 35653875 DOI: 10.1016/j.jenvrad.2022.106918]
Abstract
Airborne concentrations of specific radioactive xenon isotopes (referred to as "radioxenon") are monitored globally as part of the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, as these could be the signatures of a nuclear explosion. However, civilian nuclear facilities emit a regulated amount of radioxenon that can interfere with the very sensitive monitoring network. One approach to dealing with this civilian background of radioxenon for Treaty verification purposes is to explicitly simulate the expected radioxenon concentration from civilian sources at monitoring stations using atmospheric transport modelling. However, atmospheric transport modelling is prone to uncertainty, and the absence of an uncertainty quantification can limit its use for detection screening. In this paper, several ensembles are assessed that could provide an atmospheric transport modelling uncertainty quantification. These ensembles are validated with radioxenon observations, and recommendations are given for atmospheric transport modelling uncertainty quantification. Finally, the added value of an ensemble for detection screening is illustrated.
82
Verhaeghe J, Dhaese SAM, De Corte T, Vander Mijnsbrugge D, Aardema H, Zijlstra JG, Verstraete AG, Stove V, Colin P, Ongenae F, De Waele JJ, Van Hoecke S. Development and evaluation of uncertainty quantifying machine learning models to predict piperacillin plasma concentrations in critically ill patients. BMC Med Inform Decis Mak 2022; 22:224. [PMID: 36008808 PMCID: PMC9404625 DOI: 10.1186/s12911-022-01970-y]
Abstract
BACKGROUND Beta-lactam antimicrobial concentrations are frequently suboptimal in critically ill patients. Population pharmacokinetic (PopPK) modeling is the gold standard for predicting drug concentrations. However, currently available PopPK models often lack predictive accuracy, making them less suited to guide dosing regimen adaptations. Furthermore, many models developed for clinical applications lack uncertainty quantification. We therefore aimed to develop machine learning (ML) models for the prediction of piperacillin plasma concentrations that also provide uncertainty quantification, with a view to clinical practice. METHODS Blood samples for piperacillin analysis were prospectively collected from critically ill patients receiving continuous infusion of piperacillin/tazobactam. Interpretable ML models for the prediction of piperacillin concentrations were designed using CatBoost and Gaussian processes. Distribution-based uncertainty quantification was added to the CatBoost model using a proposed Quantile Ensemble method, usable for any model optimizing a quantile function. These models were subsequently evaluated using the distribution coverage error, a proposed interpretable uncertainty quantification calibration metric. Development and internal evaluation of the ML models were performed on the Ghent University Hospital database (752 piperacillin concentrations from 282 patients). The ML models were then compared with a published PopPK model on a database from the University Medical Centre of Groningen, where a different dosing regimen is used (46 piperacillin concentrations from 15 patients). RESULTS The best performing model was the CatBoost model, with an RMSE and R² of 31.94 and 0.64 for internal evaluation with the previous concentration, and 33.53 and 0.60 without it. Furthermore, the results demonstrate the added value of the proposed Quantile Ensemble model in providing clinically useful individualized uncertainty predictions, and show the limits of homoscedastic methods like Gaussian processes in clinical applications. CONCLUSIONS Our results show that ML models can consistently estimate piperacillin concentrations with acceptable and high predictive accuracy when dosing regimens identical to those in the training data are used, while providing highly relevant uncertainty predictions. However, generalization capabilities to other dosing schemes are limited. Notwithstanding, incorporating ML models into therapeutic drug monitoring programs seems promising, and the current work provides a basis for validating the model in clinical practice.
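The distribution coverage error is described here as a calibration metric comparing nominal and empirical interval coverage. A generic metric in that spirit (the paper's exact definition may differ) can be sketched as follows, using synthetic concentration data and a perfectly calibrated Gaussian predictor so the expected score is near zero:

```python
import numpy as np

def mean_coverage_error(y, quantile_preds, levels):
    """Mean absolute gap between nominal and empirical interval coverage.

    quantile_preds maps a central coverage level (e.g. 0.8) to a pair
    (lower, upper) of predicted interval bounds for each observation.
    """
    errs = []
    for level in levels:
        lo, hi = quantile_preds[level]
        empirical = float(np.mean((y >= lo) & (y <= hi)))
        errs.append(abs(empirical - level))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
y = rng.normal(50.0, 10.0, 5000)   # stand-in for measured concentrations
# A perfectly calibrated Gaussian predictor: intervals at the true quantiles.
z = {0.5: 0.6745, 0.8: 1.2816, 0.95: 1.9600}
preds = {lv: (np.full_like(y, 50.0 - z[lv] * 10.0),
              np.full_like(y, 50.0 + z[lv] * 10.0)) for lv in z}
mce = mean_coverage_error(y, preds, levels=(0.5, 0.8, 0.95))
```

A quantile-ensemble model that predicts heteroscedastic, per-patient intervals would be scored with the same machinery, just with patient-specific bounds.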
83
Miller DL, Becker EA, Forney KA, Roberts JJ, Cañadas A, Schick RS. Estimating uncertainty in density surface models. PeerJ 2022; 10:e13950. [PMID: 36032955 PMCID: PMC9415456 DOI: 10.7717/peerj.13950]
Abstract
Providing uncertainty estimates for predictions derived from species distribution models is essential for management but there is little guidance on potential sources of uncertainty in predictions and how best to combine these. Here we show where uncertainty can arise in density surface models (a multi-stage spatial modelling approach for distance sampling data), focussing on cetacean density modelling. We propose an extensible, modular, hybrid analytical-simulation approach to encapsulate these sources. We provide example analyses of fin whales Balaenoptera physalus in the California Current Ecosystem.
84
Stevens M, Sunseri I, Alexanderian A. Hyper-differential sensitivity analysis for inverse problems governed by ODEs with application to COVID-19 modeling. Math Biosci 2022; 351:108887. [PMID: 35970242 PMCID: PMC9374496 DOI: 10.1016/j.mbs.2022.108887]
Abstract
We consider inverse problems governed by systems of ordinary differential equations (ODEs) that contain uncertain parameters in addition to the parameters being estimated. In such problems, which are common in applications, it is important to understand the sensitivity of the solution of the inverse problem to the uncertain model parameters. It is also of interest to understand the sensitivity of the inverse problem solution to different types of measurements or parameters describing the experimental setup. Hyper-differential sensitivity analysis (HDSA) is a sensitivity analysis approach that provides tools for such tasks. We extend existing HDSA methods by developing methods for quantifying the uncertainty in the estimated parameters. Specifically, we propose a linear approximation to the solution of the inverse problem that allows efficiently approximating the statistical properties of the estimated parameters. We also explore the use of this linear model for approximate global sensitivity analysis. As a driving application, we consider an inverse problem governed by a COVID-19 model. We present comprehensive computational studies that examine the sensitivity of this inverse problem to several uncertain model parameters and different types of measurement data. Our results also demonstrate the effectiveness of the linear approximation model for uncertainty quantification in inverse problems and for parameter screening.
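The linear approximation described above can be illustrated on a toy scalar inverse problem (a generic sketch, not the paper's COVID-19 model): after a Gauss-Newton fit, the estimator covariance is approximated by the standard linearization sigma^2 * (J^T J)^{-1}, where J is the model Jacobian at the estimate:

```python
import numpy as np

# Toy inverse problem: estimate the decay rate theta in y(t) = exp(-theta*t)
# from noisy observations, then approximate the estimator's variance with
# the linearization Cov(theta_hat) ~= sigma^2 * (J^T J)^{-1}.
rng = np.random.default_rng(4)
theta_true, sigma = 0.3, 0.01
t = np.linspace(0.0, 10.0, 50)
y = np.exp(-theta_true * t) + rng.normal(0.0, sigma, t.size)

# Gauss-Newton iterations for the scalar parameter.
theta = 0.1
for _ in range(30):
    resid = y - np.exp(-theta * t)
    J = -t * np.exp(-theta * t)        # d(model)/d(theta)
    theta += (J @ resid) / (J @ J)

# Linearized variance of the estimated parameter.
J = -t * np.exp(-theta * t)
var_theta = sigma ** 2 / (J @ J)
```

In the HDSA setting, the same Jacobian machinery is additionally differentiated with respect to auxiliary (uncertain) parameters and measurement configurations, but the linear-propagation idea is the same.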
85
Oesterle J, Krämer N, Hennig P, Berens P. Probabilistic solvers enable a straight-forward exploration of numerical uncertainty in neuroscience models. J Comput Neurosci 2022; 50:485-503. [PMID: 35932442 PMCID: PMC9666333 DOI: 10.1007/s10827-022-00827-7]
Abstract
Understanding neural computation at the mechanistic level requires models of neurons and neuronal networks. To analyze such models, one typically has to solve coupled ordinary differential equations (ODEs) that describe the dynamics of the underlying neural system. These ODEs are solved numerically with deterministic ODE solvers that yield single solutions with either no error indicator, or only a global scalar one. It can therefore be challenging to estimate the effect of numerical uncertainty on quantities of interest, such as spike times and the number of spikes. To overcome this problem, we propose to use recently developed sampling-based probabilistic solvers, which are able to quantify such numerical uncertainties. They neither require detailed insights into the kinetics of the models, nor are they difficult to implement. We show that numerical uncertainty can affect the outcome of typical neuroscience simulations, e.g. by jittering spikes by milliseconds or even adding or removing individual spikes from simulations altogether, and demonstrate that probabilistic solvers reveal these numerical uncertainties with only moderate computational overhead.
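A minimal stand-in for such sampling-based probabilistic solvers is an Euler integrator whose step sizes are randomly jittered; re-running it yields an ensemble whose spread reflects numerical uncertainty in a derived quantity such as a threshold-crossing ("spike") time. This is an illustrative toy, not the solvers the paper actually uses:

```python
import numpy as np

def perturbed_euler_crossing(f, y0, h, threshold, rng, t_max=5.0, scale=0.1):
    """First time y' = f(y) crosses `threshold`, integrated with Euler
    steps of randomly jittered size. Repeated runs give an ensemble whose
    spread reflects numerical (solver) uncertainty in the crossing time."""
    t, y = 0.0, y0
    while t < t_max:
        dt = max(h * (1.0 + scale * rng.standard_normal()), 1e-6)
        y_new = y + dt * f(y)
        if y < threshold <= y_new:
            # linear interpolation of the crossing inside the step
            return t + dt * (threshold - y) / (y_new - y)
        t, y = t + dt, y_new
    return np.inf

# Leaky integrator driven toward 1.5; y(t) = 1.5*(1 - exp(-t)) crosses the
# "spike" threshold y = 1 at exactly ln(3) ~ 1.0986.
f = lambda y: 1.5 - y
rng = np.random.default_rng(5)
crossings = np.array([perturbed_euler_crossing(f, 0.0, 0.05, 1.0, rng)
                      for _ in range(300)])
spread = crossings.std()   # numerical uncertainty in the spike time
```

The ensemble mean exposes the solver's bias relative to the analytic crossing time, and the spread gives exactly the kind of per-quantity numerical error bar that a single deterministic run cannot.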
86
Upadhyay K, Giovanis DG, Alshareef A, Knutsen AK, Johnson CL, Carass A, Bayly PV, Shields MD, Ramesh K. Data-driven Uncertainty Quantification in Computational Human Head Models. Computer Methods in Applied Mechanics and Engineering 2022; 398:115108. [PMID: 37994358 PMCID: PMC10664838 DOI: 10.1016/j.cma.2022.115108]
Abstract
Computational models of the human head are promising tools for estimating the impact-induced response of the brain, and thus play an important role in the prediction of traumatic brain injury. The basic constituents of these models (i.e., model geometry, material properties, and boundary conditions) are often associated with significant uncertainty and variability. As a result, uncertainty quantification (UQ), which involves quantification of the effect of this uncertainty and variability on the simulated response, becomes critical to ensure reliability of model predictions. Modern biofidelic head model simulations are associated with very high computational cost and high-dimensional inputs and outputs, which limits the applicability of traditional UQ methods on these systems. In this study, a two-stage, data-driven manifold learning-based framework is proposed for UQ of computational head models. This framework is demonstrated on a 2D subject-specific head model, where the goal is to quantify uncertainty in the simulated strain fields (i.e., output), given variability in the material properties of different brain substructures (i.e., input). In the first stage, a data-driven method based on multi-dimensional Gaussian kernel-density estimation and diffusion maps is used to generate realizations of the input random vector directly from the available data. Computational simulations of a small number of realizations provide input-output pairs for training data-driven surrogate models in the second stage. The surrogate models employ nonlinear dimensionality reduction using Grassmannian diffusion maps, Gaussian process regression to create a low-cost mapping between the input random vector and the reduced solution space, and geometric harmonics models for mapping between the reduced space and the Grassmann manifold. It is demonstrated that the surrogate models provide highly accurate approximations of the computational model while significantly reducing the computational cost. Monte Carlo simulations of the surrogate models are used for uncertainty propagation. UQ of the strain fields highlights significant spatial variation in model uncertainty, and reveals key differences in uncertainty among commonly used strain-based brain injury predictor variables.
87
Olivares A, Staffetti E. Robust optimal control of compartmental models in epidemiology: Application to the COVID-19 pandemic. Communications in Nonlinear Science & Numerical Simulation 2022; 111:106509. [PMID: 35437340 PMCID: PMC9007991 DOI: 10.1016/j.cnsns.2022.106509]
Abstract
In this paper, a spectral approach is used to formulate and solve robust optimal control problems for compartmental epidemic models, allowing the uncertainty propagation through the optimal control model to be represented by a polynomial expansion of its stochastic state variables. More specifically, a statistical moment-based polynomial chaos expansion is employed. The spectral expansion of the stochastic state variables allows the computation of their main statistics to be carried out, resulting in a compact and efficient representation of the variability of the optimal control model with respect to its random parameters. The proposed robust formulation provides the designers of the optimal control strategy of the epidemic model the capability to increase the predictability of the results by simply adding upper bounds on the variability of the state variables. Moreover, this approach yields a way to efficiently estimate the probability distributions of the stochastic state variables and conduct a global sensitivity analysis. To show the practical implementation of the proposed approach, a mathematical model of COVID-19 transmission is considered. The numerical results show that the spectral approach proposed to formulate and solve robust optimal control problems for compartmental epidemic models provides healthcare systems with a valuable tool to mitigate and control the impact of infectious diseases.
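A one-dimensional version of a moment-based polynomial chaos expansion shows the mechanics the abstract describes: project the stochastic response onto probabilists' Hermite polynomials and read the mean and variance off the coefficients. The response below is a toy log-normal function, not the epidemic model:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_moments(g, degree=6, nquad=40):
    """Mean and variance of g(xi), xi ~ N(0, 1), from a 1D polynomial
    chaos expansion in probabilists' Hermite polynomials He_k:
        c_k = E[g(xi) He_k(xi)] / k!,
        E[g] = c_0,   Var[g] = sum_{k>=1} c_k**2 * k!.
    """
    x, w = hermegauss(nquad)        # quadrature for weight exp(-x**2 / 2)
    w = w / np.sqrt(2.0 * np.pi)    # renormalize to the N(0, 1) density
    gx = g(x)
    facts = np.cumprod([1.0] + list(range(1, degree + 1)))  # k! for k=0..degree
    coeffs = np.array([np.sum(w * gx * hermeval(x, [0.0] * k + [1.0])) / facts[k]
                       for k in range(degree + 1)])
    return float(coeffs[0]), float(np.sum(coeffs[1:] ** 2 * facts[1:]))

# Toy log-normal response: g(xi) = exp(xi/2) has exact mean exp(1/8)
# and variance exp(1/2) - exp(1/4).
mean, var = pce_moments(lambda x: np.exp(0.5 * x))
```

In the robust optimal control setting, the same expansion is applied to each stochastic state variable, and bounding the variance term is what the abstract's "upper bounds on the variability" amounts to.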
88
Ahsan MA, Qayyum A, Razi A, Qadir J. An active learning method for diabetic retinopathy classification with uncertainty quantification. Med Biol Eng Comput 2022; 60:2797-2811. [PMID: 35859243 DOI: 10.1007/s11517-022-02633-w]
Abstract
In recent years, deep learning (DL) techniques have provided state-of-the-art performance in medical imaging. However, good quality (annotated) medical data are in general hard to find due to the usually high cost of medical images, the limited availability of expert annotators (e.g., radiologists), and the amount of time required for annotation. In addition, DL is data-hungry and its training requires extensive computational resources. Furthermore, DL is a black-box method that lacks transparency into its inner workings and the reasoning behind its decisions, which increases the uncertainty of its predictions. To address these challenges, we propose a hybrid model that uses a Bayesian convolutional neural network (BCNN) for uncertainty quantification and an active learning approach for annotating the unlabeled data. The BCNN is used as a feature descriptor and these features are then used for training a model in an active learning setting. We evaluate the proposed framework on the diabetic retinopathy classification problem and demonstrate state-of-the-art performance in terms of different metrics.
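Combining a Bayesian network's stochastic predictions with an active learning acquisition rule is commonly done via predictive entropy and BALD mutual information; the sketch below is a generic version of that recipe (not necessarily this paper's acquisition function), operating on T stochastic softmax outputs such as those from a BCNN:

```python
import numpy as np

def bald_scores(probs):
    """Acquisition scores from T stochastic forward passes.

    probs: array (T, N, C) of softmax outputs, e.g. from a Bayesian CNN.
    Returns (predictive_entropy, mutual_information); high mutual
    information flags inputs worth sending to a human annotator.
    """
    mean_p = probs.mean(axis=0)                                   # (N, C)
    pred_entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    exp_entropy = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=2),
                           axis=0)
    return pred_entropy, pred_entropy - exp_entropy               # BALD

# Two synthetic cases with 5 classes and T = 100 posterior samples:
T, C = 100, 5
confident = np.tile(np.eye(C)[0], (T, 1, 1))                # always class 0
rng = np.random.default_rng(6)
disagreeing = np.eye(C)[rng.integers(0, C, T)][:, None, :]  # samples disagree
probs = np.concatenate([confident, disagreeing], axis=1)    # (T, 2, C)
H, I = bald_scores(probs)
```

The first case (all samples agree) gets near-zero scores; the second (samples confidently disagree) gets high mutual information, i.e. high epistemic uncertainty, so it would be prioritized for labeling.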
89
Amato F, Guignard F, Walch A, Mohajeri N, Scartezzini JL, Kanevski M. Spatio-temporal estimation of wind speed and wind power using extreme learning machines: predictions, uncertainty and technical potential. Stochastic Environmental Research and Risk Assessment 2022; 36:2049-2069. [PMID: 36101650 PMCID: PMC9463360 DOI: 10.1007/s00477-022-02219-w]
Abstract
With wind power providing an increasing amount of electricity worldwide, the quantification of its spatio-temporal variations and the related uncertainty is crucial for energy planners and policy-makers. Here, we propose a methodological framework which (1) uses machine learning to reconstruct a spatio-temporal field of wind speed on a regular grid from spatially irregularly distributed measurements and (2) transforms the wind speed to wind power estimates. Estimates of both model and prediction uncertainties, and of their propagation after transforming wind speed to power, are provided without any assumptions on data distributions. The methodology is applied to study hourly wind power potential on a grid of 250 × 250 m² for turbines of 100 m hub height in Switzerland, generating the first dataset of its type for the country. We show that the average annual power generation per turbine is 4.4 GWh. Results suggest that around 12,000 wind turbines could be installed on all 19,617 km² of available area in Switzerland, resulting in a maximum technical wind potential of 53 TWh. To achieve the Swiss expansion goals of wind power for 2050, around 1000 turbines would be sufficient, corresponding to only 8% of the maximum estimated potential. The online version contains supplementary material available at 10.1007/s00477-022-02219-w.
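The extreme learning machine at the core of this framework is compact: a random, frozen hidden layer plus a single ridge least-squares solve for the output weights. The sketch below uses toy spatio-temporal data standing in for the wind measurements:

```python
import numpy as np

def elm_fit_predict(X_train, y_train, X_test, n_hidden=200, ridge=1e-3,
                    seed=0):
    """Extreme learning machine regression: a random (frozen) tanh hidden
    layer, with output weights from one ridge least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_train.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X_train @ W + b)
    A = H.T @ H + ridge * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ y_train)   # only trained parameters
    return np.tanh(X_test @ W + b) @ beta

# Toy field: wind speed as a smooth function of coordinates (x, y, t).
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (1000, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * np.cos(3 * X[:, 1]) + 0.3 * X[:, 2]
pred = elm_fit_predict(X[:800], y[:800], X[800:])
rmse = float(np.sqrt(np.mean((pred - y[800:]) ** 2)))
```

Because only the linear output weights are fitted, retraining across many bootstrap resamples is cheap, which is what makes distribution-free model and prediction uncertainty estimates tractable at this grid resolution.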
90
Zerenner T, Di Lauro F, Dashti M, Berthouze L, Kiss IZ. Probabilistic predictions of SIS epidemics on networks based on population-level observations. Math Biosci 2022; 350:108854. [PMID: 35659615 DOI: 10.1016/j.mbs.2022.108854]
Abstract
We predict the future course of ongoing susceptible-infected-susceptible (SIS) epidemics on regular, Erdős-Rényi and Barabási-Albert networks. It is known that the contact network influences the spread of an epidemic within a population. Therefore, observations of an epidemic, in this case at the population-level, contain information about the underlying network. This information, in turn, is useful for predicting the future course of an ongoing epidemic. To exploit this in a prediction framework, the exact high-dimensional stochastic model of an SIS epidemic on a network is approximated by a lower-dimensional surrogate model. The surrogate model is based on a birth-and-death process; the effect of the underlying network is described by a parametric model for the birth rates. We demonstrate empirically that the surrogate model captures the intrinsic stochasticity of the epidemic once it reaches a point from which it will not die out. Bayesian parameter inference allows for uncertainty about the model parameters and the class of the underlying network to be incorporated directly into probabilistic predictions. An evaluation of a number of scenarios shows that in most cases the resulting prediction intervals adequately quantify the prediction uncertainty. As long as the population-level data is available over a long-enough period, even if not sampled frequently, the model leads to excellent predictions where the underlying network is correctly identified and prediction uncertainty mainly reflects the intrinsic stochasticity of the spreading epidemic. For predictions inferred from shorter observational periods, uncertainty about parameters and network class dominate prediction uncertainty. The proposed method relies on minimal data at population-level, which is always likely to be available. This, combined with its numerical efficiency, makes the proposed method attractive to be used either as a standalone inference and prediction scheme or in conjunction with other inference and/or predictive models.
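The population-level birth-and-death surrogate described above can be simulated exactly with the Gillespie algorithm. The rate model below is a standard mean-field choice for illustration, not necessarily the paper's fitted parametric form for the birth rates:

```python
import numpy as np

def gillespie_sis_bd(N, k0, beta, gamma, t_max, rng):
    """Exact simulation of a birth-and-death surrogate for an SIS epidemic.

    State is k = number infected out of N, with
      birth (infection) rate a_k = beta * k * (N - k) / N,
      death (recovery)  rate c_k = gamma * k.
    Returns the number infected at t_max (0 if the epidemic died out).
    """
    t, k = 0.0, k0
    while t < t_max and k > 0:
        birth = beta * k * (N - k) / N
        death = gamma * k
        total = birth + death
        t += rng.exponential(1.0 / total)
        if t >= t_max:
            break
        if rng.random() < birth / total:
            k += 1
        else:
            k -= 1
    return k

rng = np.random.default_rng(8)
# Well above threshold (R0 = beta/gamma = 4): the surviving epidemic
# fluctuates around the endemic level N * (1 - 1/R0) = 150.
finals = np.array([gillespie_sis_bd(200, 20, 2.0, 0.5, 10.0, rng)
                   for _ in range(100)])
```

Repeated runs of this chain generate exactly the kind of stochastic prediction ensemble from which the paper's prediction intervals are built; network structure enters by reshaping the birth-rate curve a_k.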
91
Dunne M, Mohammadi H, Challenor P, Borgo R, Porphyre T, Vernon I, Firat EE, Turkay C, Torsney-Weir T, Goldstein M, Reeve R, Fang H, Swallow B. Complex model calibration through emulation, a worked example for a stochastic epidemic model. Epidemics 2022; 39:100574. [PMID: 35617882 PMCID: PMC9109972 DOI: 10.1016/j.epidem.2022.100574]
Abstract
Uncertainty quantification is a formal paradigm of statistical estimation that aims to account for all uncertainties inherent in the modelling process of real-world complex systems. The methods are directly applicable to stochastic models in epidemiology; however, they have thus far not been widely used in this context. In this paper, we provide a tutorial on uncertainty quantification of stochastic epidemic models, aiming to facilitate the use of the uncertainty quantification paradigm for practitioners with other complex stochastic simulators of applied systems. We provide a formal workflow including the important decisions and considerations that need to be taken, and illustrate the methods on a simple stochastic epidemic model of UK SARS-CoV-2 transmission and patient outcome. We also present new approaches to visualisation of outputs from sensitivity analyses and uncertainty quantification more generally in high input and/or output dimensions.
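The emulation step at the heart of such calibration workflows — replacing an expensive simulator with a Gaussian process fitted to a handful of runs — can be sketched in plain NumPy. The one-dimensional "simulator" and the fixed kernel hyperparameters below are toy assumptions for illustration:

```python
import numpy as np

def gp_emulator(X, y, Xs, lengthscale=0.3, signal=1.0, noise=1e-6):
    """Gaussian process emulator with an RBF kernel: posterior mean and
    variance at test inputs Xs, given simulator runs (X, y)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal * np.exp(-0.5 * d2 / lengthscale ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = signal - (v ** 2).sum(axis=0)   # posterior (emulator) variance
    return mean, var

# Stand-in "expensive simulator": output as a function of one rate input.
simulator = lambda x: np.tanh(3.0 * (x - 0.5))
X = np.linspace(0.0, 1.0, 8)[:, None]     # 8 affordable simulator runs
y = simulator(X[:, 0])
Xs = np.linspace(0.0, 1.0, 50)[:, None]
mean, var = gp_emulator(X, y, Xs)
max_err = float(np.max(np.abs(mean - simulator(Xs[:, 0]))))
```

The emulator variance flags regions of input space where more simulator runs are needed; history matching and calibration then query the cheap emulator instead of the simulator itself.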
92
Carpio A, Pierret E. Uncertainty quantification in Covid-19 spread: Lockdown effects. Results in Physics 2022; 35:105375. [PMID: 35280115 PMCID: PMC8897887 DOI: 10.1016/j.rinp.2022.105375]
Abstract
We develop a Bayesian inference framework to quantify uncertainties in epidemiological models. We use SEIJR and SIJR models, involving populations of susceptible, exposed, infective, diagnosed, dead and recovered individuals, to infer from Covid-19 data the rate constants, as well as their variations in response to lockdown measures. To account for confinement, we distinguish two susceptible populations at different risk: confined and unconfined. We show that transmission and recovery rates within them vary in response to the measures taken, and that the diagnosis rate is quite low, which leads to large numbers of undiagnosed infective individuals. A key unknown for predicting the evolution of the epidemic is the fraction of the population affected by the virus, including asymptomatic subjects. Our study tracks its time evolution with quantified uncertainty from available official data, limited, however, by the data quality. We exemplify the technique with data from Spain, a country in which late drastic lockdowns were enforced for months during the first wave of the current pandemic. When action is late and no other measures are in place, spread is delayed but not stopped unless a large enough fraction of the population is confined until the asymptomatic population is depleted. In adequate circumstances, confinement can to some extent be replaced by strong distancing through masks.
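A minimal sketch of the kind of compartmental structure the abstract describes, with confined and unconfined susceptible groups transmitting at different rates and a low diagnosis rate feeding an undiagnosed infective pool. The equations and parameter values below are illustrative, not the paper's fitted SEIJR model:

```python
import numpy as np

def seijr_step(state, p, dt=0.1):
    # state: (S_u, S_c, E, I, J, R) as population fractions —
    # unconfined/confined susceptible, exposed, undiagnosed infective,
    # diagnosed infective, removed; one explicit Euler step
    Su, Sc, E, I, J, R = state
    lam_u = p['beta_u'] * I          # force of infection, unconfined
    lam_c = p['beta_c'] * I          # reduced transmission under confinement
    dSu = -lam_u * Su
    dSc = -lam_c * Sc
    dE  = lam_u * Su + lam_c * Sc - p['sigma'] * E
    dI  = p['sigma'] * E - (p['delta'] + p['gamma']) * I
    dJ  = p['delta'] * I - p['gamma'] * J
    dR  = p['gamma'] * (I + J)
    return [x + dt * dx for x, dx in zip(state, (dSu, dSc, dE, dI, dJ, dR))]

# illustrative rates: low diagnosis rate delta relative to transmission
p = dict(beta_u=0.5, beta_c=0.05, sigma=0.25, delta=0.05, gamma=0.1)
state = [0.3, 0.69, 0.005, 0.005, 0.0, 0.0]   # 69% of susceptibles confined
for _ in range(1000):                          # integrate to t = 100
    state = seijr_step(state, p)
```

Because the derivatives sum to zero, the total population fraction is conserved at every step — a quick sanity check on any compartmental implementation.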
93
Torchio R, Arduino A, Zilberti L, Bottauscio O. A fast tool for the parametric analysis of human body exposed to LF electromagnetic fields in biomedical applications. Computer Methods and Programs in Biomedicine 2022; 214:106543. [PMID: 34861616 DOI: 10.1016/j.cmpb.2021.106543] [Received: 07/20/2021] [Revised: 11/02/2021] [Accepted: 11/15/2021] [Indexed: 06/13/2023]
Abstract
A numerical procedure for analyzing electromagnetic (EM) field interactions with biological tissues is presented. The proposed approach aims at drastically reducing the computational burden required by the repeated solution of large-scale problems involving the interaction of the human body with EM fields, such as in the study of the time evolution of EM fields, uncertainty quantification, and inverse problems. The proposed volume integral equation (VIE), focused on low-frequency applications, is a system of integral equations in terms of current density and scalar potential in the biological tissues excited by EM fields and/or electrodes connected to the human body. The proposed formulation requires the voxelization of the human body and takes advantage of the regularity of such a discretization to speed up the computational procedure. Moreover, it exploits recent advancements in the solution of VIEs by means of iterative preconditioned solvers and ad hoc parametric Model Order Reduction techniques. The efficiency of the proposed tool is demonstrated by applying it to a couple of realistic model problems: the assessment of peripheral nerve stimulation, performed in terms of the induced electric field, due to the gradient coils of a magnetic resonance imaging scanner during a clinical examination, and the assessment of the exposure to environmental fields at 50 Hz of live-line workers with uncertain properties of the biological tissues. Thanks to the proposed method, uncertainty quantification analyses and time-domain simulations are possible even for large-scale problems, and they can be performed on standard computers in reasonable computation time. A sample implementation of the method is publicly available at https://github.com/UniPD-DII-ETCOMP/BioMOR.
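The key computational idea — exploiting the regularity of a voxel discretization — can be illustrated in one dimension: on a uniform grid, an integral-operator matrix whose entries depend only on voxel separation is Toeplitz, so a matrix-vector product can be evaluated with FFTs in O(n log n) instead of forming the dense matrix. A numpy sketch (the regularised 1/r kernel is a generic stand-in, not the paper's VIE kernel):

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    # multiply a Toeplitz matrix by x via circulant embedding of size 2n
    n = len(x)
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))[:n]
    return y.real

n = 64
h = 1.0 / n
dist = np.arange(n) * h
kernel = 1.0 / (4 * np.pi * (dist + h))   # regularised 1/r interaction kernel
x = np.random.default_rng(1).standard_normal(n)

fast = toeplitz_matvec_fft(kernel, kernel, x)
# dense reference: symmetric Toeplitz matrix built entry by entry
dense = np.array([[kernel[abs(i - j)] for j in range(n)] for i in range(n)]) @ x
```

The same structure generalises to 3-D voxel grids (block-Toeplitz with Toeplitz blocks), which is what makes iterative solvers on full-body models tractable.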
94
MejiaCruz Y, Jiang Z, Caicedo JM, Franco JM. Probabilistic Force Estimation and Event Localization (PFEEL) algorithm. Engineering Structures 2022; 252:113535. [PMID: 35645429 PMCID: PMC9138175 DOI: 10.1016/j.engstruct.2021.113535] [Indexed: 06/15/2023]
Abstract
Localization of human activity using floor vibrations has gained attention in recent years. In human health technologies, floor vibrations have recently been used to estimate gait parameters to predict a patient's health status. Various methodologies, such as using the characteristics of travelling waves (time-of-arrival algorithms) or the properties of structures (the Force Estimation and Event Localization, FEEL, algorithm), have been investigated to localize impact, fall, or step events. This paper presents a probabilistic approach that builds upon the FEEL algorithm and offers the advantage of eliminating the need for a robust experimental setup. The proposed Probabilistic Force Estimation and Event Localization (PFEEL) algorithm provides a probabilistic measure of an event's force estimate and location using random variables associated with the floor's dynamics. The algorithm can also guide calibration by identifying the calibration points that provide the maximum information. This reduces the number of calibration points needed, which has practical benefits during implementation. In this manuscript, we present the design, development, and validation of the algorithm.
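A hedged sketch of the underlying idea — treating the floor's dynamic properties as random variables and propagating that uncertainty into the force estimate. The single-channel model, the Gaussian uncertainty on the frequency-response value, and every number below are hypothetical illustrations, not the PFEEL formulation itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical calibrated frequency-response magnitude H relating an impact
# force f to a measured floor response y, with calibration uncertainty on H
H_mean, H_sd = 2.0, 0.2
noise_sd = 0.05
f_true = 1.5
y = H_mean * f_true + rng.normal(0, noise_sd)   # one synthetic measurement

# Monte Carlo propagation: each draw of the uncertain dynamics gives a
# candidate force estimate, yielding a distribution rather than a point value
H_samples = rng.normal(H_mean, H_sd, 10_000)
f_samples = y / H_samples
f_est = np.median(f_samples)
lo, hi = np.percentile(f_samples, [2.5, 97.5])
```

The interval (lo, hi) is the kind of probabilistic measure a point-estimate method like deterministic FEEL does not provide.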
95
Russo M, Scarpa B. Learning in Medicine: The Importance of Statistical Thinking. Methods Mol Biol 2022; 2486:215-232. [PMID: 35437725 DOI: 10.1007/978-1-0716-2265-0_11] [Indexed: 06/14/2023]
Abstract
In many fields, including medicine and biology, recent years have seen an increasing diffusion and availability of complex data from different sources. Examples include biological experiments and data from health care providers. These data encompass information that can potentially enhance therapeutic advancement and constitute the core of modern systems medicine. When analyzing these complex data, it is important to appropriately quantify uncertainty, avoiding purely algorithmic and automated approaches, which are not always appropriate. Improper application of algorithmic approaches that ignore domain knowledge could fill the literature with imprecise and/or misleading conclusions. In this chapter, we highlight the importance of statistical thinking when leveraging complex data and models to advance science. In particular, we discuss reproducibility and replicability issues, the importance of uncertainty quantification, and some common pitfalls in the analysis of big data.
96
Libotte GB, dos Anjos L, Almeida RCC, Malta SMC, Silva RS. Framework for enhancing the estimation of model parameters for data with a high level of uncertainty. Nonlinear Dynamics 2022; 107:1919-1936. [PMID: 35017792 PMCID: PMC8736321 DOI: 10.1007/s11071-021-07069-9] [Received: 05/08/2021] [Accepted: 11/15/2021] [Indexed: 05/07/2023]
Abstract
Reliable data are essential to obtain adequate simulations for forecasting the dynamics of epidemics. In this context, several political, economic, and social factors may cause inconsistencies in the reported data, which limit the capacity for realistic simulations and predictions. In the case of COVID-19, for example, such uncertainties are mainly caused by large-scale underreporting of cases due to reduced testing capacity in some locations. In order to mitigate the effects of noise in the data used to estimate model parameters, we propose strategies capable of improving the ability to predict the spread of the disease. Using a compartmental model in a COVID-19 case study, we show that regularizing the data by means of Gaussian process regression can reduce the variability of successive forecasts, improving predictive ability. We also present the advantages of adopting compartmental-model parameters that vary over time, rather than the usual approach with constant values.
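The regularization step can be sketched with a plain numpy Gaussian process smoother applied to noisy incidence counts before any parameter fitting. The synthetic epidemic curve, noise level, and kernel hyperparameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(60.0)
true = 1000 * np.exp(-0.5 * ((t - 30) / 10) ** 2)   # smooth epidemic curve
noisy = true + rng.normal(0, 80, t.size)             # reporting noise

def gp_smooth(t, y, ell=5.0, sig2=1e4, noise2=80.0 ** 2):
    # GP posterior mean with a squared-exponential kernel: the noise
    # variance on the diagonal is what regularises the raw counts
    d = t[:, None] - t[None, :]
    K = sig2 * np.exp(-0.5 * (d / ell) ** 2)
    return K @ np.linalg.solve(K + noise2 * np.eye(t.size), y)

smooth = gp_smooth(t, noisy)
```

Fitting a compartmental model to `smooth` rather than `noisy` is exactly the kind of pre-processing that stabilises successive forecasts.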
97
Matveeva A, Leonenko V. Application of Gaussian process regression as a surrogate modeling method to assess the dynamics of COVID-19 propagation. Procedia Computer Science 2022; 212:340-347. [PMID: 36437869 PMCID: PMC9682405 DOI: 10.1016/j.procs.2022.11.018] [Indexed: 11/25/2022]
Abstract
In this research, we assessed the possibility of using surrogate modeling methods to replace time-consuming calculations in modeling COVID-19 dynamics. Gaussian process regression (GPR) was used as a surrogate to replace detailed simulations by a COVID-19 multiagent model. Experiments were conducted with various kernels; according to the models' quality metrics, the Rational Quadratic and Additive kernels gave the surrogate its most accurate results. It was also demonstrated that smoothing the dynamics of COVID-19 propagation makes it possible to achieve greater accuracy in GPR training. The results demonstrate the potential of surrogate modeling methods for uncertainty quantification of the multiagent model of COVID-19 propagation.
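A minimal numpy version of such a surrogate using the Rational Quadratic kernel; a cheap analytic function stands in for the expensive multiagent simulator, and the hyperparameters are illustrative assumptions:

```python
import numpy as np

def rq_kernel(a, b, ell=0.3, alpha=2.0):
    # Rational Quadratic covariance: a scale mixture of RBF kernels
    d2 = (a[:, None] - b[None, :]) ** 2
    return (1 + d2 / (2 * alpha * ell ** 2)) ** (-alpha)

f = lambda x: np.sin(3 * x) + 0.5 * x     # stand-in for the slow simulator
X = np.linspace(0, 2, 12)                  # a handful of expensive runs
y = f(X)

def predict(Xs, jitter=1e-6):
    # GP posterior mean: interpolates the training runs at new inputs
    K = rq_kernel(X, X) + jitter * np.eye(X.size)
    return rq_kernel(Xs, X) @ np.linalg.solve(K, y)

pred = predict(np.array([0.77]))
```

Once fitted, the surrogate answers queries anywhere in the input range at negligible cost, which is what makes large Monte Carlo uncertainty studies affordable.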
98
Deneer A, Fleck C. Mathematical Modelling in Plant Synthetic Biology. Methods Mol Biol 2022; 2379:209-251. [PMID: 35188665 DOI: 10.1007/978-1-0716-1791-5_13] [Indexed: 06/14/2023]
Abstract
Mathematical modelling techniques are integral to current research in plant synthetic biology. Modelling approaches can provide mechanistic understanding of a system, allowing predictions of behaviour and thus providing a tool to help design and analyse biological circuits. In this chapter, we provide an overview of mathematical modelling methods and their significance for plant synthetic biology. Starting with the basics of dynamics, we describe the process of constructing a model over both temporal and spatial scales and highlight crucial approaches, such as stochastic modelling and model-based design. Next, we focus on the model parameters and the techniques required in parameter analysis. We then describe the process of selecting a model based on tests and criteria and proceed to methods that allow closer analysis of the system's behaviour. Finally, we highlight the importance of uncertainty in modelling approaches and how to deal with a lack of knowledge, noisy data, and biological variability; all aspects that play a crucial role in the cooperation between the experimental and modelling components. Overall, this chapter aims to illustrate the importance of mathematical modelling in plant synthetic biology, providing an introduction for those researchers who are working with or working on modelling techniques.
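The stochastic-modelling approach the chapter highlights can be illustrated with a Gillespie simulation of a minimal gene-expression circuit: constitutive production plus first-order degradation of a single species. The rate constants are illustrative, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie(k_prod=10.0, k_deg=0.5, t_end=50.0):
    # exact stochastic simulation of 0 -> X (rate k_prod) and
    # X -> 0 (rate k_deg * n); returns the copy number at t_end
    t, n = 0.0, 0
    while t < t_end:
        rates = np.array([k_prod, k_deg * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # time to next reaction
        if rng.random() < rates[0] / total:     # pick which reaction fires
            n += 1
        else:
            n -= 1
    return n

samples = [gillespie() for _ in range(200)]
mean_n = float(np.mean(samples))
```

At stationarity this birth-death process is Poisson with mean k_prod/k_deg = 20, so replicate runs expose exactly the biological variability the chapter discusses alongside deterministic ODE models.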
99
Wang R, Chen H, Guan C. A Bayesian inference-based approach for performance prognostics towards uncertainty quantification and its applications on the marine diesel engine. ISA Transactions 2021; 118:159-173. [PMID: 33642029 DOI: 10.1016/j.isatra.2021.02.024] [Received: 08/03/2020] [Revised: 01/28/2021] [Accepted: 02/09/2021] [Indexed: 06/12/2023]
Abstract
In this paper, Bayesian analysis is introduced for the performance prognostics of the marine diesel engine, addressing the uncertainty of inferences and results through probability distributions. Two Bayesian models are presented: a Bayesian neural network model implements health monitoring, while a Bayesian logistic regression model quantifies the run-to-failure process of the marine diesel engine. Variational Inference and Markov Chain Monte Carlo algorithms learn and infer the two models' parameters, respectively. Additionally, by analyzing characteristics of the marine diesel engine, instantaneous angular speed signals are selected as the condition monitoring data, which can be used to indirectly predict the indicated mean effective pressure and further assess the performance of the marine engine. To verify the superiority of the proposed framework based on the Bayesian models and indirect estimation, operational datasets from a real engine under normal and fault conditions are acquired. The proposed framework and other conventional methods are applied to the acquired data. The results demonstrate that the proposed approach is superior to the other methods and has the potential to serve as an online condition monitoring tool for the performance prognostics of the marine diesel engine.
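The second of the two models — Bayesian logistic regression with MCMC inference — can be sketched with a plain Metropolis sampler on synthetic data. The one-dimensional degradation indicator, the prior, and every setting below are illustrative assumptions, not the engine dataset or the paper's sampler:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic data: a scalar degradation indicator x and binary failure labels y
x = np.linspace(-2, 2, 100)
y = (rng.random(100) < 1 / (1 + np.exp(-(0.5 + 2.0 * x)))).astype(float)

def log_post(b0, b1):
    # logistic log-likelihood plus a weak Gaussian prior on the coefficients
    z = b0 + b1 * x
    loglik = np.sum(y * z - np.log1p(np.exp(z)))
    return loglik - (b0 ** 2 + b1 ** 2) / 20

# random-walk Metropolis over (b0, b1)
theta = np.zeros(2)
lp = log_post(*theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.3, 2)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
b1_mean = float(np.mean([c[1] for c in chain[1000:]]))
```

The posterior samples give failure probabilities with credible intervals rather than point predictions, which is the uncertainty quantification the paper argues for.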
100
Albani RAS, Albani VVL, Migon HS, Silva Neto AJ. Uncertainty quantification and atmospheric source estimation with a discrepancy-based and a state-dependent adaptative MCMC. Environmental Pollution 2021; 290:118039. [PMID: 34467885 DOI: 10.1016/j.envpol.2021.118039] [Received: 05/15/2021] [Revised: 08/05/2021] [Accepted: 08/22/2021] [Indexed: 06/13/2023]
Abstract
We address the source characterization of atmospheric releases using adaptive strategies in Bayesian inference, in combination with the numerical solution of the dispersion problem by a stabilized finite element method and uncertainty quantification in the measurements. The adaptive techniques accelerate the convergence of Markov chain Monte Carlo (MCMC) algorithms, leading to accurate reconstructions of the source parameters, as illustrated by comparison with results from previous works. Moreover, the technique used to simulate the corresponding dispersion problem allows us to incorporate relevant meteorological information. Uncertainty quantification also improves the quality of the reconstructions. Numerical examples using data from the Copenhagen experimental campaign illustrate the effectiveness of the proposed methodology. We found reconstruction errors ranging from 0.11% to 8.67% of the size of the search region, similar to results found in previous works using deterministic techniques, with comparable computational time.
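The adaptive ingredient can be sketched with a Haario-style adaptive Metropolis sampler, in which the proposal covariance is periodically re-estimated from the chain history. The correlated 2-D Gaussian target below is a stand-in for the actual source-parameter posterior, and the adaptation schedule is a simplified illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# stand-in target: a correlated 2-D Gaussian posterior
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(target_cov)
log_pi = lambda th: -0.5 * th @ prec @ th

theta = np.zeros(2)
lp = log_pi(theta)
chain = [theta.copy()]
cov = 0.1 * np.eye(2)                        # fixed warm-up proposal covariance
for i in range(1, 8000):
    if i >= 500 and i % 200 == 0:
        # adapt: rescale the empirical covariance of the chain so far
        # (full Haario AM updates this recursively at every step)
        cov = 2.4 ** 2 / 2 * np.cov(np.array(chain).T) + 1e-6 * np.eye(2)
    prop = rng.multivariate_normal(theta, cov)
    lp_prop = log_pi(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

emp_cov = np.cov(np.array(chain[2000:]).T)   # post-burn-in sample covariance
```

Because the proposal learns the posterior's correlation structure, the adapted sampler mixes far faster on elongated targets than a fixed isotropic random walk, which is the convergence acceleration the abstract refers to.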