351
|
Mathematical analysis of the effects of controls on transmission dynamics of SARS-CoV-2. Alexandria Engineering Journal 2020; 59. [PMCID: PMC7524680] [DOI: 10.1016/j.aej.2020.09.033]
Abstract
COVID-19, an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in Wuhan, China, in late 2019 and subsequently plagued the world. We developed a deterministic model to study the transmission dynamics of the disease with two categories of susceptibles (i.e., immigrant susceptibles and local susceptibles). The model is shown to have a globally stable disease-free equilibrium whenever the basic reproduction number R0 is less than unity. The endemic equilibrium is also shown to be globally stable for R0>1 under some conditions. The spread of the disease is shown to be highly sensitive to the use of PPE and personal hygiene (d), the transmission probability (β), the average number of contacts of an infected person per unit time (day) (c), the rate at which the exposed develop clinical symptoms (δ), and the rate of recovery (ρ). Numerical simulations of the model are presented to illustrate the analytical results.
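As a concrete illustration of how such a threshold and its parameter sensitivities can be computed, the sketch below builds a next-generation matrix for a generic SEIR-style model with a PPE/hygiene factor (1 - d). The compartment structure and parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def r0_seir(beta, c, d, delta, rho):
    """Basic reproduction number via the next-generation matrix for a
    simple SEIR-style model (illustrative parameterisation, not the
    paper's model).  F holds new-infection terms and V holds transition
    terms for the infected compartments (E, I)."""
    # Force of infection scaled by contact rate c and PPE/hygiene factor (1 - d)
    F = np.array([[0.0, beta * c * (1.0 - d)],
                  [0.0, 0.0]])
    V = np.array([[delta, 0.0],
                  [-delta, rho]])
    K = F @ np.linalg.inv(V)                  # next-generation matrix
    return max(abs(np.linalg.eigvals(K)))     # spectral radius = R0

base = dict(beta=0.05, c=10.0, d=0.3, delta=1 / 5.2, rho=0.1)
r0 = r0_seir(**base)                          # analytically beta*c*(1-d)/rho here
```

In this toy parameterisation R0 reduces to βc(1 - d)/ρ, so raising d or lowering β or c directly lowers R0, matching the sensitivity pattern the abstract describes.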
|
352
|
Lemecha Obsu L, Feyissa Balcha S. Optimal control strategies for the transmission risk of COVID-19. Journal of Biological Dynamics 2020; 14:590-607. [PMID: 32696723] [DOI: 10.1080/17513758.2020.1788182]
Abstract
In this paper, we apply optimal control theory to a novel coronavirus (COVID-19) transmission model given by a system of non-linear ordinary differential equations. Optimal control strategies are obtained by minimizing the number of exposed and infected individuals while accounting for the cost of implementation. The existence and characterization of optimal controls are established using Pontryagin's Maximum Principle. An expression for the basic reproduction number is derived in terms of the control variables, and the sensitivity of the basic reproduction number with respect to the model parameters is analysed. Numerical simulation results show good agreement with the analytical results. Finally, the findings of this study show that the combined effects of prevention, intensive medical care, and surface disinfection strategies are most effective in reducing the epidemic at optimal implementation cost.
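The forward-backward sweep that typically accompanies Pontryagin's Maximum Principle can be sketched on a toy one-compartment outbreak control problem (hypothetical dynamics, weights, and bounds, not the paper's COVID-19 model): minimize the integral of I + w*u^2 subject to dI/dt = beta*(1-u)*I*(1-I) - gamma*I.

```python
import numpy as np

# Illustrative problem: J = \int_0^T (I + w*u^2) dt,  0 <= u <= umax.
beta, gamma, w, umax, T, n = 0.5, 0.1, 0.1, 0.9, 50.0, 2000
dt = T / n

def forward(u, I0=0.01):                     # state equation, forward Euler
    I = np.empty(n + 1); I[0] = I0
    for k in range(n):
        I[k + 1] = I[k] + dt * (beta * (1 - u[k]) * I[k] * (1 - I[k]) - gamma * I[k])
    return I

def backward(I, u):                          # adjoint equation, lam(T) = 0
    lam = np.zeros(n + 1)
    for k in range(n, 0, -1):
        dHdI = 1.0 + lam[k] * (beta * (1 - u[k - 1]) * (1 - 2 * I[k - 1]) - gamma)
        lam[k - 1] = lam[k] + dt * dHdI
    return lam

cost = lambda I, u: dt * np.sum((I + w * u ** 2)[:-1])

u = np.zeros(n + 1)
for _ in range(60):                          # forward-backward sweep
    I = forward(u)
    lam = backward(I, u)
    u_star = lam * beta * I * (1 - I) / (2 * w)   # stationarity: dH/du = 0
    u = 0.5 * u + 0.5 * np.clip(u_star, 0.0, umax)  # relaxed update

controlled = cost(forward(u), u)
uncontrolled = cost(forward(u * 0), u * 0)
```

The relaxation factor 0.5 is a common stabilisation device for these sweeps; the optimal control should never cost more than doing nothing.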
|
353
|
Saegerman C, Bianchini J, Snoeck CJ, Moreno A, Chiapponi C, Zohari S, Ducatez MF. First expert elicitation of knowledge on drivers of emergence of influenza D in Europe. Transbound Emerg Dis 2020; 68:3349-3359. [PMID: 33249766] [DOI: 10.1111/tbed.13938]
Abstract
The influenza D virus (IDV) was first identified and characterized in 2011. Interest in IDV has increased because of the virus's zoonotic potential, its genome structure (segmented RNA virus), its worldwide circulation in livestock and its role in bovine respiratory disease. However, few data are available on drivers of emergence of IDV. We first listed fifty possible drivers of emergence of IDV in ruminants and swine. As recently carried out for COVID-19 in pets (Transboundary and Emerging Diseases, 2020), a scoring system was developed per driver and scientific experts (N = 28) were elicited to (a) allocate a score to each driver, (b) weight the drivers' scores within each domain and (c) weight the different domains among themselves. An overall weighted score was calculated per driver, and drivers were ranked in decreasing order. Drivers with comparable likelihoods of playing a role in the emergence of IDV in ruminants and swine in Europe were grouped using regression tree analysis. Finally, the robustness of the expert elicitation was verified. Eight drivers were ranked with the highest probability of playing a key role in the emergence of IDV: the current species specificity of the causative agent; the influence of (il)legal movements of live animals (ruminants, swine) from neighbouring/European Union member states and from third countries on (re-)emergence in a given country; detection of emergence; current knowledge of the pathogen; vaccine availability; animal density; and transport vehicles for live animals. As scientific knowledge on the topic is still limited, expert elicitation of knowledge and multi-criteria decision analysis, complemented by clustering and sensitivity analyses, are very important for prioritizing future studies, starting from the top eight drivers. The present methodology could be applied to other emerging animal diseases.
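The score-then-weight aggregation described above can be sketched in a few lines; the driver names, scores, and weights below are made-up placeholders, not the elicited values.

```python
# Hypothetical elicitation output: each driver gets an expert score, a
# within-domain weight, and the weight of its domain.  The overall
# weighted score is their product, and drivers are ranked by it.
drivers = {
    "species specificity":   (4, 0.30, 0.40),   # (score, driver wt, domain wt)
    "live-animal movements": (3, 0.25, 0.40),
    "vaccine availability":  (2, 0.20, 0.35),
    "animal density":        (5, 0.15, 0.25),
}
overall = {d: s * wd * wD for d, (s, wd, wD) in drivers.items()}
ranking = sorted(overall, key=overall.get, reverse=True)
```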
|
354
|
Arya S, Todman H, Baker M, Hooton S, Millard A, Kreft JU, Hobman JL, Stekel DJ. A generalised model for generalised transduction: the importance of co-evolution and stochasticity in phage mediated antimicrobial resistance transfer. FEMS Microbiol Ecol 2020; 96:fiaa100. [PMID: 32490523] [DOI: 10.1093/femsec/fiaa100]
Abstract
Antimicrobial resistance is a major global challenge. Of particular concern are mobilizable elements that can transfer resistance genes between bacteria, leading to pathogens with new combinations of resistance. To date, mathematical models have largely focussed on transfer of resistance by plasmids, with fewer studies on transfer by bacteriophages. We aim to understand how best to model transfer of resistance by transduction by lytic phages. We show that models of lytic bacteriophage infection with empirically derived realistic phage parameters lead to low numbers of bacteria, which, in low-population or localised environments, lead to extinction of bacteria and phage. Models that include antagonistic co-evolution of phage and bacteria produce more realistic results. Furthermore, because of these low numbers, stochastic dynamics are shown to be important, especially for the spread of resistance. When resistance is introduced, it can sometimes become fixed and at other times die out, with the probability of each outcome sensitive to bacterial and phage parameters. Specifically, the outcome depends most strongly on the baseline death rate of bacteria, with phage-mediated spread favoured in benign environments with low mortality over more hostile environments. We conclude that larger-scale models should consider spatial compartmentalisation and heterogeneous microenvironments, while encompassing stochasticity and co-evolution.
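The stochastic dynamics the authors emphasise are usually simulated with Gillespie's algorithm. The sketch below is a minimal phage-bacteria version with three events (division, death, lytic infection); the rates, burst size, and resource cap are illustrative assumptions, not the paper's fitted parameters.

```python
import random

def gillespie(b0, p0, t_end, growth=1.0, death=0.1, adsorb=1e-3, burst=50, seed=1):
    """Minimal Gillespie simulation of lytic phage predation on bacteria.
    Events: bacterial division, bacterial death, and infection (one
    bacterium lyses and releases `burst` new phage)."""
    rng = random.Random(seed)
    t, b, p = 0.0, b0, p0
    while t < t_end:
        rates = [growth * b, death * b, adsorb * b * p]   # divide, die, infect
        total = sum(rates)
        if total == 0.0:                 # absorbing state (extinction)
            break
        t += rng.expovariate(total)      # time to next event
        r = rng.uniform(0.0, total)
        if r < rates[0]:
            b += 1
        elif r < rates[0] + rates[1]:
            b -= 1
        else:
            b -= 1
            p += burst - 1               # one phage consumed, `burst` released
        b = min(b, 2000)                 # crude resource cap on bacteria
    return b, p

counts = gillespie(100, 10, 6.0)
```

Because the event sequence is random, repeated runs with different seeds give different outcomes, which is exactly how fixation-versus-extinction probabilities of a resistance gene would be estimated.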
|
355
|
Depaoli S, Winter SD, Visser M. The Importance of Prior Sensitivity Analysis in Bayesian Statistics: Demonstrations Using an Interactive Shiny App. Front Psychol 2020; 11:608045. [PMID: 33324306] [PMCID: PMC7721677] [DOI: 10.3389/fpsyg.2020.608045]
Abstract
The current paper highlights a new, interactive Shiny App that can be used to aid in understanding and teaching the important task of conducting a prior sensitivity analysis when implementing Bayesian estimation methods. In this paper, we discuss the importance of examining prior distributions through a sensitivity analysis. We argue that conducting a prior sensitivity analysis is equally important when so-called diffuse priors are implemented as it is with subjective priors. As a proof of concept, we conducted a small simulation study, which illustrates the impact of priors on final model estimates. The findings from the simulation study highlight the importance of conducting a sensitivity analysis of priors. This concept is further extended through an interactive Shiny App that we developed. The Shiny App allows users to explore the impact of various forms of priors using empirical data. We introduce this Shiny App and thoroughly detail an example using a simple multiple regression model that users at all levels can understand. In this paper, we highlight how to determine the different settings for a prior sensitivity analysis, how to visually and statistically compare results obtained in the sensitivity analysis, and how to display findings and write up disparate results obtained across the sensitivity analysis. The goal is that novice users can follow the process outlined here and work within the interactive Shiny App to gain a deeper understanding of the role of prior distributions and the importance of a sensitivity analysis when implementing Bayesian methods. The intended audience is broad (e.g., undergraduate or graduate students, faculty, and other researchers) and can include those with limited exposure to Bayesian methods or the specific model presented here.
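The kind of prior sensitivity check the app automates can be sketched for a conjugate Bayesian linear regression, where the posterior mean is available in closed form. The simulated data, known noise variance, and the grid of prior variances below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + 1 predictor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def posterior_mean(tau2, sigma2=1.0):
    """Posterior mean for beta ~ N(0, tau2*I) and y ~ N(X beta, sigma2*I)."""
    A = X.T @ X / sigma2 + np.eye(2) / tau2
    return np.linalg.solve(A, X.T @ y / sigma2)

# Prior sensitivity analysis: refit under increasingly diffuse priors.
fits = {tau2: posterior_mean(tau2) for tau2 in (0.01, 1.0, 100.0)}
```

A tight prior (small tau2) shrinks the estimates toward zero, while a diffuse prior recovers essentially the least-squares fit; comparing the rows of `fits` is the tabular analogue of the app's visual comparison.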
|
356
|
Hsu CH, He Y, Hu C, Zhou W. A multiple imputation-based sensitivity analysis approach for data subject to missing not at random. Stat Med 2020; 39:3756-3771. [PMID: 32717095] [PMCID: PMC10481859] [DOI: 10.1002/sim.8691]
Abstract
The missingness mechanism is, in theory, unverifiable based only on observed data. If missingness not at random is suspected, researchers often perform a sensitivity analysis to evaluate the impact of various missingness mechanisms. In general, sensitivity analysis approaches require a full specification of the relationship between missing values and missingness probabilities. Such a relationship can be specified based on a selection model, a pattern-mixture model or a shared parameter model. Under the selection modeling framework, we propose a sensitivity analysis approach using a nonparametric multiple imputation strategy. The proposed approach only requires specifying the correlation coefficient between missing values and selection (response) probabilities under a selection model. The correlation coefficient is a standardized measure and can be used as a natural sensitivity analysis parameter. The sensitivity analysis involves multiple imputations of missing values, yet the sensitivity parameter is only used to select imputing/donor sets; hence, the proposed approach may be more robust against misspecification of the sensitivity parameter. For illustration, the proposed approach is applied to incomplete measurements of preoperative hemoglobin A1c levels for patients who had high-grade carotid artery stenosis and were scheduled for surgery. A simulation study is conducted to evaluate the performance of the proposed approach.
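The donor-selection idea can be sketched with a toy hot-deck scheme in which a single sensitivity parameter tilts which observed values are drawn as donors. The data, the rank-exponential tilting function, and the parameter grid are made up for illustration; the paper's actual approach works through the correlation between missing values and response probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(10.0, 2.0, size=200)        # full data (for simulation only)
miss = rng.random(200) < 0.3               # indicator of missingness
donors = np.sort(y[~miss])                 # observed values serve as donors

def impute_mean(delta, m=20):
    """Multiply-imputed completed-data mean.  delta tilts donor selection
    toward larger values (delta = 0 reduces to MAR-style hot deck)."""
    ranks = np.arange(1, donors.size + 1) / donors.size
    wts = np.exp(delta * ranks)
    wts /= wts.sum()
    means = [np.concatenate([y[~miss],
                             rng.choice(donors, size=int(miss.sum()), p=wts)]).mean()
             for _ in range(m)]             # m imputed datasets, Rubin-style
    return float(np.mean(means))

# Sensitivity analysis: sweep the tilt and watch the estimate move.
sweep = {delta: impute_mean(delta) for delta in (-2.0, 0.0, 2.0)}
```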
|
357
|
Pathmanathan P, Galappaththige SK, Cordeiro JM, Kaboudian A, Fenton FH, Gray RA. Data-Driven Uncertainty Quantification for Cardiac Electrophysiological Models: Impact of Physiological Variability on Action Potential and Spiral Wave Dynamics. Front Physiol 2020; 11:585400. [PMID: 33329034] [PMCID: PMC7711195] [DOI: 10.3389/fphys.2020.585400]
Abstract
Computational modeling of cardiac electrophysiology (EP) has recently transitioned from a scientific research tool to clinical applications. To ensure the reliability of clinical or regulatory decisions made using cardiac EP models, it is vital to evaluate the uncertainty in model predictions. Model predictions are uncertain because there is typically substantial uncertainty in model input parameters, due to measurement error or natural variability. While there has been much recent uncertainty quantification (UQ) research for cardiac EP models, all previous work has been limited by either: (i) considering uncertainty in only a subset of the full set of parameters; and/or (ii) assigning arbitrary variation to parameters (e.g., ±10 or 50% around the mean value) rather than basing the parameter uncertainty on experimental data. In our recent work we overcame the first limitation by performing UQ and sensitivity analysis using a novel canine action potential model, allowing all parameters to be uncertain but with arbitrary variation. Here, we address the second limitation by extending our previous work to use data-driven estimates of parameter uncertainty. Overall, we estimated uncertainty due to population variability in all parameters of five currents active during repolarization: the inward rectifier potassium, transient outward potassium, L-type calcium, and rapidly and slowly activating delayed rectifier potassium currents; 25 parameters in total (all model parameters except fast sodium current parameters). A variety of methods was used to estimate the variability in these parameters. We then propagated the uncertainties through the model to determine their impact on predictions of action potential shape, action potential duration (APD) prolongation due to drug block, and spiral wave dynamics. Parameter uncertainty had a significant effect on model predictions, especially for L-type calcium current parameters. Correlation between physiological parameters was found to play a role in the physiological realism of action potentials. Surprisingly, even model outputs that were relative differences, specifically drug-induced APD prolongation, were heavily impacted by the underlying uncertainty. This is the first data-driven end-to-end UQ analysis in cardiac EP accounting for uncertainty in the vast majority of parameters, including the first in tissue, and it demonstrates how future UQ could be used to ensure that model-based decisions are robust to all underlying parameter uncertainties.
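The propagate-and-screen pattern behind such a UQ study can be sketched with a toy surrogate in place of the ionic model. The APD formula, the lognormal population variability, and the conductance names below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)

def apd(g_k, g_ca):
    # Toy surrogate: APD shortens as potassium conductance g_k rises and
    # lengthens with L-type calcium conductance g_ca (illustrative form only).
    return 200.0 * (1.0 + 0.5 * g_ca) / (1.0 + g_k)

# Hypothetical data-driven population variability, propagated by Monte Carlo.
g_k = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)
g_ca = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)
apds = apd(g_k, g_ca)

# Crude global sensitivity screen: correlation of each input with the output.
influence = {name: abs(np.corrcoef(s, apds)[0, 1])
             for name, s in [("g_K", g_k), ("g_Ca", g_ca)]}
```

The spread of `apds` is the propagated prediction uncertainty; ranking `influence` is the simplest version of the sensitivity analysis that identifies which current parameters matter most.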
|
358
|
Zhou W, Wang A, Wang X, Cheke RA, Xiao Y, Tang S. Impact of Hospital Bed Shortages on the Containment of COVID-19 in Wuhan. International Journal of Environmental Research and Public Health 2020; 17:E8560. [PMID: 33218133] [PMCID: PMC7698869] [DOI: 10.3390/ijerph17228560]
Abstract
The global outbreak of COVID-19 has caused great concern amongst the public and health authorities. The first and foremost problem that many countries faced during the outbreak was a shortage of medical resources. In order to investigate the impact of a shortage of hospital beds on the COVID-19 outbreak, we formulated a piecewise smooth model describing the limitation of hospital beds. We parameterized the model using data on the cumulative numbers of confirmed cases, recovered cases, and deaths in Wuhan city from 10 January to 12 April 2020. The results showed that, even with the strong prevention and control measures in Wuhan, slowing down the supply rate, reducing the maximum capacity, and delaying the supply time of hospital beds all aggravated the outbreak severity by magnifying the cumulative numbers of confirmed cases and deaths, lengthening the end time of the pandemic, enlarging the value of the effective reproduction number during the outbreak, and postponing the time when the threshold value was reduced to 1. Our results demonstrated that the establishment of the Huoshenshan, Leishenshan, and Fangcang shelter hospitals prevented 22,786 infections and saved 6524 lives, while the intervention of supplying hospital beds overall prevented infections in 362,360 people and saved the lives of 274,591 others. This confirms that the quick establishment of the Huoshenshan and Leishenshan Hospitals and the Fangcang shelter hospitals, and the designation of other hospitals for COVID-19 patients, played important roles in containing the outbreak in Wuhan.
|
359
|
Burk KM, Narayan A, Orr JA. Efficient sampling for polynomial chaos-based uncertainty quantification and sensitivity analysis using weighted approximate Fekete points. International Journal for Numerical Methods in Biomedical Engineering 2020; 36:e3395. [PMID: 32794272] [PMCID: PMC8138745] [DOI: 10.1002/cnm.3395]
Abstract
Performing uncertainty quantification (UQ) and sensitivity analysis (SA) is vital when developing a patient-specific physiological model, because doing so quantifies the uncertainty in model outputs and estimates the effect of each of the model's input parameters on them. By providing this information, UQ and SA act as diagnostic tools for evaluating model fidelity and for comparing model characteristics with expert knowledge and real-world observation. Computational efficiency is an important aspect of UQ and SA methods, and its optimization is an active area of research. In this work, we investigate a new efficient sampling method for least-squares polynomial approximation, weighted approximate Fekete points (WAFP). We analyze the performance of this method by demonstrating its utility in the stochastic analysis of a cardiovascular model that estimates changes in oxyhemoglobin saturation response. While polynomial chaos (PC) expansion using WAFP produced results similar to the more standard Monte Carlo approach in quantifying uncertainty and identifying the most influential model inputs (including input interactions) when modeling oxyhemoglobin saturation, it was far more efficient. These findings show the usefulness of WAFP-based PC expansion for quantifying uncertainty and analyzing the sensitivity of an oxyhemoglobin dissociation response model. Applying these techniques could help analyze the fidelity of other relevant models in preparation for clinical application.
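The efficiency argument rests on least-squares polynomial surrogates needing far fewer model evaluations than Monte Carlo. The sketch below demonstrates that with plain uniform random training points rather than weighted approximate Fekete points, and with a made-up scalar response in place of the cardiovascular model.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda z: np.exp(0.5 * z) + 0.1 * z ** 3      # toy model response

# Surrogate: degree-5 polynomial fit by least squares from only 20 samples.
z_train = rng.uniform(-1.0, 1.0, size=20)
V = np.vander(z_train, 6, increasing=True)        # monomial basis 1, z, ..., z^5
coef, *_ = np.linalg.lstsq(V, f(z_train), rcond=None)

# Reference: brute-force Monte Carlo mean of the true model output.
z_mc = rng.uniform(-1.0, 1.0, size=200_000)
mc_mean = float(f(z_mc).mean())
pc_mean = float(np.polynomial.polynomial.polyval(z_mc, coef).mean())
```

Twenty model runs suffice here because the response is smooth; a WAFP design would choose the 20 training points more carefully than uniform sampling, improving the conditioning of the least-squares system.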
|
360
|
Illenberger N, Small DS, Shaw PA. Impact of Regression to the Mean on the Synthetic Control Method: Bias and Sensitivity Analysis. Epidemiology 2020; 31:815-822. [PMID: 32947369] [PMCID: PMC7541515] [DOI: 10.1097/ede.0000000000001252]
Abstract
To make informed policy recommendations from observational panel data, researchers must consider the effects of confounding and of temporal variability in outcome variables. Difference-in-differences methods allow for estimation of treatment effects under the parallel trends assumption. To justify this assumption, methods for matching based on covariates, outcome levels, and outcome trends (such as the synthetic control approach) have been proposed. While these tools can reduce bias and variability in some settings, we show that certain applications can introduce regression to the mean (RTM) bias into estimates of the treatment effect. Through simulations, we show that RTM bias can lead to inflated type I error rates and to bias toward the null in typical policy evaluation settings. We develop a novel correction for RTM bias that allows for valid inference and show how this correction can be used in a sensitivity analysis. We apply our proposed sensitivity analysis to reanalyze data concerning the effects of California's Proposition 99, a large-scale tobacco control program, on statewide smoking rates.
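The RTM mechanism is easy to reproduce in simulation: when a unit is selected because its pre-period outcomes look extreme, its post-period outcomes drift back toward its true mean even with no treatment at all. The setup below (unit counts, noise scales) is illustrative, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)

def selected_change(reps=2000, n_units=50, n_pre=4):
    """Average post-minus-pre change for the unit selected for having the
    WORST pre-period average, with no true treatment effect anywhere."""
    out = []
    for _ in range(reps):
        mu = rng.normal(size=n_units)                      # stable unit means
        pre = mu[:, None] + rng.normal(size=(n_units, n_pre))
        post = mu + rng.normal(size=n_units)               # untreated post period
        i = pre.mean(axis=1).argmin()                      # "policy adopter"
        out.append(post[i] - pre[i].mean())
    return float(np.mean(out))

bias = selected_change()   # positive: a spurious "improvement" from RTM alone
```

A naive pre/post comparison for the selected unit therefore looks like a policy effect even though none exists, which is exactly the bias a synthetic-control analysis can inherit when donors are matched on noisy pre-period levels.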
|
361
|
Veridical Causal Inference using Propensity Score Methods for Comparative Effectiveness Research with Medical Claims. Health Services and Outcomes Research Methodology 2020; 21:206-228. [PMID: 34040495] [DOI: 10.1007/s10742-020-00222-8]
Abstract
Medical insurance claims are becoming increasingly common data sources for answering a variety of questions in biomedical research. Although comprehensive in terms of longitudinal characterization of disease development and progression for a potentially large number of patients, population-based inference using these datasets requires thoughtful modifications to sample selection and analytic strategies relative to other types of studies. Along with complex selection bias and missing data issues, claims-based studies are purely observational, which limits effective understanding and characterization of the treatment differences between the groups being compared. All these issues contribute to a crisis in reproducibility and replication of comparative findings using medical claims. This paper offers practical guidance on the analytical process, demonstrates methods for estimating causal treatment effects with propensity score methods for several types of outcomes common to such studies (binary, count, time-to-event and longitudinally varying measures), and also aims to increase the transparency and reproducibility of reporting of results from these investigations. We provide an online version of the paper with readily implementable code for the entire analysis pipeline to serve as a guided tutorial for practitioners; it can be accessed at https://rydaro.github.io/. The analytic pipeline is illustrated using a sub-cohort of patients with advanced prostate cancer from the large Clinformatics™ Data Mart Database (OptumInsight, Eden Prairie, Minnesota), consisting of 73 million distinct private-payer insurees from 2001 to 2016.
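The core propensity-score workflow such tutorials walk through (fit a treatment model, weight by inverse propensities, compare to the confounded contrast) can be sketched end to end on simulated claims-like data. The single confounder, the effect sizes, and the numpy-only logistic fit below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                                  # confounder
a = rng.random(n) < 1 / (1 + np.exp(-0.8 * x))          # treatment depends on x
y = 1.0 * a + 2.0 * x + rng.normal(size=n)              # true treatment effect = 1

naive = float(y[a].mean() - y[~a].mean())               # confounded contrast

# Propensity score: logistic regression of a on x, fit by Newton-Raphson.
X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):
    ps = 1 / (1 + np.exp(-X @ b))
    b += np.linalg.solve(X.T @ ((ps * (1 - ps))[:, None] * X), X.T @ (a - ps))
ps = 1 / (1 + np.exp(-X @ b))

# Inverse-probability-weighted (Hajek) treatment effect estimate.
iptw = float(np.sum(a * y / ps) / np.sum(a / ps)
             - np.sum(~a * y / (1 - ps)) / np.sum(~a / (1 - ps)))
```

The naive contrast absorbs the confounding through x, while the weighted estimate recovers a value near the true effect of 1; the real pipeline adds balance diagnostics and outcome-specific estimators on top of this skeleton.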
|
362
|
Siegel L, Rudser K, Sutcliffe S, Markland A, Brubaker L, Gahagan S, Stapleton AE, Chu H. A Bayesian multivariate meta-analysis of prevalence data. Stat Med 2020; 39:3105-3119. [PMID: 32510638] [PMCID: PMC7571488] [DOI: 10.1002/sim.8593]
Abstract
When conducting a meta-analysis involving prevalence data for an outcome with several subtypes, each subtype is typically analyzed separately using a univariate meta-analysis model. Recently, multivariate meta-analysis models have been shown to reduce bias and variance for multiple correlated outcomes, compared with univariate meta-analysis, when some studies report only a subset of the outcomes. In this article, we propose a novel Bayesian multivariate random effects model to account for the natural constraint that the prevalence of any given subtype cannot be larger than the overall prevalence. Extensive simulation studies show that this new model can reduce bias and variance when estimating subtype prevalences in the presence of missing data, compared with standard univariate and multivariate random effects models. The data from a rapid review on occupation and lower urinary tract symptoms by the Prevention of Lower Urinary Tract Symptoms Research Consortium are analyzed as a case study to estimate the prevalence of urinary incontinence and several incontinence subtypes among women in suspected high-risk work environments.
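One standard way to build in the subtype-cannot-exceed-overall constraint is to reparameterise: model the subtype prevalence as the overall prevalence times a proportion in (0, 1), so no draw can ever violate the ordering. The Beta distributions and their hyperparameters below are illustrative placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(9)

def sample_prevalences(n, a=2.0, b=8.0, c=3.0, d=3.0):
    """Draws respecting the constraint subtype <= overall by construction:
    subtype prevalence = overall prevalence * proportion, proportion in (0,1)."""
    overall = rng.beta(a, b, size=n)        # overall prevalence
    proportion = rng.beta(c, d, size=n)     # subtype's share of the overall
    return overall, overall * proportion

overall, subtype = sample_prevalences(10_000)
```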
|
363
|
Exploring the Injury Severity Risk Factors in Fatal Crashes with Neural Network. International Journal of Environmental Research and Public Health 2020; 17:7466. [PMID: 33066522] [PMCID: PMC7602238] [DOI: 10.3390/ijerph17207466]
Abstract
A better understanding of the circumstances contributing to the severity outcome of traffic crashes is an important goal of road safety studies. An in-depth crash injury severity analysis is vital for the proactive implementation of appropriate mitigation strategies. This study proposes an improved feed-forward neural network (FFNN) model for predicting the injury severity associated with individual crashes using three years (2017–2019) of crash data collected along 15 rural highways in the Kingdom of Saudi Arabia (KSA). A total of 12,566 crashes were recorded during the study period, with a binary injury severity outcome (fatal or non-fatal injury) as the variable to be predicted. An FFNN architecture with back-propagation (BP) as the training algorithm, a logistic activation function, and six hidden neurons in the hidden layer yielded the best model performance. Model predictions for the test data were analyzed using different evaluation metrics, such as overall accuracy, sensitivity, and specificity. Prediction results showed the adequacy and robust performance of the proposed method. A detailed sensitivity analysis of the optimized NN was also performed to show the impact and relative influence of different predictor variables on the resulting crash injury severity. The sensitivity analysis results indicated that factors such as traffic volume, average travel speeds, weather conditions, on-site damage conditions, road and vehicle type, and involvement of pedestrians are the most sensitive variables. The methods applied in this study could be used in big-data analysis of crash data and can serve as a rapid and useful tool for policymakers seeking to improve highway safety.
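The architecture described (one hidden layer of six logistic neurons trained by back-propagation, followed by an input-perturbation sensitivity screen) can be sketched from scratch with numpy. The synthetic "crash records" below, with one strong and one irrelevant predictor, are a made-up stand-in for the KSA data.

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda z: 1 / (1 + np.exp(-z))

# Synthetic binary-severity data: feature 0 matters a lot, feature 2 not at all.
n = 2000
X = rng.normal(size=(n, 3))
y = (sig(2.0 * X[:, 0] + 0.5 * X[:, 1]) > rng.random(n)).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 6)); b1 = np.zeros(6)   # six hidden neurons
W2 = rng.normal(scale=0.5, size=6); b2 = 0.0
lr = 0.5
for _ in range(400):                          # full-batch back-propagation
    H = sig(X @ W1 + b1)
    p = sig(H @ W2 + b2)
    err = p - y                               # cross-entropy gradient wrt logit
    gW2 = H.T @ err / n; gb2 = err.mean()
    dH = np.outer(err, W2) * H * (1 - H)
    gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

p = sig(sig(X @ W1 + b1) @ W2 + b2)
acc = float(((p > 0.5) == (y > 0.5)).mean())

# Perturbation sensitivity: mean |prediction change| per unit shift of each input.
sens = [float(np.mean(np.abs(sig(sig(Xp @ W1 + b1) @ W2 + b2) - p)))
        for Xp in (X + np.eye(3)[i] for i in range(3))]
```

The sensitivity list then ranks predictors by influence, the same read-out the study uses to flag speed, traffic volume, and the other top variables.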
|
364
|
Duan C, Chaovalitwongse WA, Bai F, Hippe DS, Wang S, Thammasorn P, Pierce LA, Liu X, You J, Miyaoka RS, Vesselle HJ, Kinahan PE, Rengan R, Zeng J, Bowen SR. Sensitivity analysis of FDG PET tumor voxel cluster radiomics and dosimetry for predicting mid-chemoradiation regional response of locally advanced lung cancer. Phys Med Biol 2020; 65:205007. [PMID: 33027064] [PMCID: PMC7593986] [DOI: 10.1088/1361-6560/abb0c7]
Abstract
We investigated the sensitivity of regional tumor response prediction to variability in voxel clustering techniques, imaging features, and machine learning algorithms in 25 patients with locally advanced non-small cell lung cancer (LA-NSCLC) enrolled on the FLARE-RT clinical trial. Metabolic tumor volumes (MTV) from pre-chemoradiation (PETpre) and mid-chemoradiation fluorodeoxyglucose-positron emission tomography (FDG PET) images (PETmid) were subdivided into K-means or hierarchical voxel clusters by standardized uptake values (SUV) and 3D-positions. MTV cluster separability was evaluated by CH index, and morphologic changes were captured by Dice similarity and centroid Euclidean distance. PETpre conventional features included SUVmean, MTV/MTV cluster size, and mean radiation dose. PETpre radiomics consisted of 41 intensity histogram and 3D texture features (PET Oncology Radiomics Test Suite) extracted from MTV or MTV clusters. Machine learning models (multiple linear regression, support vector regression, logistic regression, support vector machines) of conventional features or radiomic features were constructed to predict PETmid response. Leave-one-out-cross-validated root-mean-squared-error (RMSE) for continuous response regression (ΔSUVmean) and area-under-receiver-operating-characteristic-curve (AUC) for binary response classification were calculated. K-means MTV 2-clusters (MTVhi, MTVlo) achieved maximum CH index separability (Friedman p < 0.001). Between PETpre and PETmid, MTV cluster pairs overlapped (Dice 0.70-0.87) and migrated 0.6-1.1 cm. PETmid ΔSUVmean response prediction was superior in MTV and MTVlo (RMSE = 0.17-0.21) compared to MTVhi (RMSE = 0.42-0.52, Friedman p < 0.001). PETmid ΔSUVmean response class prediction performance trended higher in MTVlo (AUC = 0.83-0.88) compared to MTVhi (AUC = 0.44-0.58, Friedman p = 0.052). Models were more sensitive to MTV/MTV cluster regions (Friedman p = 0.026) than feature sets/algorithms (Wilcoxon signed-rank p = 0.36). Top-ranked radiomic features included GLZSM-LZHGE (large-zone-high-SUV), GTSDM-CP (cluster-prominence), GTSDM-CS (cluster-shade) and NGTDM-CNT (contrast). Top-ranked features were consistent between MTVhi and MTVlo cluster pairs but varied between MTVhi-MTVlo clusters, reflecting distinct regional radiomic phenotypes. Variability in tumor voxel cluster response prediction can inform robust radiomic target definition for risk-adaptive chemoradiation in patients with LA-NSCLC. FLARE-RT trial: NCT02773238.
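The K-means subdivision of a metabolic tumor volume into high- and low-SUV clusters by uptake plus 3D position can be sketched as below. The synthetic "tumour" with a hot and a cold sub-region, the feature scaling, and the extreme-point initialisation are illustrative choices, not the FLARE-RT processing.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic voxels: 3D positions plus an SUV value with hot and cold regions.
pos = rng.uniform(0.0, 1.0, size=(600, 3))
suv = np.where(pos[:, 0] > 0.5, 8.0, 2.5) + rng.normal(scale=0.5, size=600)
feats = np.column_stack([pos, suv / suv.std()])        # SUV scaled vs position

def kmeans(X, init, iters=50):
    """Plain Lloyd's algorithm with deterministic initial centers."""
    centers = X[list(init)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(init))])
    return labels, centers

# Initialise at the coldest and hottest voxels for a reproducible 2-cluster split.
labels, centers = kmeans(feats, (suv.argmin(), suv.argmax()))
hi = labels == centers[:, 3].argmax()                  # MTV-hi: higher mean SUV
```

Downstream, radiomic features would be extracted separately from the `hi` and `~hi` voxel sets, which is what makes the cluster-specific response models of the abstract possible.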
|
365
|
Zhou T, Daniels MJ, Müller P. A Semiparametric Bayesian Approach to Dropout in Longitudinal Studies with Auxiliary Covariates. J Comput Graph Stat 2020; 29:1-12. [PMID: 33013150] [DOI: 10.1080/10618600.2019.1617159]
Abstract
We develop a semiparametric Bayesian approach to missing outcome data in longitudinal studies in the presence of auxiliary covariates. We consider a joint model for the full data response, missingness and auxiliary covariates. We include auxiliary covariates to "move" the missingness "closer" to missing at random (MAR). In particular, we specify a semiparametric Bayesian model for the observed data via Gaussian process priors and Bayesian additive regression trees. These model specifications allow us to capture non-linear and non-additive effects, in contrast to existing parametric methods. We then separately specify the conditional distribution of the missing data response given the observed data response, missingness and auxiliary covariates (i.e. the extrapolation distribution) using identifying restrictions. We introduce meaningful sensitivity parameters that allow for a simple sensitivity analysis. Informative priors on those sensitivity parameters can be elicited from subject-matter experts. We use Monte Carlo integration to compute the full data estimands. Performance of our approach is assessed using simulated datasets. Our methodology is motivated by, and applied to, data from a clinical trial on treatments for schizophrenia.
|
366
|
A Methodology for the Statistical Calibration of Complex Constitutive Material Models: Application to Temperature-Dependent Elasto-Visco-Plastic Materials. Materials 2020; 13:4402. [PMID: 33023178] [PMCID: PMC7579257] [DOI: 10.3390/ma13194402]
Abstract
The calibration of any sophisticated model, and in particular of a constitutive relation, is a complex problem that has a direct impact on the cost of generating experimental data and on the accuracy of the model's predictions. In this work, we address this common situation using a two-stage procedure. In order to evaluate the sensitivity of the model to its parameters, the first step in our approach consists of formulating a meta-model and employing it to identify the most relevant parameters. In the second step, a Bayesian calibration is performed on the most influential parameters of the model in order to obtain an optimal mean value and its associated uncertainty. We claim that this strategy is very efficient for a wide range of applications and can guide the design of experiments, thus reducing test campaigns and computational costs. Moreover, the use of Gaussian processes together with Bayesian calibration effectively combines the information coming from experiments and numerical simulations. The framework described is applied to the calibration of three widely employed material constitutive relations for metals under high strain rates and temperatures, namely, the Johnson–Cook, Zerilli–Armstrong, and Arrhenius models.
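The two-stage idea can be illustrated on a stripped-down Johnson–Cook-type relation (all parameter values hypothetical, and a crude one-at-a-time screen standing in for the paper's Gaussian-process meta-model): stage one ranks parameters by their influence on the predicted flow stress; stage two runs a grid-based Bayesian calibration of the top-ranked parameter against noisy synthetic data.

```python
import math, random

def flow_stress(eps, A, B, n):
    """Simplified Johnson-Cook-type relation (rate and temperature terms dropped)."""
    return A + B * eps ** n

nominal = {"A": 300.0, "B": 500.0, "n": 0.3}   # hypothetical values (MPa)
eps_grid = [0.05 * i for i in range(1, 11)]

# --- Stage 1: crude one-at-a-time screen (the paper uses a meta-model here) ---
def influence(name, h=0.05):
    lo, hi = dict(nominal), dict(nominal)
    lo[name] *= 1 - h
    hi[name] *= 1 + h
    return sum(abs(flow_stress(e, **hi) - flow_stress(e, **lo)) for e in eps_grid)

ranking = sorted(nominal, key=influence, reverse=True)   # most influential first

# --- Stage 2: grid-based Bayesian calibration of the top-ranked parameter (B) ---
rng = random.Random(1)
true_B = 520.0
data = [(e, flow_stress(e, 300.0, true_B, 0.3) + rng.gauss(0.0, 5.0))
        for e in (0.1, 0.2, 0.3, 0.4, 0.5)]

def log_lik(B, sigma=5.0):
    return sum(-0.5 * ((y - flow_stress(e, 300.0, B, 0.3)) / sigma) ** 2
               for e, y in data)

grid = [450.0 + i for i in range(101)]                   # flat prior on [450, 550]
w = [math.exp(log_lik(B)) for B in grid]
post_mean = sum(B * wi for B, wi in zip(grid, w)) / sum(w)
```

The posterior mean recovers the generating value within the noise level, and the grid weights also give the associated uncertainty.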
|
367
|
Bearing Damage Detection of a Reinforced Concrete Plate Based on Sensitivity Analysis and Chaotic Moth-Flame-Invasive Weed Optimization. SENSORS 2020; 20:s20195488. [PMID: 32992710 PMCID: PMC7582654 DOI: 10.3390/s20195488] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 09/13/2020] [Accepted: 09/17/2020] [Indexed: 11/17/2022]
Abstract
This article proposes a novel damage detection method based on sensitivity analysis and chaotic moth-flame-invasive weed optimization (CMF-IWO), which is used to simultaneously identify damage to structural elements and bearings. First, the sensitivity coefficients of the eigenvalues with respect to the damage factors of structural elements and bearings are derived, and regularization is used to solve the resulting underdetermined equations; meanwhile, a modal strain energy-based index is used to detect the damage locations, and a regularized objective function is constructed to quantify the damage severity. Then, for the subsequent damage detection procedure, CMF-IWO is proposed, combining moth-flame optimization and invasive weed optimization with chaos theory, reverse learning, and an evolutionary strategy. The effectiveness of the hybrid algorithm is verified on five benchmark functions and a damage identification numerical example of a simply supported beam; the results demonstrate its strong global search ability and high convergence efficiency. After that, a numerical example of an 8-span continuous beam and an experimental reinforced concrete plate are both adopted to evaluate the proposed damage identification method. The results of the numerical example indicate that the method can locate and quantify damage to structural elements and bearings with high accuracy. Furthermore, the outcomes of the experimental example show that, despite some errors and uncertain factors, the method still obtains an acceptable result. Overall, the proposed method is shown to be feasible.
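The eigenvalue-sensitivity coefficients at the heart of the first step can be illustrated on a toy two-DOF spring-mass chain (hypothetical system with unit masses, not the paper's plate model): for a mass-normalized mode shape φ, the sensitivity of the eigenvalue λ to a stiffness parameter k is φᵀ(∂K/∂k)φ, which a finite-difference check confirms.

```python
import math

def eig_2x2_sym(K):
    """Eigenvalues (ascending) and unit eigenvectors of a symmetric 2x2 matrix."""
    a, b, d = K[0][0], K[0][1], K[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr - disc) / 2, (tr + disc) / 2]
    vecs = []
    for lam in lams:
        v = (b, lam - a) if abs(b) > 1e-12 else (1.0, 0.0)
        nrm = math.hypot(*v)
        vecs.append((v[0] / nrm, v[1] / nrm))
    return lams, vecs

def stiffness(k1, k2):
    """Ground spring k1 on mass 1, spring k2 between masses 1 and 2 (unit masses)."""
    return [[k1 + k2, -k2], [-k2, k2]]

def dlam_dk2(phi):
    """phi^T (dK/dk2) phi with dK/dk2 = [[1, -1], [-1, 1]]."""
    x, y = phi
    return (x - y) ** 2

lams, vecs = eig_2x2_sym(stiffness(2.0, 1.0))
analytic = [dlam_dk2(v) for v in vecs]

# Finite-difference check of the analytic sensitivity coefficients.
h = 1e-6
lams_h, _ = eig_2x2_sym(stiffness(2.0, 1.0 + h))
fd = [(lh - l) / h for lh, l in zip(lams_h, lams)]
```

Stacking such coefficients over elements and bearings yields the (typically underdetermined) sensitivity equations that the paper regularizes.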
|
368
|
Yuan C, Hedeker D, Mermelstein R, Xie H. A tractable method to account for high-dimensional nonignorable missing data in intensive longitudinal data. Stat Med 2020; 39:2589-2605. [PMID: 32367549 DOI: 10.1002/sim.8560] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 01/29/2020] [Accepted: 04/09/2020] [Indexed: 11/09/2022]
Abstract
Despite the need for sensitivity analysis to nonignorable missingness in intensive longitudinal data (ILD), such analysis is greatly hindered by novel ILD features, such as large data volume and complex nonmonotonic missing-data patterns. The likelihood of alternative models permitting nonignorable missingness often involves very high-dimensional integrals, incurring the curse of dimensionality and rendering solutions computationally prohibitive to obtain. We aim to overcome this challenge by developing a computationally feasible method, nonlinear indexes of local sensitivity to nonignorability (NISNI). We use linear mixed effects models for the incomplete outcome and covariates. We use Markov multinomial models to describe complex missing-data patterns and mechanisms in ILD, thereby permitting missingness probabilities to depend directly on missing data. Using a second-order Taylor series to approximate the likelihood under nonignorability, we develop formulas and closed-form expressions for NISNI. Our approach permits the outcome and covariates to be missing simultaneously, as is often the case in ILD, and can capture the U-shaped impact of nonignorability in the neighborhood of the missing at random model without fitting alternative models or evaluating integrals. We evaluate the performance of this method using simulated data and real ILD collected by the ecological momentary assessment method.
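The local-sensitivity idea can be sketched with a toy one-dimensional selection model (hypothetical, and numerical rather than closed-form; the paper's NISNI handles high-dimensional ILD analytically): if P(observed | y) ∝ exp(γy), the mean estimate μ(γ) can be traced by inverse-probability reweighting, and its first and second derivatives at the MAR value γ = 0 quantify local sensitivity to nonignorability without fitting any NMAR model.

```python
import math

def mu_gamma(y, gamma):
    """Mean estimate under the selection model P(observed | y) ∝ exp(gamma*y),
    obtained by weighting observed values by inverse selection probabilities;
    gamma = 0 is the MAR model."""
    w = [math.exp(-gamma * yi) for yi in y]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [1.0, 2.0, 2.5, 3.0, 4.5]   # hypothetical observed outcomes
h = 1e-5
# Local sensitivity at gamma = 0: the first derivative is an ISNI-style index
# (it equals minus the sample variance in this toy model), and the second
# derivative captures curvature, i.e. U-shaped local behavior near MAR.
isni = (mu_gamma(y, h) - mu_gamma(y, -h)) / (2 * h)
curvature = (mu_gamma(y, h) - 2 * mu_gamma(y, 0.0) + mu_gamma(y, -h)) / h ** 2
```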
|
369
|
Yan RS. [Ozone Sensitivity Analysis and Emission Controls in Dezhou in Summer]. HUAN JING KE XUE= HUANJING KEXUE 2020; 41:3961-3968. [PMID: 33124275 DOI: 10.13227/j.hjkx.202001197] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 04/20/2023]
Abstract
In recent years, there have been frequent ozone pollution episodes in Dezhou, China. In the summer of 2018 (June to August), Dezhou experienced serious ozone pollution: the daily 8-hour maximum ozone concentration exceeded the national standard on 60 days, an exceedance ratio of 65%. The average daily 8-hour maximum ozone concentration was 176 μg·m-3 over these three months, and the highest value reached 262 μg·m-3. In this study, the WRF-CAMx model coupled with the high-order decoupled direct method (HDDM) was used to analyze ozone sensitivity and emission control plans in Dezhou during this period. The results showed that ozone formation was in the strong VOC-limited regime in the urban area of Dezhou, while it was in the NOx-and-VOCs transition regime in suburban areas. VOC sensitivity values (dO3_V50) were positive every day in summer and were higher in June (18.7 μg·m-3 in the urban area, 19.7 μg·m-3 in the suburban area) and August (15.3 and 16.4 μg·m-3, respectively) than in July (13.0 and 11.8 μg·m-3, respectively). NOx sensitivity values (dO3_N50) were positive or negative in the urban area, and positive on most days in the suburban area, where they were close to the VOC sensitivity values. For urban areas, VOC reduction should be the priority in emission reduction plans, whereas for suburban areas a NOx:VOCs = 1:1 reduction ratio is recommended because reductions in NOx and VOCs emissions had the same effect on ozone pollution control.
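To make the control-planning logic concrete, first-order HDDM-style sensitivities can be scaled linearly to candidate reduction plans. This is a rough sketch only: the VOC coefficient below is the urban July figure from the abstract, the NOx coefficient is hypothetical (chosen negative to mimic a VOC-limited urban regime), and HDDM's second-order terms are ignored.

```python
def ozone_change(r_voc, r_nox, dO3_V50, dO3_N50):
    """First-order projection of the ozone change (ug/m3) for fractional
    emission cuts r_voc, r_nox in [0, 1]. dO3_V50 / dO3_N50 are the ozone
    responses to a 50% cut in VOC / NOx emissions; positive values mean the
    cut lowers ozone, so the projected change is negative."""
    return -(r_voc / 0.5) * dO3_V50 - (r_nox / 0.5) * dO3_N50

dV, dN = 13.0, -2.0   # ug/m3; dN is hypothetical (VOC-limited urban regime)
# Compare an equal-ratio plan with a VOC-first plan at the same total effort.
equal_plan = ozone_change(0.15, 0.15, dV, dN)
voc_first = ozone_change(0.30, 0.00, dV, dN)
```

Under these assumed coefficients the VOC-first plan lowers ozone more, consistent with prioritizing VOC reductions in a VOC-limited urban area.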
|
370
|
CDC-42 Interactions with Par Proteins Are Critical for Proper Patterning in Polarization. Cells 2020; 9:cells9092036. [PMID: 32899550 PMCID: PMC7565983 DOI: 10.3390/cells9092036] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Revised: 08/31/2020] [Accepted: 09/02/2020] [Indexed: 11/23/2022] Open
Abstract
Many cells rearrange proteins and other components into spatially distinct domains in a process called polarization. This asymmetric patterning is required for a number of biological processes including asymmetric division, cell migration, and embryonic development. Proteins involved in polarization are highly conserved and include members of the Par and Rho protein families. Despite the importance of these proteins in polarization, it is not yet known how they interact and regulate each other to produce the protein localization patterns associated with polarization. In this study, we develop and analyse a biologically based mathematical model of polarization that incorporates interactions between Par and Rho proteins that are consistent with experimental observations of CDC-42. Using minimal network and eFAST sensitivity analyses, we demonstrate that CDC-42 is predicted to reinforce the maintenance of anterior PAR protein polarity, which in turn feeds back to maintain CDC-42 polarization, and also to support the maintenance of posterior PAR protein polarization. The mechanisms for polarity maintenance identified by these methods are not sufficient to generate polarization in the absence of cortical flow. Additional inhibitory interactions mediated by the posterior Par proteins are predicted to play a role in the generation of Par protein polarity. More generally, these results provide new insights into the role of CDC-42 in polarization and the mutual regulation of key polarity determinants, in addition to providing a foundation for further investigations.
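eFAST estimates the first-order variance contribution S_i = Var(E[Y|X_i]) / Var(Y) of each parameter via Fourier coefficients along a space-filling search curve. A brute-force double-loop Monte Carlo sketch of the same quantity on a toy two-parameter function (not the polarization model itself) shows what eFAST computes far more efficiently:

```python
import random

def first_order_index(f, i, n_outer=300, n_inner=300, seed=0):
    """Brute-force S_i = Var_{x_i}(E[y | x_i]) / Var(y) for f on [0, 1]^2.
    eFAST estimates the same quantity from Fourier coefficients along a
    periodic search curve instead of this expensive double loop."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                       # fix coordinate i
        ys = []
        for _ in range(n_inner):
            x = [rng.random(), rng.random()]
            x[i] = xi
            ys.append(f(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    def var(v):
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v) / len(v)
    return var(cond_means) / var(all_y)

toy = lambda x: 4.0 * x[0] + x[1]               # x0 dominates the output variance
s0 = first_order_index(toy, 0)
s1 = first_order_index(toy, 1)
```

For this toy function the exact indices are 16/17 and 1/17, so the estimates flag x0 as the influential parameter.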
|
371
|
Within-Host Phenotypic Evolution and the Population-Level Control of Chronic Viral Infections by Treatment and Prophylaxis. MATHEMATICS 2020; 8. [PMID: 34258245 PMCID: PMC8274820 DOI: 10.3390/math8091500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Chronic viral infections can persist for decades, spanning thousands of viral generations and leading to a highly diverse population of viruses with its own complex evolutionary history. We propose an expandable mathematical framework for understanding how the emergence of genetic and phenotypic diversity affects the population-level control of those infections by both non-curative treatment and chemo-prophylactic measures. Our framework allows both neutral and phenotypic evolution, and we consider the specific evolution of contagiousness, resistance to therapy, and efficacy of prophylaxis. We compute the population-level basic reproduction number, both uncontrolled and controlled, accounting for the within-host evolutionary process whereby new phenotypes emerge and are lost in infected persons, and we extend this to include both treatment and prophylactic control efforts. We use these results to discuss the conditions under which prophylactic methods of control outperform therapeutic ones. Finally, we give expressions for the endemic equilibrium of these models for certain constrained versions of the within-host evolutionary model, providing a potential method for estimating within-host evolutionary parameters from population-level genetic sequence data.
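For a multi-phenotype model, the population-level basic reproduction number is the spectral radius of a next-generation matrix. A minimal power-iteration sketch, with a hypothetical two-phenotype matrix (not taken from the paper):

```python
def spectral_radius(K, iters=500):
    """Dominant eigenvalue of a nonnegative matrix via power iteration;
    for a next-generation matrix K this is the basic reproduction number R0."""
    n = len(K)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical next-generation matrix: K[i][j] = expected new infections of
# phenotype i produced by one person infected with phenotype j, combining
# transmission with within-host switching between phenotypes.
K = [[1.8, 0.2],
     [0.1, 0.9]]
R0 = spectral_radius(K)
```

Control efforts enter by rescaling the entries of K; the controlled R0 is the spectral radius of the rescaled matrix.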
|
372
|
Marino MF, Alfò M. Finite Mixtures of Hidden Markov Models for Longitudinal Responses Subject to Drop out. MULTIVARIATE BEHAVIORAL RESEARCH 2020; 55:647-663. [PMID: 31559866 DOI: 10.1080/00273171.2019.1660606] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Drop out is a typical issue in longitudinal studies. When the missingness is non-ignorable, inference based on the observed data only may be biased. This paper is motivated by the Leiden 85+ study, a longitudinal study conducted to analyze the dynamics of cognitive functioning in the elderly. We account for dependence between longitudinal responses from the same subject using time-varying random effects associated with a heterogeneous hidden Markov chain. As several participants in the study drop out prematurely, we introduce a further random effect model to describe the missing data mechanism. The potential dependence between the random effects in the two equations (and, therefore, between the two processes) is introduced through a joint distribution specified via a latent structure approach. The application of the proposal to data from the Leiden 85+ study shows its effectiveness in modeling heterogeneous longitudinal patterns, possibly influenced by the missing data process. Results from a sensitivity analysis show the robustness of the estimates with respect to misspecification of the missing data mechanism. A simulation study provides evidence for the reliability of the inferential conclusions drawn from the analysis of the Leiden 85+ data.
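The likelihood of the hidden Markov chain underlying such models is computed with the forward recursion. A minimal discrete-output sketch (the paper's model additionally carries time-varying random effects and a dropout equation), with hypothetical parameter values:

```python
import math

def forward_loglik(init, trans, emit, obs):
    """Log-likelihood of an observation sequence under a discrete HMM via the
    forward recursion: alpha_t(s) = P(o_1..o_t, S_t = s)."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

init = [0.6, 0.4]                    # initial state distribution
trans = [[0.7, 0.3], [0.2, 0.8]]     # state transition probabilities
emit = [[0.9, 0.1], [0.3, 0.7]]      # P(obs | state), obs in {0, 1}
ll = forward_loglik(init, trans, emit, [0, 1, 1])
```

For long sequences the recursion is usually scaled at each step to avoid underflow; that refinement is omitted here.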
|
373
|
Gosho M, Maruo K. An application of the mixed-effects model and pattern mixture model to treatment groups with differential missingness suspected not-missing-at-random. Pharm Stat 2020; 20:93-108. [PMID: 33249763 DOI: 10.1002/pst.2058] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 05/30/2020] [Accepted: 07/21/2020] [Indexed: 11/09/2022]
Abstract
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias of estimated treatment effect using standard MMRM analysis in a motivational case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling, to handle the NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employ a categorical time variable when modeling the outcome and a continuous time variable when modeling the missingness-data patterns. The models can directly provide an overall estimate of the treatment effect of interest using the average of the distribution of the missingness indicator and a categorical time variable in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.
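The pattern mixture estimate of the overall treatment effect is an average of pattern-specific effects weighted by the observed distribution of missingness patterns. A tiny arithmetic sketch with hypothetical numbers (the paper's models estimate the pattern-specific quantities from mixed-effects fits):

```python
def overall_effect(patterns):
    """Pattern-mixture estimate: average the pattern-specific treatment
    effects over the empirical distribution of missingness patterns.
    `patterns` maps a dropout pattern to (proportion, effect)."""
    assert abs(sum(p for p, _ in patterns.values()) - 1.0) < 1e-9
    return sum(p * eff for p, eff in patterns.values())

# Hypothetical trial: completers show a larger effect than early dropouts,
# whose effect is set by an NMAR assumption.
patterns = {
    "completer":     (0.70, -2.0),   # 70% complete follow-up
    "late dropout":  (0.20, -1.0),
    "early dropout": (0.10,  0.0),
}
effect = overall_effect(patterns)    # -1.6
```

Varying the assumed effect in the dropout patterns then gives a sensitivity analysis around the MMRM result.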
|
374
|
Simulation and Verification of Vertical Heterogeneity Spectral Response of Winter Wheat Based on the mSCOPE Model. SENSORS 2020; 20:s20164570. [PMID: 32824031 PMCID: PMC7472204 DOI: 10.3390/s20164570] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Revised: 08/12/2020] [Accepted: 08/12/2020] [Indexed: 11/16/2022]
Abstract
Vertical heterogeneity in the biochemical characteristics of a crop canopy is important for diagnosing and monitoring nutrition, disease, and crop yield via remote sensing, but research on this vertical heterogeneity has not been comprehensive. Experiments were carried out at two levels, simulation and verification, to analyze the applicability of the recently developed mSCOPE (Soil Canopy Observation, Photochemistry, and Energy fluxes) model. The effects on the winter wheat spectrum were studied by inputting different structural parameters (e.g., leaf area index (LAI)) and physicochemical parameters (e.g., chlorophyll content (Chla+b) and water content (Cw)) to the model. The maximum operating efficiency was 127.43 when the winter wheat canopy was stratified into three layers. The simulation results also showed that the vertical profile of LAI influenced canopy reflectance in almost all bands, the vertical profile of Chla+b mainly affected reflectance in the visible region, and the vertical profile of Cw only affected near-infrared reflectance. The verification results showed that the vegetation indices (VIs) selected from different bands were strongly correlated with the canopy parameters: LAI, Chla+b, and Cw affected the estimation of the VIs related to LAI, Chla+b, and Cw, respectively. The root mean square error (RMSE) of the newly proposed NDVIgreen was the smallest, at 0.05. Sensitivity analysis showed that the spectrum was more sensitive to changes in upper-layer parameters, which verifies that the mSCOPE model reasonably captures how light penetration into a vertically nonuniform canopy decreases with depth.
|
375
|
Potter GE. Dismantling the Fragility Index: A demonstration of statistical reasoning. Stat Med 2020; 39:3720-3731. [PMID: 32781488 DOI: 10.1002/sim.8689] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 05/09/2020] [Accepted: 06/11/2020] [Indexed: 12/22/2022]
Abstract
The Fragility Index has been introduced as a complement to the P-value to summarize the statistical strength of evidence for a trial's result. The Fragility Index (FI) is defined in trials with two equal treatment group sizes, with a dichotomous or time-to-event outcome, and is calculated as the minimum number of conversions from nonevent to event in the treatment group needed to shift the P-value from Fisher's exact test over the .05 threshold. As the index lacks a well-defined probability motivation, its interpretation is challenging for consumers. We clarify what the FI may be capturing by separately considering two scenarios: (a) what the FI is capturing mathematically when the probability model is correct and (b) how well the FI captures violations of probability model assumptions. By calculating the posterior probability of a treatment effect, we show that when the probability model is correct, the FI inappropriately penalizes small trials for using fewer events than larger trials to achieve the same significance level. The analysis shows that for experiments conducted without bias, the FI promotes an incorrect intuition of probability, which has not been noted elsewhere and must be dispelled. We illustrate shortcomings of the FI's ability to quantify departures from model assumptions and contextualize the FI concept within current debate around the null hypothesis significance testing paradigm. Altogether, the FI creates more confusion than it resolves and does not promote statistical thinking. We recommend against its use. Instead, sensitivity analyses are recommended to quantify and communicate robustness of trial results.
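The FI computation the paper dissects is mechanically simple. A self-contained sketch (Fisher's exact test implemented directly from the hypergeometric distribution; the example event counts are hypothetical):

```python
from math import comb

def fisher_p(a, b, c, d):
    """Two-sided Fisher exact p for the 2x2 table [[a, b], [c, d]]: sum the
    probabilities of all tables with the same margins whose point probability
    does not exceed that of the observed table."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    prob = lambda x: comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-7))

def fragility_index(ev_t, n_t, ev_c, n_c, alpha=0.05):
    """Minimum number of nonevent->event conversions in the treatment arm
    needed to push the two-sided Fisher p over alpha (None if the result is
    not significant to begin with)."""
    if fisher_p(ev_t, n_t - ev_t, ev_c, n_c - ev_c) >= alpha:
        return None
    for k in range(1, n_t - ev_t + 1):
        if fisher_p(ev_t + k, n_t - ev_t - k, ev_c, n_c - ev_c) >= alpha:
            return k
    return None

fi = fragility_index(1, 100, 10, 100)   # hypothetical trial: 1 vs 10 events per 100
```

As the paper argues, a small FI from a small trial does not by itself indicate weak evidence; the index conflates sample size with strength of evidence.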
|