126. Andersen-Ranberg N, Poulsen LM, Perner A, Hästbacka J, Morgan MPG, Citerio G, Oxenbøll-Collet M, Weber S, Andreasen AS, Bestle MH, Uslu B, Pedersen HBS, Nielsen LG, Damgaard K, Jensen TB, Sommer T, Dey N, Mathiesen O, Granholm A. Agents intervening against delirium in the intensive care unit trial-Protocol for a secondary Bayesian analysis. Acta Anaesthesiol Scand 2022; 66:898-903. PMID: 35580239; PMCID: PMC9540259; DOI: 10.1111/aas.14091.
Abstract
BACKGROUND Delirium is highly prevalent in the intensive care unit (ICU) and is associated with high morbidity and mortality. The antipsychotic haloperidol is the most frequently used agent to treat delirium although this is not supported by solid evidence. The agents intervening against delirium in the intensive care unit (AID-ICU) trial investigates the effects of haloperidol versus placebo for the treatment of delirium in adult ICU patients. METHODS This protocol describes the secondary, pre-planned Bayesian analyses of the primary and secondary outcomes up to day 90 of the AID-ICU trial. We will use Bayesian linear regression models for all count outcomes and Bayesian logistic regression models for all dichotomous outcomes. We will adjust for stratification variables (site and delirium subtype) and use weakly informative priors supplemented with sensitivity analyses using sceptical priors. We will present results as absolute differences (mean differences and risk differences) and relative differences (ratios of means and relative risks). Posteriors will be summarised using median values as point estimates and percentile-based 95% credibility intervals. Probabilities of any benefit/harm, clinically important benefit/harm and clinically unimportant differences will be presented for all outcomes. DISCUSSION The results of this secondary, pre-planned Bayesian analysis will complement the primary frequentist analysis of the AID-ICU trial and facilitate a nuanced and probabilistic interpretation of the trial results.
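The posterior summaries this protocol specifies (median point estimates, percentile-based 95% credibility intervals, and probabilities of any/important benefit or harm) are straightforward to compute once posterior draws are available. A minimal Python sketch, using hypothetical draws of an absolute risk difference and an assumed clinically important difference of 2 percentage points (neither is from the trial itself):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws of an absolute risk difference
# (treatment minus control); in the trial these would come from
# the fitted Bayesian logistic regression model.
draws = rng.normal(loc=-0.02, scale=0.015, size=100_000)

point_estimate = np.median(draws)                    # median as point estimate
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])  # percentile-based 95% CrI

mcid = 0.02  # assumed minimal clinically important difference
p_any_benefit = np.mean(draws < 0)            # any reduction in risk
p_important_benefit = np.mean(draws < -mcid)  # clinically important benefit
p_any_harm = np.mean(draws > 0)
p_trivial = np.mean(np.abs(draws) <= mcid)    # clinically unimportant difference

print(f"RD {point_estimate:.3f} (95% CrI {ci_low:.3f} to {ci_high:.3f})")
print(f"P(benefit) = {p_any_benefit:.2f}, P(important benefit) = {p_important_benefit:.2f}")
```

The same summaries apply unchanged to ratios of means or relative risks; only the draws differ.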
127. Ekstrøm CT, Jensen AK. Having a ball: evaluating scoring streaks and game excitement using in-match trend estimation. AStA Advances in Statistical Analysis 2022; 107:295-311. PMID: 35730005; PMCID: PMC9205152; DOI: 10.1007/s10182-022-00452-w.
Abstract
Many popular sports involve matches between two teams or players where each team has the possibility of scoring points throughout the match. While the overall match winner and result are interesting, they convey little information about the underlying scoring trends throughout the match. Modeling approaches that accommodate a finer granularity of the score difference throughout the match are needed to evaluate in-game strategies, discuss scoring streaks, team strengths, and other aspects of the game. We propose a latent Gaussian process to model the score difference between two teams and introduce the Trend Direction Index as an easily interpretable probabilistic measure of the current trend in the match, as well as a measure of post-game trend evaluation. In addition, we propose the Excitement Trend Index, the expected number of monotonicity changes in the running score difference, as a measure of overall game excitement. Our proposed methodology is applied to all 1143 matches from the 2019-2020 National Basketball Association season. We show how the trends can be interpreted in individual games and how the excitement score can be used to cluster teams according to how exciting they are to watch. Supplementary information: The online version contains supplementary material available at 10.1007/s10182-022-00452-w.
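The Trend Direction Index idea, a posterior probability statement about the latent trend, can be sketched with basic Gaussian-process regression on the running score difference. A minimal Python illustration with simulated match data; the kernel, hyperparameters, and 5-minute window are arbitrary choices for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, length=5.0, var=4.0):
    # Squared-exponential covariance between time points a and b.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical in-match data: time (minutes) and running score difference.
t = np.arange(0.0, 40.0, 2.0)
y = np.sin(t / 8.0) * 4 + rng.normal(0, 1.0, t.size)

noise = 1.0
K = rbf(t, t) + noise * np.eye(t.size)
K_inv_y = np.linalg.solve(K, y)

grid = np.linspace(0.0, 40.0, 81)
Ks = rbf(grid, t)
mean = Ks @ K_inv_y
cov = rbf(grid, grid) - Ks @ np.linalg.solve(K, Ks.T)

# Draw from the posterior over the latent trend and estimate a
# "trend direction" probability: P(trend rising over the last 5 minutes).
samples = rng.multivariate_normal(mean, cov + 1e-6 * np.eye(grid.size), size=2000)
tdi = np.mean(samples[:, -1] > samples[:, -11])  # grid step 0.5 min, so 10 steps = 5 min
print(f"P(trend rising over final 5 minutes) = {tdi:.2f}")
```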
128. Röver C, Ursino M, Friede T, Zohar S. A straightforward meta-analysis approach for oncology phase I dose-finding studies. Stat Med 2022; 41:3915-3940. PMID: 35661205; DOI: 10.1002/sim.9484.
Abstract
Phase I early-phase clinical studies aim at investigating the safety and the underlying dose-toxicity relationship of a drug or combination. While little may still be known about the compound's properties, it is crucial to consider quantitative information available from any studies that may have been conducted previously on the same drug. A meta-analytic approach has the advantages of being able to properly account for between-study heterogeneity, and it may be readily extended to prediction or shrinkage applications. Here we propose a simple and robust two-stage approach for the estimation of maximum tolerated dose(s) utilizing penalized logistic regression and Bayesian random-effects meta-analysis methodology. Implementation is facilitated using standard R packages. The properties of the proposed methods are investigated in Monte Carlo simulations. The investigations are motivated and illustrated by two examples from oncology.
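The two-stage structure described here, per-study dose-toxicity estimates followed by a random-effects synthesis, can be illustrated with a normal-normal Bayesian meta-analysis evaluated on a grid over the heterogeneity parameter. A Python sketch with made-up study-level log-MTD estimates and an assumed half-normal heterogeneity prior (the paper itself uses penalized logistic regression at stage one and standard R packages; none of the numbers below are from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical study-level estimates of log(MTD) with standard errors,
# standing in for stage-1 penalized logistic fits of dose-toxicity curves.
y = np.array([1.10, 0.85, 1.30, 0.95])
s = np.array([0.20, 0.25, 0.15, 0.30])

# Stage 2: normal-normal random-effects model; heterogeneity tau on a grid
# with a half-normal(0.5) prior, overall mean mu given a flat prior.
taus = np.linspace(0, 1.5, 301)
log_post = np.empty_like(taus)
mu_hat = np.empty_like(taus)
mu_se = np.empty_like(taus)
for i, tau in enumerate(taus):
    v = s**2 + tau**2
    w = 1 / v
    mu_hat[i] = np.sum(w * y) / np.sum(w)
    mu_se[i] = np.sqrt(1 / np.sum(w))
    q = np.sum(w * (y - mu_hat[i]) ** 2)
    # log marginal likelihood of tau (mu integrated out under a flat prior)
    log_post[i] = -0.5 * (np.sum(np.log(v)) + np.log(np.sum(w)) + q)
    log_post[i] += -0.5 * (tau / 0.5) ** 2  # half-normal prior on tau

wts = np.exp(log_post - log_post.max())
wts /= wts.sum()

# Sample mu from the tau-mixture of conditional normal posteriors.
idx = rng.choice(taus.size, size=50_000, p=wts)
mu_draws = rng.normal(mu_hat[idx], mu_se[idx])
lo, hi = np.percentile(mu_draws, [2.5, 97.5])
print(f"pooled log-MTD {np.median(mu_draws):.2f} (95% CrI {lo:.2f} to {hi:.2f})")
```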
129. Baumann L, Krisam J, Kieser M. Monotonicity conditions for avoiding counterintuitive decisions in basket trials. Biom J 2022; 64:934-947. PMID: 35692061; DOI: 10.1002/bimj.202100287.
Abstract
In a basket trial, a new treatment is tested in different subgroups, called the baskets. In oncology, the baskets usually comprise patients with different primary tumor sites but a common biomarker. Most basket trials are uncontrolled phase II trials and investigate a binary endpoint such as tumor response. To combine the data of baskets that show a similar response to the treatment, many basket trial designs use Bayesian borrowing methods. This increases the power compared to a basketwise analysis. However, it can lead to posterior probabilities that are not monotonically increasing in the number of responses. We show that, as a consequence, two types of counterintuitive decisions can arise-one that occurs within a single trial and one that occurs when the results are compared between different trials. We propose two monotonicity conditions for the inference in basket trials. Using a design recently proposed by Fujikawa and colleagues, we investigate the case of a single-stage basket trial with equal sample sizes in all baskets and show that, as the number of baskets increases, these conditions are violated for a wide range of different borrowing strengths. We show that in the investigated scenarios pruning baskets can help to ensure that the monotonicity conditions hold and investigate how this affects type I error rate and power.
130. Koeppel M, Eckert K, Huber G. Trends in gross body coordination and cardiorespiratory fitness-a hierarchical Bayesian analysis of 35 000 2nd graders. Scand J Med Sci Sports 2022; 32:1026-1040. PMID: 35218079; DOI: 10.1111/sms.14146.
Abstract
OBJECTIVE A decline in motor competence in children has been observed over recent decades. However, most studies derive their inferences from only two distant points in time and thus neglect the variability of the temporal trends. METHODS Between 2000 and 2011, 35 018 second graders performed the Körperkoordinationstest für Kinder (KTK), consisting of four items (WB, HH, JS, and MS), and a six-minute run test (6MRT). A hierarchical Bayesian regression model with varying intercepts and temporal trends was fitted to the data. Age, sex, and BMI categories were included as input variables. The outcome variables were z-standardized to the initial cohort. RESULTS In all four KTK items, we observed a yearly decline: -0.020 (95% UI -0.038 to -0.001) for WB, -0.054 (95% UI -0.071 to -0.037) for HH, -0.028 (95% UI -0.045 to -0.012) for JS, and -0.088 (95% UI -0.108 to -0.067) for MS. For the 6MRT, no trend was identified. Overweight and obese children showed a disadvantage in all test scores. Negative time interactions were observed for overweight and obese children in HH and JS. A substantial between-city variation was observed for all temporal trends. The predictive validation was successful for all models but MS. CONCLUSION A general decline was confirmed for coordinative abilities but not for cardiorespiratory fitness. For all outcome variables, a substantial between-city variation was observed, highlighting the importance of environmental factors in motor development. For overweight and obese children, there is an urgent need for action.
131. Jin RN, Inada H, Négyesi J, Ito D, Nagatomi R. Carbon dioxide effects on daytime sleepiness and EEG signal: a combinational approach using classical frequentist and Bayesian analyses. Indoor Air 2022; 32:e13055. PMID: 35762237; PMCID: PMC9327715; DOI: 10.1111/ina.13055.
Abstract
Environmental carbon dioxide (CO2) could affect various mental and physiological activities in humans, but its effect on daytime sleepiness is still controversial. In a randomized and counterbalanced crossover study with twelve healthy volunteers, we applied a combinational approach using classical frequentist and Bayesian statistics to analyze the effect of CO2 exposure on daytime sleepiness and electroencephalogram (EEG) signals. Subjective sleepiness was measured with the Japanese Karolinska Sleepiness Scale (KSS-J) while EEG was recorded during CO2 exposure at different concentrations: normal (C), 4000 ppm (moderately high: MH), and 40 000 ppm (high: H). Daytime sleepiness was significantly affected by the exposure time but not by the CO2 condition in the classical statistics. On the other hand, the Bayesian paired t-test revealed that CO2 exposure in the MH condition might induce daytime sleepiness at the 40-min point compared with the C condition. By contrast, EEG was significantly affected by a short exposure to the H condition but not by exposure time. The Bayesian analysis of EEG was primarily consistent with the results of the classical statistics but showed different credible levels in the Bayes factor. Our results suggest that EEG may not be suitable for detecting objective sleepiness induced by CO2 exposure because the EEG signal was highly sensitive to environmental CO2 concentration. Our study may help researchers revisit whether EEG is applicable as an indicator of objective sleepiness.
132. Qu K, Bradley JR. Bayesian models for spatial count data with informative finite populations with application to the American Community Survey. J Appl Stat 2022; 50:2701-2716. PMID: 37720247; PMCID: PMC10503459; DOI: 10.1080/02664763.2022.2078289.
Abstract
The American Community Survey (ACS) is an ongoing program conducted by the US Census Bureau that publishes estimates of important demographic statistics over pre-specified administrative areas. ACS provides spatially referenced count-valued outcomes that are paired with finite populations. For example, the number of people below the poverty line and the total population for each county are estimated by ACS. One common assumption is that the spatially referenced count-valued outcome given the finite population is binomial distributed. This conditionally specified (CS) model does not define the joint relationship between the count-valued outcome and the finite population. Thus, we consider a joint model for the count-valued outcome and the finite population. When cross-dependence in our joint model can be leveraged to 'improve spatial prediction' we say that the finite population is 'informative.' We model the count given the finite population as binomial and the finite population as negative binomial and use multivariate logit-beta prior distributions. This leads to closed-form expressions of the full-conditional distributions for an efficient Gibbs sampler. We illustrate our model through simulations and our motivating application of ACS poverty estimates. These empirical analyses show the benefits of using our proposed model over the more traditional CS binomial model.
133. Siu TK. Bayesian nonlinear expectation for time series modelling and its application to Bitcoin. Empirical Economics 2022; 64:505-537. PMID: 35645455; PMCID: PMC9130704; DOI: 10.1007/s00181-022-02255-z.
Abstract
This paper proposes a two-stage approach to parametric nonlinear time series modelling in discrete time with the objective of incorporating uncertainty or misspecification in the conditional mean and volatility. At the first stage, a reference or approximating time series model is specified and estimated. At the second stage, Bayesian nonlinear expectations are introduced to incorporate model uncertainty or misspecification in prediction via specifying a family of alternative models. The Bayesian nonlinear expectations for prediction are constructed from closed-form Bayesian credible intervals evaluated using conjugate priors and residuals of the estimated approximating model. Using real Bitcoin data, including some periods of COVID-19, applications of the proposed method to forecasting and risk evaluation of Bitcoin are discussed via three major parametric nonlinear time series models, namely the self-exciting threshold autoregressive model, the generalized autoregressive conditional heteroscedasticity model and the stochastic volatility model. Supplementary information: The online version contains supplementary material available at 10.1007/s00181-022-02255-z.
134. Münch JL, Paul F, Schmauder R, Benndorf K. Bayesian inference of kinetic schemes for ion channels by Kalman filtering. eLife 2022; 11:e62714. PMID: 35506659; PMCID: PMC9342998; DOI: 10.7554/elife.62714.
Abstract
Inferring adequate kinetic schemes for ion channel gating from ensemble currents is a daunting task due to limited information in the data. We address this problem by using a parallelized Bayesian filter to specify hidden Markov models for current and fluorescence data. We demonstrate the flexibility of this algorithm by including different noise distributions. Our generalized Kalman filter outperforms both a classical Kalman filter and a rate equation approach when applied to patch-clamp data exhibiting realistic open-channel noise. The derived generalization also enables inclusion of orthogonal fluorescence data, making unidentifiable parameters identifiable and increasing the accuracy of the parameter estimates by an order of magnitude. By using Bayesian highest credibility volumes, we found that our approach, in contrast to the rate equation approach, yields a realistic uncertainty quantification. Furthermore, the Bayesian filter delivers negligibly biased estimates for a wider range of data quality. For some data sets, it identifies more parameters than the rate equation approach. These results also demonstrate the power of assessing the validity of algorithms by Bayesian credibility volumes in general. Finally, we show that our Bayesian filter is more robust against errors induced by either analog filtering before analog-to-digital conversion or by limited time resolution of fluorescence data than a rate equation approach.
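The generalized Bayesian filter itself is beyond a short example, but the classical Kalman filter it is benchmarked against follows a simple predict-correct recursion. A hypothetical one-state linear-Gaussian sketch in Python; the model, parameters, and data here are illustrative and do not reproduce the paper's kinetic schemes or noise distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-state linear-Gaussian model: x_t = a*x_{t-1} + w_t, y_t = x_t + v_t.
a, q, r = 0.98, 0.05, 1.0   # state transition, process variance, observation variance
T = 200
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

# Standard Kalman recursion: predict, then correct with each observation.
m, P = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    m_pred, P_pred = a * m, a * a * P + q   # predict
    S = P_pred + r                          # innovation variance
    K = P_pred / S                          # Kalman gain
    m = m_pred + K * (y[t] - m_pred)        # correct with observation y_t
    P = (1 - K) * P_pred
    est[t] = m

rmse_filter = np.sqrt(np.mean((est - x) ** 2))
rmse_raw = np.sqrt(np.mean((y - x) ** 2))
print(f"RMSE raw {rmse_raw:.3f} vs filtered {rmse_filter:.3f}")
```

The filtered estimate tracks the latent state far better than the raw observations, which is the property the paper's generalization extends to realistic open-channel noise and fluorescence data.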
135. Crook OM, Chung CW, Deane CM. Challenges and Opportunities for Bayesian Statistics in Proteomics. J Proteome Res 2022; 21:849-864. PMID: 35258980; PMCID: PMC8982455; DOI: 10.1021/acs.jproteome.1c00859.
Abstract
Proteomics is a data-rich science with complex experimental designs and an intricate measurement process. To obtain insights from the large data sets produced, statistical methods, including machine learning, are routinely applied. For a quantity of interest, many of these approaches only produce a point estimate, such as a mean, leaving little room for more nuanced interpretations. By contrast, Bayesian statistics allows quantification of uncertainty through the use of probability distributions. These probability distributions enable scientists to ask complex questions of their proteomics data. Bayesian statistics also offers a modular framework for data analysis by making dependencies between data and parameters explicit. Hence, specifying complex hierarchies of parameter dependencies is straightforward in the Bayesian framework. This allows us to use a statistical methodology which equals, rather than neglects, the sophistication of experimental design and instrumentation present in proteomics. Here, we review Bayesian methods applied to proteomics, demonstrating their potential power, alongside the challenges posed by adopting this new statistical framework. To illustrate our review, we give a walk-through of the development of a Bayesian model for dynamic organic orthogonal phase-separation (OOPS) data.
136. Hecksteden A, Forster S, Egger F, Buder F, Kellner R, Meyer T. Dwarfs on the Shoulders of Giants: Bayesian Analysis With Informative Priors in Elite Sports Research and Decision Making. Front Sports Act Living 2022; 4:793603. PMID: 35368412; PMCID: PMC8970347; DOI: 10.3389/fspor.2022.793603.
Abstract
While sample sizes in elite sports are necessarily small, so are the effects that may be relevant. This conundrum is complicated by an understandable reluctance of athletes to comply with extensive study requirements. In Bayesian analyses, pre-existing knowledge (e.g., from sub-elite trials) can be formally included to supplement scarce data. Moreover, some design specifics for small-sample research extend to the extreme case of a single subject. This provides the basis for actionable feedback (e.g., about individual responses), thereby incentivising participation. As a proof of concept, we conducted a replicated cross-over trial on the effect of cold-water immersion (CWI) on sprint performance recovery in soccer players. Times for the 30 m linear sprint and its initial 5 m section, respectively, were measured by light gates before and 24 h after induction of fatigue. Data were analysed by Bayesian and by standard frequentist methods. Informative priors are based on a published meta-analysis. Seven players completed the trial. Sprint performance was 4.156 ± 0.193 s for the 30 m linear sprint and 0.978 ± 0.064 s for the initial 5 m section. CWI improved recovery of sprint time for the initial 5 m section (difference to control: -0.060 ± 0.060 s, p = 0.004) but not for the full 30 m sprint (0.002 ± 0.115 s, p = 0.959), with general agreement between Bayesian and frequentist interval estimates. On the individual level, relevant differences between analytical approaches were present for most players. Changes in the two performance measures are correlated (p = 0.009), with fairly good reproducibility of individual response patterns. Bayesian analyses with informative priors may be a practicable and meaningful option, particularly for very small samples and when the analytical aim is decision making (use / don't use in the specific setting) rather than generalizable inference.
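The informative-prior idea is easiest to see in a conjugate normal-normal update: a prior from earlier literature is combined with a small-sample estimate by precision weighting. A hypothetical Python sketch; the prior and data values below are made up for illustration and are not the paper's meta-analytic prior or measurements:

```python
from statistics import NormalDist

# Conjugate normal-normal update: a hypothetical informative prior for the
# recovery effect of CWI (e.g., from a published meta-analysis) combined
# with a small-sample estimate from the team's own trial.
prior_mean, prior_sd = -0.03, 0.02     # assumed prior (seconds)
data_mean, data_se = -0.060, 0.022     # assumed team-level estimate

w_prior, w_data = 1 / prior_sd**2, 1 / data_se**2   # precision weights
post_var = 1 / (w_prior + w_data)
post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)

# Probability that CWI improves (shortens) 5 m sprint recovery time.
p_benefit = NormalDist(post_mean, post_var**0.5).cdf(0.0)
print(f"posterior {post_mean:.3f} s (sd {post_var**0.5:.3f}), P(benefit) = {p_benefit:.2f}")
```

The posterior mean lands between prior and data, pulled toward whichever is more precise, which is exactly how scarce elite data get supplemented.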
137. Schmidt D, Kahlen K, Bahr C, Friedel M. Towards a Stochastic Model to Simulate Grapevine Architecture: A Case Study on Digitized Riesling Vines Considering Effects of Elevated CO2. Plants (Basel, Switzerland) 2022; 11:801. PMID: 35336683; PMCID: PMC8953974; DOI: 10.3390/plants11060801.
Abstract
Modeling plant growth, in particular with functional-structural plant models, can provide tools to study the impacts of changing environments in silico. Simulation studies can serve as pilot studies, reducing the on-field experimental effort when predictive capabilities are given. Robust model calibration leads to less fragile predictions, while introducing uncertainties in predictions allows accounting for natural variability, resulting in stochastic plant growth models. In this study, stochastic model components that can be implemented into the functional-structural plant model Virtual Riesling are developed, relying on Bayesian model calibration, with the goal of enhancing the model towards a fully stochastic one. In this first step, model development targeting phenology, in particular budburst variability, phytomer development rate and internode growth, is presented in detail. Multi-objective optimization is applied to estimate a single set of cardinal temperatures, which is used in phenology and growth modeling based on a development-days approach. Measurements from two seasons of grapevines grown in a vineyard with free-air carbon dioxide enrichment (FACE) are used; thus, model building and selection are coupled with an investigation of whether including effects of the elevated CO2 conditions expected in 2050 would improve the models. The results show how natural variability complicates the detection of possible treatment effects, but demonstrate that Bayesian calibration in combination with mixed models can realistically recover natural shoot growth variability in predictions. We expect these and further stochastic model extensions to result in more realistic virtual plant simulations, which can be used for in silico studies of canopy microclimate and its effects on grape health and quality.
138. Russo D, Masegosa AR, Stol KJ. From anecdote to evidence: the relationship between personality and need for cognition of developers. Empirical Software Engineering 2022; 27:71. PMID: 35313539; PMCID: PMC8928712; DOI: 10.1007/s10664-021-10106-1.
Abstract
There is considerable anecdotal evidence suggesting that software engineers enjoy engaging in solving puzzles and other cognitive efforts. A tendency to engage in and enjoy effortful thinking is referred to as a person's 'need for cognition.' In this article we study the relationship between software engineers' personality traits and their need for cognition. Through a large-scale sample study of 483 respondents we collected data to capture the six 'bright' personality traits of the HEXACO model of personality, and three 'dark' personality traits. Data were analyzed using several methods including a multiple Bayesian linear regression analysis. The results indicate that ca. 33% of variation in developers' need for cognition can be explained by personality traits. The Bayesian analysis suggests four traits to be of particular interest in predicting need for cognition: openness to experience, conscientiousness, honesty-humility, and emotionality. Further, we also find that need for cognition of software engineers is, on average, higher than in the general population, based on a comparison with prior studies. Given the importance of human factors for software engineers' performance in general, and problem solving skills in particular, our findings suggest several implications for recruitment, working behavior, and teaming.
139. Aneman A, Frost S, Parr M, Skrifvars MB. Target temperature management following cardiac arrest: a systematic review and Bayesian meta-analysis. Crit Care 2022; 26:58. PMID: 35279209; PMCID: PMC8917746; DOI: 10.1186/s13054-022-03935-z.
Abstract
BACKGROUND Temperature control with target temperature management (TTM) after cardiac arrest has been endorsed by expert societies and adopted in international clinical practice guidelines, but recent evidence challenges the use of hypothermic TTM. METHODS Systematic review and Bayesian meta-analysis of clinical trials on adult survivors of cardiac arrest undergoing TTM for at least 12 h, comparing TTM versus no TTM or with a separation > 2 °C between intervention and control groups, using the PubMed/MEDLINE, EMBASE, and CENTRAL databases from inception to 1 September 2021 (PROSPERO CRD42021248140). All randomised and quasi-randomised controlled trials were considered. The risk ratio and 95% confidence interval for death (primary outcome) and unfavourable neurological recovery (secondary outcome) were captured using the original study definitions, censored up to 180 days after cardiac arrest. Bias was assessed using the updated Cochrane risk-of-bias tool for randomised trials, and certainty of evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation methodology. A hierarchical robust Bayesian model-averaged meta-analysis was performed using both minimally informative and data-driven priors and reported by mean risk ratio (RR) and its 95% credible interval (95% CrI). RESULTS In seven studies (three low bias, three intermediate bias, one high bias; very low to low certainty) recruiting 3792 patients, the RR with TTM 32-34 °C was 0.95 [95% CrI 0.78-1.09] for death and 0.93 [95% CrI 0.84-1.02] for unfavourable neurological outcome. The posterior probability of no benefit (RR ≥ 1) with TTM 32-34 °C was 24% for death and 12% for unfavourable neurological outcome. The posterior probabilities of favourable treatment effects of TTM 32-34 °C were highest for an absolute risk reduction of 2-4% for death (28-53% chance) and unfavourable neurological outcome (63-78% chance). Excluding four studies without active avoidance of fever in the control arm reduced the probability of achieving an absolute risk reduction > 2% for death or unfavourable neurological outcome to ≤ 50%. CONCLUSIONS The posterior probability distributions did not support the use of TTM at 32-34 °C compared with 36 °C, including active control of fever, to reduce the risk of death and unfavourable neurological outcome at 90-180 days. Any likely benefit of hypothermic TTM is smaller than targeted in RCTs to date.
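Posterior probabilities such as P(RR ≥ 1) fall out directly once a posterior for the risk ratio is available. A hypothetical Python sketch using a normal approximation on the log scale, only loosely matched to the interval reported above; the baseline risk used to translate the RR into an absolute risk reduction is an assumption for illustration, not a figure from the meta-analysis:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior for the risk ratio of death, approximated as
# normal on the log scale with an illustrative mean and spread.
log_rr = rng.normal(loc=np.log(0.95), scale=0.085, size=200_000)
rr = np.exp(log_rr)

p_no_benefit = np.mean(rr >= 1.0)        # P(RR >= 1), i.e., no benefit
ci = np.percentile(rr, [2.5, 97.5])      # 95% credible interval for RR

# Probability of an absolute risk reduction of at least 2%,
# assuming (hypothetically) a 50% baseline risk in the control arm.
baseline = 0.50
arr = baseline * (1.0 - rr)
p_arr_2pct = np.mean(arr >= 0.02)

print(f"95% CrI for RR: {ci[0]:.2f}-{ci[1]:.2f}; P(no benefit) = {p_no_benefit:.2f}")
print(f"P(ARR >= 2%) = {p_arr_2pct:.2f}")
```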
140. Park J, Pek J. Conducting Bayesian-Classical Hybrid Power Analysis with R Package Hybridpower. Multivariate Behavioral Research 2022:1-17. PMID: 35263213; DOI: 10.1080/00273171.2022.2038056.
Abstract
There are several approaches to incorporating uncertainty in power analysis. We review these approaches and highlight the Bayesian-classical hybrid approach that has been implemented in the R package hybridpower. Calculating Bayesian-classical hybrid power circumvents the problem of local optimality in which calculated power is valid if and only if the specified inputs are perfectly correct. hybridpower can compute classical and Bayesian-classical hybrid power for popular testing procedures including the t-test, correlation, simple linear regression, one-way ANOVA (with equal or unequal variances), and the sign test. Using several examples, we demonstrate features of hybridpower and illustrate how to elicit subjective priors, how to determine sample size from the Bayesian-classical approach, and how this approach is distinct from related methods. hybridpower can conduct power analysis for the classical approach, and more importantly, the novel Bayesian-classical hybrid approach that returns more realistic calculations by taking into account local optimality that the classical approach ignores. For users unfamiliar with R, we provide a limited number of RShiny applications based on hybridpower to promote the accessibility of this novel approach to power analysis. We end with a discussion on future developments in hybridpower.
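The Bayesian-classical hybrid idea, averaging classical power over a prior on the effect size, can be sketched without the hybridpower package itself. A minimal Python illustration for a two-sample test with a hypothetical normal prior; this uses a normal-approximation power formula, not hybridpower's exact routines:

```python
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(3)
std_normal = NormalDist()

def classical_power(delta, n_per_group, alpha=0.05):
    # Normal-approximation power of a two-sided two-sample test
    # for standardized effect size delta with n per group.
    z_a = std_normal.inv_cdf(1 - alpha / 2)
    ncp = delta * (n_per_group / 2.0) ** 0.5
    return std_normal.cdf(ncp - z_a) + std_normal.cdf(-ncp - z_a)

n = 64
# Classical power plugs in a single "known" effect size (local optimality).
power_classical = classical_power(0.5, n)

# Hybrid power averages classical power over a prior expressing
# uncertainty about the effect size (here a hypothetical N(0.5, 0.1^2)).
prior_draws = rng.normal(0.5, 0.1, size=20_000)
power_hybrid = np.mean([classical_power(d, n) for d in prior_draws])

print(f"classical {power_classical:.3f} vs hybrid {power_hybrid:.3f}")
```

With a symmetric prior centred on the plug-in value, hybrid power here comes out lower than the classical figure, illustrating how ignoring input uncertainty can overstate power.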
141. Du H, Enders C, Keller BT, Bradbury TN, Karney BR. A Bayesian Latent Variable Selection Model for Nonignorable Missingness. Multivariate Behavioral Research 2022; 57:478-512. PMID: 33529056; PMCID: PMC10170967; DOI: 10.1080/00273171.2021.1874259.
Abstract
Missing data are exceedingly common across a variety of disciplines, such as the educational, social, and behavioral sciences. The missing not at random (MNAR) mechanism, in which missingness is related to unobserved data, is widespread in real data and has detrimental consequences. However, existing MNAR-based methods have potential problems, such as leaving the data incomplete and failing to accommodate incomplete covariates with interactions, non-linear terms, and random slopes. We propose a Bayesian latent variable imputation approach to impute missing data due to MNAR (and other missingness mechanisms) and estimate the model of substantive interest simultaneously. In addition, even when the incomplete covariates involve interactions, non-linear terms, and random slopes, the proposed method can handle missingness appropriately. Computer simulation results suggested that the proposed Bayesian latent variable selection model (BLVSM) was quite effective when the outcome and/or covariates were MNAR. Except when the sample size was small, estimates from the proposed BLVSM tracked closely with those from the complete data analysis. With a small sample size, when the outcome was less predictable from the covariates, when the missingness proportions of the covariates and the outcome were larger, and when the missingness selection processes of the covariates and the outcome were more MNAR and MAR, the performance of BLVSM was less satisfactory. When the sample size was large, BLVSM always performed well. In contrast, the method with an MAR assumption provided biased estimates and undercoverage confidence intervals when the missingness was MNAR. The robustness and implementation of BLVSM with real data were also illustrated. The proposed method is available in the Blimp software application, and the paper includes a data analysis example illustrating its use.
|
142
|
Sadatsafavi M, Yoon Lee T, Gustafson P. Uncertainty and the Value of Information in Risk Prediction Modeling. Med Decis Making 2022; 42:661-671. [PMID: 35209762 PMCID: PMC9194963 DOI: 10.1177/0272989x221078789]
Abstract
BACKGROUND Because of the finite size of the development sample, predicted probabilities from a risk prediction model are inevitably uncertain. We apply value-of-information methodology to evaluate the decision-theoretic implications of prediction uncertainty. METHODS Adopting a Bayesian perspective, we extend the definition of the expected value of perfect information (EVPI) from decision analysis to net benefit calculations in risk prediction. In the context of model development, EVPI is the expected gain in net benefit by using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions for EVPI calculation using Monte Carlo simulations. We used subsets of data of various sizes from a clinical trial for predicting mortality after myocardial infarction to show how EVPI changes with sample size. RESULTS With a sample size of 1000 and at the prespecified threshold of 2% on predicted risks, the gains in net benefit using the proposed and the correct models were 0.0006 and 0.0011, respectively, resulting in an EVPI of 0.0005 and a relative EVPI of 87%. EVPI was zero only at unrealistically high thresholds (>85%). As expected, EVPI declined with larger samples. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction. CONCLUSION The development EVPI can be used to decide whether a model can advance to validation, whether it should be abandoned, or whether a larger development sample is needed. Value-of-information methods can be applied to explore decision-theoretic consequences of uncertainty in risk prediction and can complement inferential methods in predictive analytics. R code for implementing this method is provided.
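The net-benefit quantity at the heart of the EVPI calculation is straightforward to compute. A sketch on synthetic data follows; the simulated risks, noise level, and threshold are illustrative, not the authors' clinical-trial analysis:

```python
import numpy as np

def net_benefit(y, p, threshold):
    """Net benefit of treating patients whose predicted risk exceeds the threshold."""
    treat = p >= threshold
    tp = np.mean(treat & (y == 1))           # true positives per patient
    fp = np.mean(treat & (y == 0))           # false positives per patient
    return tp - fp * threshold / (1.0 - threshold)

rng = np.random.default_rng(1)
n = 1000
correct_risk = rng.beta(2, 8, n)             # hypothetical "correct" risks
y = rng.binomial(1, correct_risk)            # outcomes generated from them
model_risk = np.clip(correct_risk + rng.normal(0, 0.08, n), 1e-3, 1 - 1e-3)

z = 0.2                                      # illustrative decision threshold
gain = net_benefit(y, correct_risk, z) - net_benefit(y, model_risk, z)
```

The development EVPI averages such gains over the posterior (e.g., Bayesian bootstrap) distribution of the correct risks; in a single simulated sample the gain can be small or even negative.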
|
143
|
Perkins TA, Stephens M, Alvarez Barrios W, Cavany S, Rulli L, Pfrender ME. Performance of Three Tests for SARS-CoV-2 on a University Campus Estimated Jointly with Bayesian Latent Class Modeling. Microbiol Spectr 2022; 10:e0122021. [PMID: 35044220 PMCID: PMC8768831 DOI: 10.1128/spectrum.01220-21]
Abstract
Accurate tests for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) have been critical in efforts to control its spread. The accuracy of tests for SARS-CoV-2 has been assessed numerous times, usually in reference to a gold standard diagnosis. One major disadvantage of that approach is the possibility of error due to inaccuracy of the gold standard, which is especially problematic for evaluating testing in a real-world surveillance context. We used an alternative approach known as Bayesian latent class modeling (BLCM), which circumvents the need to designate a gold standard by simultaneously estimating the accuracy of multiple tests. We applied this technique to a collection of 1,716 tests of three types applied to 853 individuals on a university campus during a 1-week period in October 2020. We found that reverse transcriptase PCR (RT-PCR) testing of saliva samples performed at a campus facility had higher sensitivity (median, 92.3%; 95% credible interval [CrI], 73.2 to 99.6%) than RT-PCR testing of nasal samples performed at a commercial facility (median, 85.9%; 95% CrI, 54.7 to 99.4%). The reverse was true for specificity, although the specificity of saliva testing was still very high (median, 99.3%; 95% CrI, 98.3 to 99.9%). An antigen test was less sensitive and specific than both of the RT-PCR tests, although the sample sizes with this test were small and the statistical uncertainty was high. These results suggest that RT-PCR testing of saliva samples at a campus facility can be an effective basis for surveillance screening to prevent SARS-CoV-2 transmission in a university setting. IMPORTANCE Testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been vitally important during the COVID-19 pandemic. There are a variety of methods for testing for this virus, and it is important to understand their accuracy in choosing which one might be best suited for a given application. 
To estimate the accuracy of three different testing methods, we used a data set collected at a university that involved testing the same samples with multiple tests. Unlike most other estimates of test accuracy, we did not assume that one test was perfect but instead allowed for some degree of inaccuracy in all testing methods. We found that molecular tests performed on saliva samples at a university facility were about as accurate as molecular tests performed on nasal samples at a commercial facility. An antigen test appeared somewhat less accurate than the molecular tests, although this comparison carried high statistical uncertainty.
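The core of the latent class likelihood is a mixture over the unobserved infection status, with tests assumed conditionally independent given that status. The sketch below keeps a grid posterior tractable by treating one test's accuracy as known, which the paper does not do (it estimates all three tests jointly via MCMC); all counts and accuracies here are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n, prev = 853, 0.10                                # invented prevalence
se_true, sp_true = (0.92, 0.86), (0.99, 0.95)      # (test 1, test 2)

d = rng.binomial(1, prev, n)                       # latent infection status
def run_test(se, sp):
    return np.where(d == 1, rng.binomial(1, se, n), rng.binomial(1, 1 - sp, n))
t1, t2 = run_test(se_true[0], sp_true[0]), run_test(se_true[1], sp_true[1])
counts = {(a, b): int(np.sum((t1 == a) & (t2 == b))) for a in (0, 1) for b in (0, 1)}

# Grid posterior for (prevalence, Se1, Sp1) under flat priors; test 2 assumed known.
g = np.linspace(0.01, 0.99, 99)
pi, se1, sp1 = np.meshgrid(g, g, g, indexing="ij", sparse=True)
se2, sp2 = 0.86, 0.95

loglik = 0.0
for (a, b), c in counts.items():                   # conditional independence given d
    p_pos = (se1 if a else 1 - se1) * (se2 if b else 1 - se2)
    p_neg = ((1 - sp1) if a else sp1) * ((1 - sp2) if b else sp2)
    loglik = loglik + c * np.log(pi * p_pos + (1 - pi) * p_neg)

post = np.exp(loglik - loglik.max())
post /= post.sum()
prev_hat = float((post * pi).sum())                # posterior mean prevalence
```

The key point survives the simplification: no test is treated as a gold standard, and accuracy parameters are estimated from the joint test-result patterns alone.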
|
144
|
Jaya IGNM, Folmer H. Spatiotemporal high-resolution prediction and mapping: methodology and application to dengue disease. Journal of Geographical Systems 2022; 24:527-581. [PMID: 35221792 PMCID: PMC8857957 DOI: 10.1007/s10109-021-00368-0]
Abstract
Dengue disease has become a major public health problem. Accurate and precise identification, prediction and mapping of high-risk areas are crucial elements of an effective and efficient early warning system in countering the spread of dengue disease. In this paper, we present the fusion area-cell spatiotemporal generalized geoadditive-Gaussian Markov random field (FGG-GMRF) framework for joint estimation of an area-cell model, involving temporally varying coefficients, spatially and temporally structured and unstructured random effects, and spatiotemporal interaction of the random effects. The spatiotemporal Gaussian field is applied to determine the unobserved relative risk at cell level. It is transformed to a Gaussian Markov random field using the finite element method and the linear stochastic partial differential equation approach to solve the "big n" problem. Sub-area relative risk estimates are obtained as block averages of the cell outcomes within each sub-area boundary. The FGG-GMRF model is estimated by applying Bayesian Integrated Nested Laplace Approximation. In the application to Bandung city, Indonesia, we combine low-resolution area level (district) spatiotemporal data on population at risk and incidence and high-resolution cell level data on weather variables to obtain predictions of relative risk at subdistrict level. The predicted dengue relative risk at subdistrict level suggests significant fine-scale heterogeneities which are not apparent when examining the area level. The relative risk varies considerably across subdistricts and time, with the latter showing an increase in the period January-July and a decrease in the period August-December. Supplementary Information The online version contains supplementary material available at 10.1007/s10109-021-00368-0.
|
145
|
Shtossel O, Louzoun Y. Sampling bias minimization in disease frequency estimates. J Theor Biol 2022; 534:110972. [PMID: 34856201 DOI: 10.1016/j.jtbi.2021.110972]
Abstract
An accurate estimate of the number of infected individuals is crucial in any disease. Current estimates are mainly based on the fraction of positive samples or on the total number of positive samples. However, both methods are biased and sensitive to the sampling depth. We here propose an alternative method that uses the attributes of each sample to estimate the change in the total number of positive patients in the population. We present a Bayesian estimator that assumes a combination of condition- and time-dependent probabilities of being positive, with a mixed implicit-explicit solution for the probability that a person with condition i is positive at time t. We use this estimate to predict the total probability of being positive on a given day t. We show that the resulting estimates are smooth and not sensitive to the properties of the samples. Moreover, they are a better predictor of future mortality.
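The paper's estimator conditions positivity on per-sample attributes and time; a much-reduced sketch of the underlying idea stratifies a Beta-Binomial posterior by a single condition attribute and reweights to the population. All counts and weights below are invented:

```python
import numpy as np

# positives / tested per condition stratum, and each stratum's population share
pos    = np.array([30, 12, 4])
tested = np.array([100, 200, 400])
share  = np.array([0.1, 0.3, 0.6])

rng = np.random.default_rng(3)
# Beta(1, 1) prior per stratum -> Beta posterior; 10,000 joint posterior draws
draws = rng.beta(pos + 1, tested - pos + 1, size=(10_000, 3))
pop_rate = draws @ share                 # population-weighted positivity rate

est = pop_rate.mean()
lo, hi = np.quantile(pop_rate, [0.025, 0.975])
```

Unlike the raw positive fraction (46/700, about 6.6%), the weighted estimate corrects for strata being sampled at different depths, which is the sampling bias the paper targets.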
|
146
|
Factors influencing gait speed in community-dwelling older women: A Bayesian approach. Gait Posture 2022; 92:455-460. [PMID: 34999556 DOI: 10.1016/j.gaitpost.2021.12.022]
Abstract
BACKGROUND Human gait is a complex task resulting from the interaction of sensory perception, muscle force output, and sensory-motor integration, which declines with the aging process and impacts gait speed in older women. RESEARCH QUESTION What are the separate and combined impacts of sensory-motor factors on gait speed of older women? METHODS Sixty healthy older women (69.3 ± 5.9 years) volunteered for this study. A previous screening using Pearson's correlation selected variables significantly correlated with gait speed: age, plantar tactile perception, lower limb explosive force, and mean velocity (MV) of the center of pressure (CoP). Simple and multivariate regression models were performed with selected variables. The magnitude of evidence was obtained using Bayesian inference, determining posterior probabilities based on our data. RESULTS Gait speed was negatively correlated with age and positively correlated with plantar tactile perception, MV (Romberg index), and lower limb explosive force. The coefficient of determination (R2) varied between 0.06 for plantar tactile perception and 0.22 for explosive force (p < 0.05). The multivariate model, including age, MV (Romberg index), and lower limb explosive force, explained 44% (R2 = 0.44) of the variance in gait speed, with a small standard error of estimate (0.14 m/s). Bayesian inference confirmed the good posterior probability of the model. SIGNIFICANCE Age, plantar tactile perception, MV (Romberg index), and lower limb explosive force impact gait speed, whereas the combination of the first three factors has an excellent posterior probability of predicting or affecting gait speed.
|
147
|
Edwards J, Georgiades K. Reading Between the Lines: A Pursuit of Estimating the Population Prevalence of Mental Illness Using Multiple Data Sources. Can J Psychiatry 2022; 67:101-103. [PMID: 33969716 PMCID: PMC8892056 DOI: 10.1177/07067437211016255]
Abstract
Population-based prevalence estimates of mental illness are foundational to health service planning, strategic resource allocation, and the development and evaluation of public mental health policy. Generating valid, reliable, and context-specific population-level estimates is of utmost importance and can be achieved by combining various data sources. This pursuit benefits from the right combination of theory, applied statistics, and the conceptualization of available data sources as a collective rather than in isolation. We believe there is a need to read between the lines, as theory, methodology, and context (i.e., strengths and limitations) are what determine the meaningfulness of a combined prevalence estimate. Currently lacking is a gold standard approach to combining estimates from multiple data sources. Here, we compare and contrast various approaches to combining data and introduce an idea that leverages the strengths of pre-existing individually linked population-based survey and health administrative data sources currently available in Canada.
|
148
|
Faulstich SD, Schissler AG, Strickland MJ, Holmes HA. Statistical Comparison and Assessment of Four Fire Emissions Inventories for 2013 and a Large Wildfire in the Western United States. Fire (Basel) 2022; 5:27. [PMID: 35295881 PMCID: PMC8923622 DOI: 10.3390/fire5010027]
Abstract
Wildland fires produce smoke plumes that impact air quality and human health. To understand the effects of wildland fire smoke on humans, the amount and composition of the smoke plume must be quantified. Using a fire emissions inventory is one way to determine the emissions rate and composition of smoke plumes from individual fires. There are multiple fire emissions inventories, and each uses a different method to estimate emissions. This paper presents a comparison of four emissions inventories and their products: Fire INventory from NCAR (FINN version 1.5), Global Fire Emissions Database (GFED version 4s), Missoula Fire Labs Emissions Inventory (MFLEI (250 m) and MFLEI (10 km) products), and Wildland Fire Emissions Inventory System (WFEIS (MODIS) and WFEIS (MTBS) products). The outputs from these inventories are compared directly. Because there are no validation datasets for fire emissions, the outlying points from the Bayesian models developed for each inventory were compared with visible images and fire radiative power (FRP) data from satellite remote sensing. This comparison provides a framework to check fire emissions inventory data against additional data by providing a set of days to investigate closely. Results indicate that FINN and GFED likely underestimate emissions, while the MFLEI products likely overestimate emissions. No fire emissions inventory matched the temporal distribution of emissions from an external FRP dataset. A discussion of the differences impacting the emissions estimates from the four fire emissions inventories is provided, including a qualitative comparison of the methods and inputs used by each inventory and the associated strengths and limitations.
|
149
|
Hüsers J, Hafer G, Heggemann J, Wiemeyer S, John SM, Hübner U. Development and Evaluation of a Bayesian Risk Stratification Method for Major Amputations in Patients with Diabetic Foot Ulcers. Stud Health Technol Inform 2022; 289:212-215. [PMID: 35062130 DOI: 10.3233/shti210897]
Abstract
The diabetic foot ulcer (DFU), which 2%-6% of diabetes patients experience, is a severe health threat. It is closely linked to the risk of lower extremity amputation (LEA). When a DFU is present, the chief imperative is to initiate tertiary preventive actions to avoid amputation. In this light, clinical decision support systems (CDSS) can guide clinicians to identify DFU patients early. In this study, the PEDIS classification and a Bayesian logistic regression model are utilised to develop and evaluate a decision method for patient stratification. To this end, we conducted a Bayesian cutpoint analysis, which revealed an optimal cutpoint for the amputation risk of 0.28, with a sensitivity of 0.83 and a specificity of 0.66. These results show that although the specificity is low, the decision method identifies most patients actually at risk, which is a desirable feature when monitoring patients at risk of major amputation. This study shows that the PEDIS classification promises to provide a valid basis for DFU risk stratification in CDSS.
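The cutpoint trade-off the authors report (sensitivity 0.83, specificity 0.66 at a cutpoint of 0.28) can be reproduced in spirit on simulated risks. Below is a minimal frequentist sketch using Youden's J on invented data; the study instead derives the cutpoint's posterior distribution from its Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
risk = rng.beta(2, 5, n)                 # hypothetical predicted amputation risks
amp = rng.binomial(1, risk)              # simulated amputation outcomes

cuts = np.linspace(0.01, 0.99, 99)
sens = np.array([np.mean(risk[amp == 1] >= c) for c in cuts])
spec = np.array([np.mean(risk[amp == 0] < c) for c in cuts])
best = cuts[np.argmax(sens + spec - 1)]  # Youden's J selects the cutpoint
```

When missed amputations are costlier than false alarms, J can be replaced by a weighted criterion that shifts the cutpoint downward, trading specificity for sensitivity, which is the preference the paper argues for in amputation monitoring.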
|
150
|
Menendez D, Rosengren KS, Alibali MW. Detailed bugs or bugging details? The influence of perceptual richness across elementary school years. J Exp Child Psychol 2022; 213:105269. [PMID: 34416553 PMCID: PMC8463490 DOI: 10.1016/j.jecp.2021.105269]
Abstract
Visualizations are commonly used in educational materials; however, not all visualizations are equally effective at promoting learning. Prior research has supported the idea that both perceptually rich and bland visualizations are beneficial for learning and generalization. We investigated whether the perceptual richness of a life cycle diagram influenced children's learning of metamorphosis, a concept that prior work suggests is difficult for people to generalize. Using identical materials, Study 1 (N = 76) examined learning and generalization of metamorphosis in first- and second-grade students, and Study 2 (N = 53) did so in fourth- and fifth-grade students. Bayesian regression analyses revealed that first and second graders learned more from the lesson with the perceptually rich diagram. In addition, fourth and fifth graders generalized more with the bland diagram, but these generalizations tended to be incorrect (i.e., generalizing metamorphosis to animals that do not undergo this type of change). These findings differ from prior research with adults, in which bland diagrams led to more correct generalizations, suggesting that the effect of perceptual richness on learning and generalization might change over development.
|