26. Jimoh KA, Hashim N, Shamsudin R, Che Man H, Jahari M. Optimization of computational intelligence approach for the prediction of glutinous rice dehydration. J Sci Food Agric 2024. [PMID: 38451113 DOI: 10.1002/jsfa.13445]
Abstract
BACKGROUND Five computational intelligence approaches, namely Gaussian process regression (GPR), artificial neural network (ANN), decision tree (DT), ensemble of trees (EoT) and support vector machine (SVM), were used to describe the evolution of moisture during the dehydration process of glutinous rice. The hyperparameters of the models were optimized with three strategies: Bayesian optimization, grid search and random search. To understand the parameters that allow each intelligence model to adapt to the dehydration process, global sensitivity analysis (GSA) was used to compute the impact of the input variables on the model output. RESULT The results show that the optimal configurations include a 3-9-1 topology trained with the Bayesian regularization function for ANN, a Gaussian kernel function for SVM, a Matérn covariance function combined with a zero mean function for GPR, the boosting method for EoT and a minimum leaf size of 4 for DT. GPR achieved the highest performance, with R2 values of 100% and 99.71% during calibration and testing of the model, respectively. GSA reveals that all the models rely chiefly on the variation in time as the main factor affecting the model outputs. CONCLUSION Therefore, the computational intelligence models, especially GPR, can be applied for an effective description of moisture evolution during small-scale and industrial dehydration of glutinous rice. © 2024 Society of Chemical Industry.
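The abstract above reports a zero-mean GPR with a Matérn covariance as the best-performing model. A minimal sketch of that idea, assuming a Matérn 5/2 kernel (the abstract does not state the smoothness order) and a purely synthetic exponential drying curve in place of the paper's data:

```python
import numpy as np

def matern52(x1, x2, length_scale=2.0):
    """Matern 5/2 covariance between two 1-D coordinate arrays."""
    r = np.abs(x1[:, None] - x2[None, :])
    s = np.sqrt(5.0) * r / length_scale
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)

def gpr_predict(t_train, y_train, t_test, length_scale=2.0, jitter=1e-8):
    """Zero-mean GP regression: posterior mean at t_test via Cholesky solve."""
    K = matern52(t_train, t_train, length_scale) + jitter * np.eye(t_train.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return matern52(t_test, t_train, length_scale) @ alpha

# hypothetical drying curve: moisture ratio decaying over time (h)
t = np.linspace(0.0, 10.0, 30)
mr = np.exp(-0.35 * t)
mr_hat = gpr_predict(t, mr, t)
r2 = 1.0 - np.sum((mr - mr_hat) ** 2) / np.sum((mr - mr.mean()) ** 2)
```

On smooth noise-free data like this, the GP interpolates the training points almost exactly, which mirrors the near-100% calibration R2 reported above.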
27. Jeon S, Lemosquet S, Toulemonde AC, Kiessé TS, Nozière P. Sensitivity analysis of the INRA 2018 feeding system for ruminants by a one-at-a-time approach: Effects of dietary input variables on predictions of multiple responses of dairy cattle. J Dairy Sci 2024:S0022-0302(24)00533-2. [PMID: 38460873 DOI: 10.3168/jds.2023-24361]
Abstract
In the INRA 2018 feeding system for ruminants, the prediction of multiple animal responses is based on the integration of the characteristics of the animal and the available feedstuff characteristics, as well as the rationing objectives. In this framework, the characterization of feedstuffs in terms of net energy, digestible protein, and fill units requires information on their chemical composition, digestibility, and degradability. Despite the importance of these feed characteristics, a comprehensive assessment of their impact on the responses predicted by the INRA 2018 feeding system has not been carried out. Thus, our study investigated how variables predicted by the INRA feeding system (i.e., outputs) for dairy cows are affected by variation in feed characterization (i.e., inputs). Five input variables were selected for the sensitivity analysis (SA): CP, OM apparent digestibility (OMd), GE, effective degradability of nitrogen assuming a passage rate of 6%/h (ED6_N) and true intestinal digestibility (dr_N) of nitrogen. A one-at-a-time SA was performed on predicted digestive, productive and environmental output variables for dairy cows with 6 contrasted diets. These 6 diets were formulated to meet 95% of the potential daily milk production (37.5 kg) of a multiparous cow at wk 14 of lactation. Then, the values of the 5 key input variables of each feedstuff were randomly sampled around the INRA 2018 feed table values (reference point). The response of the output variable to the variation of the input variable was quantified and compared using the tangent value at the reference point and the normalized sensitivity coefficient. Among the major final output variables, CP and dr_N had the greatest impact on nitrogen (N) excretion in urine (as a proportion of total fecal and urinary N excretion, UN/TN), OMd and GE had the greatest impact on N utilization efficiency (N in milk as proportion of intake N, NUE), and ED6_N had the greatest impact on milk protein yield (MPY).
Additionally, CP, GE, and dr_N had the least effect on methane emission, OMd had the least effect on UN/TN, and ED6_N had the least effect on NUE. The responses of most output variables to ED6_N and dr_N variations were highly dependent on diet, and were related to the ratio between PDI (i.e., metabolizable protein) and UFL (i.e., NEL) at the reference point of each diet. In conclusion, we were able to analyze the response of output variables to the variations of the input variables, using the tangent and its normalized value at the reference point. The predicted final outputs were more impacted by variations in CP, GE, and OMd. The other 2 input variables, ED6_N and dr_N, had a smaller effect on the final output variables, but the responses varied between the diets according to their PDI/UFL ratio. Among the final output variables affected by ED6_N, MPY was the most impacted, but when quantified this impact was at an acceptable level. Our present study was conducted using 6 representative diets for dairy cattle fed at their potential, but should be completed by the analysis of more diverse conditions.
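The one-at-a-time approach above quantifies each response by the tangent at the reference point and its normalized value. A minimal sketch of that coefficient via central finite differences, using a hypothetical power-law response in place of the INRA model:

```python
import numpy as np

def normalized_sensitivity(f, x0, i, rel_step=1e-5):
    """Normalized sensitivity coefficient at the reference point x0:
    the tangent df/dx_i rescaled by x0_i / f(x0), so that responses of
    outputs with different units can be compared."""
    x0 = np.asarray(x0, dtype=float)
    h = rel_step * x0[i]
    up, dn = x0.copy(), x0.copy()
    up[i] += h
    dn[i] -= h
    tangent = (f(up) - f(dn)) / (2.0 * h)
    return tangent * x0[i] / f(x0)

# hypothetical response surface standing in for a predicted output
def output(x):
    cp, omd = x
    return cp**1.5 * omd

s_cp = normalized_sensitivity(output, [150.0, 0.75], 0)
s_omd = normalized_sensitivity(output, [150.0, 0.75], 1)
```

For a power law the normalized coefficient equals the exponent (here 1.5 and 1.0), which is why this normalization makes impacts comparable across inputs and outputs.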
28. Xiaowu W, Qiang Z, Yike H, Wu Z, Yiheng J, Xuemei C, Sen L, Jiong C. Causal relationship between atrial fibrillation/warfarin and cutaneous melanoma: a two-sample Mendelian randomization study. Front Med (Lausanne) 2024; 11:1336849. [PMID: 38504913 PMCID: PMC10948620 DOI: 10.3389/fmed.2024.1336849]
Abstract
Purpose In recent years, the relationship between malignant tumors and atrial fibrillation has attracted increasing attention. Atrial fibrillation can also cause a series of adverse events, such as thromboembolism, for which warfarin is often prescribed. However, the relationships between cutaneous melanoma and atrial fibrillation, and between cutaneous melanoma and warfarin, remain unclear. Therefore, we used a two-sample Mendelian randomization approach to assess the causal relationship between atrial fibrillation/warfarin and cutaneous melanoma (cM). Methods Atrial fibrillation (ukb-b-11550; nCase = 3,518, nControl = 459,415) and warfarin (ukb-b-13248; nCase = 4,623, nControl = 458,310) were taken as exposures, with genome-wide association study (GWAS) data from the United Kingdom Biobank, and cM (ieu-b-4969; nCase = 3,751, nControl = 372,016) as the outcome, with GWAS data from the IEU Open GWAS project. Single-nucleotide polymorphisms (SNPs) were then filtered from the GWAS studies using quality control measures. Two-sample Mendelian randomization (MR) analysis was performed to explore the causal relationship between atrial fibrillation or warfarin and cM, with inverse variance weighting (IVW) as the primary analytical method. Finally, heterogeneity and sensitivity analyses were performed to ensure the accuracy of the results. Results A causal relationship was observed between atrial fibrillation and cutaneous melanoma, and between warfarin and cutaneous melanoma. Conclusion Atrial fibrillation may play a causal role in the development of cutaneous melanoma, but the mechanism, as well as the causal relationship between warfarin and cutaneous melanoma, needs to be further elucidated.
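The IVW method named above as the primary analysis combines per-SNP effect ratios with inverse-variance weights. A minimal fixed-effect sketch on toy summary statistics (the instrument values below are illustrative, not the study's data):

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance-weighted MR estimate: equivalent to a
    weighted regression of SNP-outcome effects on SNP-exposure effects
    through the origin, weighted by 1/se_outcome^2."""
    bx = np.asarray(beta_exposure, dtype=float)
    by = np.asarray(beta_outcome, dtype=float)
    w = 1.0 / np.asarray(se_outcome, dtype=float) ** 2
    est = np.sum(w * bx * by) / np.sum(w * bx**2)
    se = 1.0 / np.sqrt(np.sum(w * bx**2))
    return est, se

# toy summary statistics: 4 instruments consistent with a causal effect of 0.5
bx = [0.10, 0.15, 0.08, 0.12]
by = [0.05, 0.075, 0.04, 0.06]
se = [0.01, 0.012, 0.009, 0.011]
est, est_se = ivw_estimate(bx, by, se)
```

With all per-SNP ratios equal, the estimate recovers the common causal effect exactly; heterogeneity among ratios is what the study's follow-up heterogeneity tests probe.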
29. He J, Zhan L, Yang Y, Xu Y. Global Sensitivity Analysis of Factors Influencing the Surface Temperature of Mold during Autoclave Processing. Polymers (Basel) 2024; 16:705. [PMID: 38475388 DOI: 10.3390/polym16050705]
Abstract
During the process of forming carbon fiber reinforced plastics (CFRP) in an autoclave, a deep understanding of the global sensitivity of the factors influencing mold surface temperature is of paramount importance for the thermal optimization of large frame-type molds and for enhancing curing quality. In this study, the effects of the convective heat transfer coefficient (CHTC), the thickness of the composite laminates (TCL), the thickness of the mold facesheet (TMF), the mold material type (MMT), and the thickness of the auxiliary materials layer (TAL) on the mold surface temperature were quantitatively assessed. This assessment was conducted by building a thermal-chemical curing model of the composite laminates and applying the Sobol global sensitivity analysis (GSA) method. Additionally, the interactions among these factors were investigated to gain a comprehensive understanding of their combined effects. The results show that the sensitivity order of these factors is as follows: CHTC > MMT > TMF > TCL > TAL. Moreover, CHTC, MMT, and TMF are the main factors influencing mold surface temperature, as the sum of their first-order sensitivity indices accounts for over 97.3%. The influence of a single factor is more significant than that of the interactions between factors, since the sum of the first-order sensitivity indices of the factors is more than 78.1%. This study will support the development of science-based guidelines for the thermal design of molds and associated heating equipment.
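The first-order Sobol indices reported above measure the share of output variance explained by each input alone. A minimal Saltelli-style Monte Carlo sketch on a toy additive model (not the paper's curing model), whose analytic indices are 0.9 and 0.1:

```python
import numpy as np

def sobol_first_order(f, dim, n=20000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    for a function of `dim` independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only column i
        s[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return s

# additive toy model y = 3*x1 + x2: analytic indices are 9/10 and 1/10
s1, s2 = sobol_first_order(lambda x: 3.0 * x[:, 0] + x[:, 1], 2)
```

When the first-order indices sum close to 1, as in the abstract's 97.3%, individual factors dominate and interaction effects are minor.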
30. Mrad D, Boukhari S, Dairi S, Djebbar Y. Modeling the sediment yield and estimating the best management practices in the Seybouse basin, Northeastern Algeria. Water Sci Technol 2024; 89:1497-1511. [PMID: 38557714 DOI: 10.2166/wst.2024.067]
Abstract
Identifying areas vulnerable to erosion within the watershed and implementing best management practices (BMPs) are crucial steps in mitigating watershed degradation by minimizing sediment yields. The present study evaluates and identifies BMPs in the Seybouse basin, northeastern Algeria, using the Soil and Water Assessment Tool (SWAT) model. After successful calibration and validation, the model demonstrated a satisfactory ability to simulate monthly discharge and sediment. The calibrated model was then employed to evaluate the efficacy of diverse management practices in sediment control. Three conservation practices, namely vegetated filter strips (VFSs), grade stabilization structures (GSSs), and terracing, were evaluated in SWAT. The average annual sediment yield in the Seybouse watershed is determined to be 14.43 t/ha/year, constituting 71% of the total soil loss. VFS demonstrated a sediment reduction of 37.30%, GSS 20.40%, and terracing 42.30%. Among these strategies, terracing results in the greatest reduction, followed by VFS. The results of this study can be useful for informed decision-making regarding optimal watershed management strategies.
31. Nian F, Li F. Epidemic propagation risk study with effective fractal dimension. Risk Anal 2024; 44:612-622. [PMID: 37435956 DOI: 10.1111/risa.14192]
Abstract
In this article, the risk of epidemic transmission on complex networks is studied from the perspective of effective fractal dimension. First, we introduce the method of calculating the effective fractal dimension D_B of the network, taking a scale-free network as an example. Second, we propose a construction method for an administrative fractal network and calculate its D_B. Using the classical susceptible-exposed-infectious-removed (SEIR) infectious disease model, we simulate the virus propagation process on the administrative fractal network. The results show that the larger D_B is, the higher the risk of virus transmission. We then propose five parameters P, M, B, F, and D, where P denotes population mobility, M denotes geographical distance, B denotes GDP, F denotes D_B, and D denotes population density. Combining these five parameters yields the new epidemic growth index formula I = (P + (1 - M) + B)^(F + D), and the validity of I in epidemic transmission risk assessment is demonstrated by parameter sensitivity analysis and reliability analysis. Finally, we also confirm the reliability of the SEIR dynamic transmission model in simulating early COVID-19 transmission trends and the ability of timely quarantine measures to effectively control the spread of the epidemic.
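The growth index formula above is simple enough to evaluate directly. A minimal sketch, assuming (the abstract does not say) that the five factors are normalized to [0, 1]:

```python
def growth_index(P, M, B, F, D):
    """Epidemic growth index I = (P + (1 - M) + B)^(F + D), with the five
    factors assumed scaled to [0, 1]."""
    return (P + (1.0 - M) + B) ** (F + D)

# a larger effective fractal dimension F raises the index (higher risk),
# provided the base P + (1 - M) + B exceeds 1
low_f = growth_index(0.6, 0.3, 0.5, F=0.5, D=0.4)
high_f = growth_index(0.6, 0.3, 0.5, F=0.9, D=0.4)
```

This reproduces the qualitative finding that a larger D_B (entering as F in the exponent) implies higher transmission risk.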
32. Haider MA, Pearce KJ, Chesler NC, Hill NA, Olufsen MS. Application and reduction of a nonlinear hyperelastic wall model capturing ex vivo relationships between fluid pressure, area, and wall thickness in normal and hypertensive murine left pulmonary arteries. Int J Numer Method Biomed Eng 2024; 40:e3798. [PMID: 38214099 DOI: 10.1002/cnm.3798]
Abstract
Pulmonary hypertension is a cardiovascular disorder manifested by elevated mean arterial blood pressure (>20 mmHg) together with vessel wall stiffening and thickening due to alterations in collagen, elastin, and smooth muscle cells. Hypoxia-induced (type 3) pulmonary hypertension can be studied in animals exposed to a low oxygen environment for prolonged time periods, leading to biomechanical alterations in vessel wall structure. This study introduces a novel approach to formulating a reduced-order nonlinear elastic structural wall model for a large pulmonary artery. The model relating blood pressure and area is calibrated using ex vivo measurements of vessel diameter and wall thickness changes, under controlled pressure conditions, in left pulmonary arteries isolated from control and hypertensive mice. A two-layer, hyperelastic, and anisotropic model incorporating residual stresses is formulated using the Holzapfel-Gasser-Ogden model. Complex relations predicting vessel area and wall thickness with increasing blood pressure are derived and calibrated using the data. Sensitivity analysis, parameter estimation, subset selection, and physical plausibility arguments are used to systematically reduce the 16-parameter model to one in which a much smaller subset of identifiable parameters is estimated via solution of an inverse problem. Our final reduced one-layer model includes a single set of three elastic moduli. Estimated ranges of these parameters demonstrate that nonlinear stiffening is dominated by elastin in the control animals and by collagen in the hypertensive animals. The pressure-area relation developed in this novel manner has potential impact on one-dimensional fluids network models of vessel wall remodeling in the presence of cardiovascular disease.
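The constitutive law named above is the Holzapfel-Gasser-Ogden model. A minimal sketch of its standard strain-energy function (the paper's full two-layer formulation with residual stresses is far richer; the parameter values here are illustrative only):

```python
import numpy as np

def hgo_energy(I1, I4, I6, mu, k1, k2):
    """Strain energy of the incompressible Holzapfel-Gasser-Ogden model:
    a neo-Hookean matrix term plus exponential terms for two collagen
    fiber families, which contribute only when stretched (I_i > 1)."""
    psi = 0.5 * mu * (I1 - 3.0)
    for Ii in (I4, I6):
        if Ii > 1.0:
            psi += k1 / (2.0 * k2) * np.expm1(k2 * (Ii - 1.0) ** 2)
    return psi

# unloaded reference state: identity deformation stores zero energy
psi_ref = hgo_energy(I1=3.0, I4=1.0, I6=1.0, mu=10.0, k1=2.0, k2=1.0)
# inflated state: fibers engaged, stored energy strictly positive
psi_load = hgo_energy(I1=3.4, I4=1.2, I6=1.2, mu=10.0, k1=2.0, k2=1.0)
```

The exponential fiber terms are what produce the nonlinear stiffening the abstract attributes to collagen in the hypertensive animals.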
33. Hilhorst PLJ, Quicken S, van de Vosse FN, Huberts W. Efficient sensitivity analysis for biomechanical models with correlated inputs. Int J Numer Method Biomed Eng 2024; 40:e3797. [PMID: 38116742 DOI: 10.1002/cnm.3797]
Abstract
In most variance-based sensitivity analysis (SA) approaches applied to biomechanical models, statistical independence of the model inputs is assumed. However, the model inputs are often correlated. This can alter the interpretation of the SA results, which may severely impact the guidance provided during model development and personalization. Potential reasons for the infrequent usage of SA techniques that account for input correlation are the associated high computational costs, especially for models with many parameters, and the fact that the input correlation structure is often unknown. The aim of this study was to propose an efficient correlated global sensitivity analysis method by applying a surrogate model-based approach. Furthermore, this article demonstrates how correlated SA should be interpreted and how the applied method can guide the modeler during model development and personalization, even when the correlation structure is not entirely known beforehand. The proposed methodology was applied to a typical example of a pulse wave propagation model and resulted in accurate SA results that could be obtained at a theoretically 27,000× lower computational cost compared to the correlated SA approach without a surrogate model. Furthermore, our results demonstrate that input correlations can significantly affect SA results, which emphasizes the need to thoroughly investigate the effect of input correlations during model development. We conclude that our proposed surrogate-based SA approach allows modelers to efficiently perform correlated SA on complex biomechanical models and to focus on input prioritization, input fixing and model reduction, or assessing the dependency structure between parameters.
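A prerequisite for correlated SA is generating input samples with a prescribed dependency structure. One common route, sketched here as an assumption rather than the paper's actual method, is a Gaussian copula: draw correlated normals and map each margin to U(0,1):

```python
import numpy as np
from math import erf

def correlated_uniforms(corr, n, seed=0):
    """Sample correlated U(0,1) inputs via a Gaussian copula: correlated
    standard normals (Cholesky factor of `corr`) mapped through the normal
    CDF. `corr` is the correlation in normal space; the induced rank
    correlation is close to, but not identical to, these entries."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr, dtype=float))
    z = rng.standard_normal((n, L.shape[0])) @ L.T
    phi = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / np.sqrt(2.0))))
    return phi(z)

u = correlated_uniforms([[1.0, 0.8], [0.8, 1.0]], 5000)
emp_corr = np.corrcoef(u[:, 0], u[:, 1])[0, 1]
```

The uniform columns can then be transformed to the physical input distributions before being fed to the model or its surrogate.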
34. Aleshin-Guendel S, Sadinle M, Wakefield J. The central role of the identifying assumption in population size estimation. Biometrics 2024; 80:ujad028. [PMID: 38456546 DOI: 10.1093/biomtc/ujad028]
Abstract
The problem of estimating the size of a population based on a subset of individuals observed across multiple data sources is often referred to as capture-recapture or multiple-systems estimation. This is fundamentally a missing data problem, where the number of unobserved individuals represents the missing data. As with any missing data problem, multiple-systems estimation requires users to make an untestable identifying assumption in order to estimate the population size from the observed data. If an appropriate identifying assumption cannot be found for a data set, no estimate of the population size should be produced based on that data set, as models with different identifying assumptions can produce arbitrarily different population size estimates, even with identical fits to the observed data. Approaches to multiple-systems estimation often do not explicitly specify identifying assumptions. This makes it difficult to decouple the specification of the model for the observed data from the identifying assumption and to provide justification for the identifying assumption. We present a re-framing of the multiple-systems estimation problem that leads to an approach that decouples the specification of the observed-data model from the identifying assumption, and discuss how common models fit into this framing. This approach takes advantage of existing software and facilitates various sensitivity analyses. We demonstrate our approach in a case study estimating the number of civilian casualties in the Kosovo war.
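The simplest instance of the point made above is the classic two-list estimator, whose identifying assumption is that the two lists capture individuals independently. A minimal sketch using Chapman's bias-corrected form (an illustration of the general idea, not the paper's method):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-list capture-recapture estimator.
    The identifying assumption, untestable from the observed data alone,
    is that the two lists record individuals independently."""
    if m < 0 or m > min(n1, n2):
        raise ValueError("overlap m must satisfy 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# two sources each record 100 individuals, 50 appear on both lists
n_hat = chapman_estimate(100, 100, 50)
```

Under positive dependence between lists the same observed counts would imply a larger true population, which is exactly why the abstract stresses that the assumption, not the data fit, drives the estimate.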
35. Liu S, Yang S, Zhang Y, Liu G(F). Multiply robust estimators in longitudinal studies with missing data under control-based imputation. Biometrics 2024; 80:ujad036. [PMID: 38393335 PMCID: PMC10885818 DOI: 10.1093/biomtc/ujad036]
Abstract
Longitudinal studies are often subject to missing data. The recent guidance from regulatory agencies, such as the ICH E9(R1) addendum, addresses the importance of defining a treatment effect estimand with consideration of intercurrent events. Jump-to-reference (J2R) is one classical control-based scenario for treatment effect evaluation, where participants in the treatment group are assumed, after intercurrent events, to have the same disease progression as those with identical covariates in the control group. We establish new estimators to assess the average treatment effect based on a proposed potential outcomes framework under J2R. Various identification formulas are constructed, motivating estimators that rely on different parts of the observed data distribution. Moreover, we obtain a novel estimator inspired by the efficient influence function, with multiple robustness in the sense that it achieves n^(1/2)-consistency if any pair of the multiple nuisance functions is correctly specified, or if the nuisance functions converge at a rate not slower than n^(-1/4) when using flexible modeling approaches. The finite-sample performance of the proposed estimators is validated in simulation studies and an antidepressant clinical trial.
36. Liu CC, Yu RX, Aitkin M. The flaw of averages: Bayes factors as posterior means of the likelihood ratio. Pharm Stat 2024. [PMID: 38282048 DOI: 10.1002/pst.2355]
Abstract
As an alternative to the Frequentist p-value, the Bayes factor (or ratio of marginal likelihoods) has been regarded as one of the primary tools for Bayesian hypothesis testing. In recent years, several researchers have begun to re-analyze results from prominent medical journals, as well as from trials for FDA-approved drugs, to show that Bayes factors often give divergent conclusions from those of p-values. In this paper, we investigate the claim that Bayes factors are straightforward to interpret as directly quantifying the relative strength of evidence. In particular, we show that for nested hypotheses with consistent priors, the Bayes factor for the null over the alternative hypothesis is the posterior mean of the likelihood ratio. By re-analyzing 39 results previously published in the New England Journal of Medicine, we demonstrate how the posterior distribution of the likelihood ratio can be computed and visualized, providing useful information beyond the posterior mean alone.
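The central identity above, that the Bayes factor for the null equals the posterior mean of the likelihood ratio, can be checked numerically. A minimal sketch with a point null theta = 0, x | theta ~ N(theta, 1), and a discretized N(0, 1) prior under the alternative (a toy setup, not the paper's re-analyses):

```python
import numpy as np

def normal_pdf(x, mu, var=1.0):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# discretized N(0, 1) prior for theta under H1
theta = np.linspace(-4.0, 4.0, 401)
prior = normal_pdf(theta, 0.0)
prior /= prior.sum()

x_obs = 1.8
lik = normal_pdf(x_obs, theta)

# Bayes factor for H0: theta = 0, as a ratio of marginal likelihoods
bf01 = normal_pdf(x_obs, 0.0) / np.sum(lik * prior)

# ...and as the posterior mean of the likelihood ratio L(theta0) / L(theta)
posterior = lik * prior
posterior /= posterior.sum()
bf01_post_mean = np.sum(posterior * normal_pdf(x_obs, 0.0) / lik)
```

The two quantities agree to floating-point precision, since averaging L(theta0)/L(theta) over the posterior cancels the likelihood and leaves the ratio of marginals.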
37. Li X, Ning Z, Lü M. Microchannel Gas Flow in the Multi-Flow Regime Based on the Lattice Boltzmann Method. Entropy (Basel) 2024; 26:84. [PMID: 38248209 PMCID: PMC10814513 DOI: 10.3390/e26010084]
Abstract
In this work, a lattice Boltzmann method (LBM) for studying microchannel gas flow is developed for the multi-flow regime. In the LBM, by comparing previous studies' results on effective viscosity in multi-flow regimes, the values of the rarefaction factor applicable to multi-flow regions were determined, and the relationship of the relaxation time and the Knudsen (Kn) number to the rarefaction factor is given. The Kn number is introduced into the second-order slip boundary condition together with the combined bounce-back/specular-reflection (CBBSR) scheme to capture gas flow in the multi-flow regime. A sensitivity analysis of the dimensionless flow rate to the adjustable parameters was carried out using the Taguchi method, and the values of the adjustable parameters were determined based on its results. The results show that the dimensionless flow rate is more sensitive to j than to h. Numerical simulations of Poiseuille flow and pulsating flow in a microchannel with second-order slip boundary conditions were carried out to validate the method. The simulated velocity profiles and dimensionless flow rates are consistent with the multi-flow regime, and the annular velocity profile in the microchannel is reflected across the pulsating-flow phases.
38. Wang L, Han Y, Cao C, Hu H, Li H. The non-linear link between non-high-density lipoprotein to high-density lipoprotein ratio and the risk of stroke in middle-aged and older adults in Chinese: a prospective cohort study from the China health and retirement longitudinal study. Front Endocrinol (Lausanne) 2024; 14:1303336. [PMID: 38288470 PMCID: PMC10823364 DOI: 10.3389/fendo.2023.1303336]
Abstract
Objective This study aims to assess the association between the non-HDL-c/HDL-c ratio and stroke risk among middle-aged and older adults participating in the China Health and Retirement Longitudinal Study (CHARLS). Methods This prospective cohort analysis enrolled a total of 10,183 participants who met the designated criteria from CHARLS between 2011 and 2012. We then used the Cox proportional-hazards regression model to explore the relationship between the baseline non-HDL-c/HDL-c ratio and stroke risk. Using Cox proportional-hazards regression with a cubic spline function, we identified the non-linear relationship between the non-HDL-c/HDL-c ratio and stroke occurrence. A series of sensitivity analyses were also carried out. Results The average age of the participants was 59.16 ± 9.35 years, and 4,735 individuals (46.68%) were male. Over a median follow-up period of 7.0 years, a total of 1,191 people (11.70%) experienced a stroke. In a fully adjusted Cox proportional-hazards regression model, we found no statistically significant linear correlation between the non-HDL-c/HDL-c ratio and the risk of stroke (HR=1.022; 95% CI 0.964, 1.083). Nevertheless, we did observe a non-linear relationship and saturation effect between the non-HDL-c/HDL-c ratio and stroke. Employing a two-piece Cox proportional-hazards regression model and a recursive algorithm, we determined an inflection point of 2.685 for the non-HDL-c/HDL-c ratio. Below this inflection point, each 1-unit increase in the non-HDL-c/HDL-c ratio was associated with a 21.4% increase in the risk of stroke (HR=1.214, 95% CI: 1.039-1.418). In contrast, when the non-HDL-c/HDL-c ratio exceeded 2.685, there was no statistically significant change in the risk of stroke per unit change in the ratio (HR: 0.967, 95% CI: 0.897-1.042).
The consistency of these findings across multiple sensitivity analyses suggests their robustness. Conclusion This study unveils a non-linear relationship between the non-HDL-c/HDL-c ratio and stroke risk in middle-aged and older adults in China. Specifically, when the non-HDL-c/HDL-c ratio was below 2.685, a significant and clearly positive association with stroke risk was observed. Additionally, maintaining the non-HDL-c/HDL-c ratio below 2.685 could potentially lead to a substantial reduction in the risk of stroke.
39. Zambrano-Navea C, Bastida F, Aguilera MJ, Gonzalez-Andujar JL. Economical Evaluation of Reduced Herbicide Doses Application Rates to Control Phalaris brachystachys (Short-Spiked Canary Grass) in a Biennial Wheat-Sunflower Rotation in Mediterranean Dryland: A Modelling Approach. Plants (Basel) 2024; 13:212. [PMID: 38256765 PMCID: PMC10821078 DOI: 10.3390/plants13020212]
Abstract
Phalaris brachystachys (short-spiked canary grass) is considered to be among the most troublesome cereal weeds in Mediterranean areas. A bioeconomic model, based on population dynamics, competition and economic sub-models, was developed to simulate the long-term economic consequences of herbicide-based strategies: no herbicide application, the full herbicide dose (standard rate) and two reduced dose rates (75% and 50% of the standard rate) to control P. brachystachys in a biennial wheat-sunflower rotation. Simulation results indicated that only herbicide application at the full dose (90% control) and at the 3/4 dose (80% control) produced positive economic results, with the full dose being the best strategy (EUR 98.65 per ha per year). A sensitivity analysis showed that the economic outcome, in terms of annualized net return, was strongly influenced by changes in yield, price, and fixed costs. In addition, the annualized net return was more sensitive to parameter changes at reduced herbicide doses than at the full rate. In the wheat-sunflower rotation system, the application of the full herbicide dose was the most economical and stable strategy in the long term. Reduced doses are not a recommended option from an economic point of view. Bioeconomic models provide practical insight into different management approaches for effective weed control.
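The economic comparison above rests on an annualized net return calculation. A heavily simplified sketch of the kind of arithmetic involved; the functional form and every number below are illustrative assumptions, not the paper's calibrated population-dynamics and competition sub-models:

```python
def annual_net_return(grain_yield, price, fixed_costs, herbicide_cost,
                      control_efficacy, max_weed_loss):
    """Hypothetical annualized net return (EUR per ha per year): revenue
    reduced by residual weed competition, minus fixed and herbicide costs.
    All parameter values are illustrative only."""
    residual_loss = max_weed_loss * (1.0 - control_efficacy)
    return grain_yield * price * (1.0 - residual_loss) - fixed_costs - herbicide_cost

# full dose: 90% control, dearer herbicide; half dose: 50% control, cheaper
full_dose = annual_net_return(3.0, 200.0, 400.0, 60.0, 0.90, 0.40)
half_dose = annual_net_return(3.0, 200.0, 400.0, 30.0, 0.50, 0.40)
```

Even in this toy form, the higher efficacy of the full dose can outweigh its extra cost, echoing the simulation result reported above.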
40. Husum HC, Hellfritzsch MB, Maimburg RD, Møller-Madsen B, Henriksen M, Lapitskaya N, Kold S, Rahbek O. Pubofemoral distances correlate to acetabular morphology and sonological instability in screening for hip dysplasia. Bone Jt Open 2024; 5:3-8. [PMID: 38164740 PMCID: PMC10758888 DOI: 10.1302/2633-1462.51.bjo-2023-0134]
Abstract
Aims The present study seeks to investigate the correlation of pubofemoral distances (PFD) to α angles, and hip displaceability status, defined as femoral head coverage (FHC) or FHC during manual provocation of the newborn hip < 50%. Methods We retrospectively included all newborns referred for ultrasound screening at our institution based on primary risk factor, clinical, and PFD screening. α angles, PFD, FHC, and FHC at follow-up ultrasound for referred newborns were measured and compared using scatter plots, linear regression, paired t-test, and box-plots. Results We included 2,735 newborns, of whom 754 received a follow-up hip ultrasound within six weeks of age. After exclusion, 1,500 hips were included for analysis. Sex distribution was 372 male and 380 female, and the mean age at examination was 36.6 days (4 to 87). We found a negative linear correlation of PFD to α angles (p < 0.001), FHC (p < 0.001), and FHC during provocation (p < 0.001) with a 1 mm increase in PFD corresponding to a -2.1° (95% confidence interval (CI) -2.3 to -1.9) change in α angle and a -3.4% (95% CI -3.7 to -3.0) change in FHC and a -6.0% (-6.6 to -5.5) change in FHC during provocation. The PFD was significantly higher with increasing Graf types and in displaceable hips (p < 0.001). Conclusion PFD is strongly correlated to both α angles and hip displaceability, as measured by FHC and FHC during provocation, in ultrasound of newborn hips. The PFD increases as the hips become more dysplastic and/or displaceable.
|
41
|
Tesema Z, Derbie B, Shenkute A, Gobeze M, Kefale A, Gizaw S. Breeding objectives for Central Highland goats using participatory and bio-economic modelling approaches. J Anim Breed Genet 2024; 141:1-12. [PMID: 37608464 DOI: 10.1111/jbg.12821] [Received: 08/22/2022] [Revised: 06/06/2023] [Accepted: 08/12/2023] [Indexed: 08/24/2023]
Abstract
The breeding objectives of Central Highland goats reared under a low-input production system were defined through a participatory proportional piling method and a bio-economic model, and the economic values and relative economic values of the breeding objective traits were derived. The participatory proportional piling method was used to estimate the relative weights farmers attached to a list of identified goat traits, and these weights were evaluated statistically using a generalized multinomial logit model. The bio-economic model was used to compute the economic values of the identified traits. The traits most important for the selection of does according to farmers' preferences were body size, coat colour, post-weaning growth rate, and weaning rate, with relative weights (odds ratios) of 1.58, 1.38, 1.37, and 1.13, respectively. Goats with dark red, followed by light red, coat colour were the most preferred (p < 0.001) by goat keepers compared with white-coloured goats. Farmers were more likely (p < 0.001) to allocate higher scores to does bearing twins than to single- or triplet-bearing does. Using the bio-economic model (economic value and relative economic value), post-weaning growth rate, weaning rate, and six-month weight (body size) were identified as the most important traits; if the mean of these traits is changed by one genetic standard deviation, the change in profit will range from $2.06 to $3.03 doe-1 year-1. Therefore, the traits most important for the selection of Central Highland goats according to the economic-based method were post-weaning weight gain, weaning rate, and body size (six-month weight). Coat colour was the second most preferred trait among goat keepers, after body size; this aesthetic trait should therefore be included in the designed breeding programme alongside the economically important quantitative traits. The combination of the participatory proportional piling method and the bio-economic model gives better insight into farmers' trait preferences and enhances profitability. The economic values estimated in this study can be used to construct selection indices for Central Highland goats.
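The bio-economic quantities above lend themselves to a worked sketch: economic value per genetic standard deviation, and relative economic values across the objective traits. Every number below (economic value per unit, genetic SD) is an illustrative assumption, not a figure from the study.

```python
# Sketch: economic value per genetic standard deviation and relative
# economic values, in the spirit of the bio-economic model described
# above. Trait names follow the abstract; all numbers are hypothetical.

traits = {
    # trait: (economic value in $ per unit change, genetic SD in trait units)
    "post-weaning growth rate (g/d)": (0.60, 5.05),
    "weaning rate (kids weaned/doe)": (25.0, 0.10),
    "six-month weight (kg)":          (1.50, 1.40),
}

# Profit change from shifting each trait mean by one genetic SD
per_sd = {t: ev * sd for t, (ev, sd) in traits.items()}

# Relative economic value: each trait's share of the summed |per-SD| values
total = sum(abs(v) for v in per_sd.values())
relative = {t: 100 * abs(v) / total for t, v in per_sd.items()}

for t in traits:
    print(f"{t}: {per_sd[t]:.2f} $/doe/year per genetic SD, "
          f"relative weight {relative[t]:.1f}%")
```

Relative economic values of this form are the weights that would enter a selection index.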
|
42
|
Liu YS, Hou P, Wang P, Zhu J. [Research advance on quantitative assessment methods of ecosystem water conservation service functions]. Ying Yong Sheng Tai Xue Bao (Journal of Applied Ecology) 2024; 35:275-288. [PMID: 38511465 DOI: 10.13287/j.1001-9332.202401.022] [Indexed: 03/22/2024]
Abstract
Water conservation is one of the most important ecosystem service functions of a regional system: it directly reflects a region's role in regulating precipitation, its redistribution of precipitation, and its ecohydrological value. With the development of comprehensive evaluation methods and the deepening of research on the water conservation service function, the relevant evaluation procedures have changed significantly. Assessing the water conservation service function now requires not only calculating and evaluating the relevant indicators, but also localizing specific model parameters and analyzing the validity of the overall model for the specific study area. However, the current literature lacks systematic summaries of model evaluation methods, and its coverage of model validity verification, significance analysis, result verification, and applicability analysis methods such as parameter localization in water conservation studies is likewise insufficient. We reviewed the research advances in typical ecosystem water conservation service assessment methods, with a specific focus on the model assessment methods that have developed rapidly in recent years. We also summarized the methods commonly used for parameter localization, validity testing, and sensitivity analysis of simulation results, and discussed existing problems and future directions in this field.
|
43
|
Zhang L, Cao H, Medlin K, Pearson J, Aristotelous AC, Chen A, Wessler T, Forest MG. Computational Modeling Insights into Extreme Heterogeneity in COVID-19 Nasal Swab Data. Viruses 2023; 16:69. [PMID: 38257769 PMCID: PMC10820884 DOI: 10.3390/v16010069] [Received: 10/05/2023] [Revised: 12/20/2023] [Accepted: 12/23/2023] [Indexed: 01/24/2024] Open
Abstract
Throughout the COVID-19 pandemic, an unprecedented level of clinical nasal swab data from around the globe has been collected and shared. Positive tests have consistently revealed viral titers spanning six orders of magnitude! An open question is whether such extreme population heterogeneity is unique to SARS-CoV-2 or possibly generic to viral respiratory infections. To probe this question, we turn to the computational modeling of nasal tract infections. Employing a physiologically faithful, spatially resolved, stochastic model of respiratory tract infection, we explore the statistical distribution of human nasal infections in the immediate 48 h of infection. The spread, or heterogeneity, of the distribution derives from variations in factors within the model that are unique to the infected host, infectious variant, and timing of the test. Hypothetical factors include: (1) reported physiological differences between infected individuals (nasal mucus thickness and clearance velocity); (2) differences in the kinetics of infection, replication, and shedding of viral RNA copies arising from the unique interactions between the host and viral variant; and (3) differences in the time between initial cell infection and the clinical test. Since positive clinical tests are often pre-symptomatic and independent of prior infection or vaccination status, in the model we assume immune evasion throughout the immediate 48 h of infection. Model simulations generate the mean statistical outcomes of total shed viral load and infected cells throughout 48 h for each "virtual individual", which we define as each fixed set of model parameters (1) and (2) above. The "virtual population" and the statistical distribution of outcomes over the population are defined by collecting clinically and experimentally guided ranges for the full set of model parameters (1) and (2). 
This establishes a model-generated "virtual population database" of nasal viral titers throughout the initial 48 h of infection of every individual, which we then compare with clinical swab test data. Support for model efficacy comes from the sampling of infection dynamics over the virtual population database, which reproduces the six-order-of-magnitude clinical population heterogeneity. However, the goal of this study is to answer a deeper biological and clinical question. What is the impact on the dynamics of early nasal infection due to each individual physiological feature or virus-cell kinetic mechanism? To answer this question, global data analysis methods are applied to the virtual population database that sample across the entire database and de-correlate (i.e., isolate) the dynamic infection outcome sensitivities of each model parameter. These methods predict the dominant, indeed exponential, driver of population heterogeneity in dynamic infection outcomes is the latency time of infected cells (from the moment of infection until onset of viral RNA shedding). The shedding rate of the viral RNA of infected cells in the shedding phase is a strong, but not exponential, driver of infection. Furthermore, the unknown timing of the nasal swab test relative to the onset of infection is an equally dominant contributor to extreme population heterogeneity in clinical test data since infectious viral loads grow from undetectable levels to more than six orders of magnitude within 48 h.
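The "virtual individual" idea can be caricatured in a few lines. The sketch below uses a deliberately minimal exponential-growth surrogate, not the paper's spatially resolved stochastic model, to show how variation in latency time alone spreads early titers over several orders of magnitude; the growth rate, latency values, and swab time are illustrative assumptions.

```python
# Sketch: why latency time drives orders-of-magnitude heterogeneity in
# early nasal viral titers. A minimal surrogate model: after a
# per-individual latency tau (hours), viral load grows exponentially at
# rate r per hour until the swab at time t_swab. Parameters illustrative.

import math

def log10_titer(tau_h, r_per_h, t_swab_h):
    """log10 viral load at swab time; zero (undetectable) before shedding."""
    growing = max(0.0, t_swab_h - tau_h)
    return r_per_h * growing / math.log(10)

r = 0.35          # illustrative exponential growth rate per hour
t_swab = 48.0     # swab taken 48 h after initial infection
latencies = [4.0, 12.0, 20.0, 28.0]  # virtual individuals differ in tau

titers = [log10_titer(tau, r, t_swab) for tau in latencies]
spread = max(titers) - min(titers)
print([round(x, 2) for x in titers], "spread (log10):", round(spread, 2))
```

Even in this caricature, a 24-hour range of latencies translates into a roughly 3.6-order-of-magnitude spread in titers at a fixed swab time, consistent with latency acting as an exponential driver of heterogeneity.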
|
44
|
Farhan M, Shah Z, Jan R, Islam S, Alshehri MH, Ling Z. A fractional modeling approach for the transmission dynamics of measles with double-dose vaccination. Comput Methods Biomech Biomed Engin 2023:1-18. [PMID: 38149815 DOI: 10.1080/10255842.2023.2297171] [Received: 08/14/2023] [Accepted: 11/08/2023] [Indexed: 12/28/2023]
Abstract
Measles, a member of the Paramyxoviridae family and the Morbillivirus genus, is an extremely contagious infectious disease caused by the measles virus and preventable through vaccination. When a person with measles coughs or sneezes, the virus is disseminated by respiratory droplets. Symptoms normally appear 10-14 days after viral exposure and include conjunctivitis, a high temperature, a cough, a runny nose, and a distinctive rash. Despite the availability of a measles vaccine, the disease remains widespread worldwide. To eradicate measles, the reproduction number must remain below unity (R0 < 1). This study examines an SEIVR compartmental model in the Caputo sense with a double dose of vaccine to simulate a measles outbreak. The reproduction number R0 and the model's properties are thoroughly examined, and both local and global stability of the proposed model are determined for R0 less than and greater than 1. Global stability is established using a Lyapunov function, while the existence and uniqueness of solutions of the proposed model are demonstrated. In addition to the calculated and fitted biological parameters, the forward sensitivity indices for R0 are obtained. Simulations of the proposed fractional-order (FO) Caputo model are performed to analyse their graphical representations and the significance of FO derivatives, illustrating the impact of our theoretical findings. The graphical results show that the measles outbreak is reduced by increasing vaccine dosage rates.
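Forward sensitivity indices of R0, as mentioned above, can be computed generically as Υ_p = (p/R0)·∂R0/∂p. The sketch below uses a standard SEIR-style R0 expression rather than the paper's SEIVR-with-double-dose formula, with illustrative parameter values.

```python
# Sketch: normalized forward sensitivity indices of R0. The R0 expression
# is a generic SEIR-style formula, not the paper's SEIVR-with-vaccination
# one; parameter values are illustrative, not fitted.

def r0(beta, sigma, gamma, mu):
    """Generic SEIR basic reproduction number."""
    return beta * sigma / ((sigma + mu) * (gamma + mu))

def sensitivity_index(f, params, name, h=1e-6):
    """Upsilon_p = (p / f) * df/dp, via central finite differences."""
    base = f(**params)
    up = dict(params); up[name] += h
    down = dict(params); down[name] -= h
    deriv = (f(**up) - f(**down)) / (2 * h)
    return params[name] * deriv / base

params = {"beta": 0.9, "sigma": 1 / 10, "gamma": 1 / 8, "mu": 0.00004}
for p in params:
    print(p, round(sensitivity_index(r0, params, p), 4))
```

Because R0 is linear in beta, its index is exactly 1: a 10% increase in the transmission rate raises R0 by 10%. The recovery rate gamma carries a nearly equal and opposite index.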
|
45
|
Zamanian A, Ahmidi N, Drton M. Assessable and interpretable sensitivity analysis in the pattern graph framework for nonignorable missingness mechanisms. Stat Med 2023; 42:5419-5450. [PMID: 37759370 DOI: 10.1002/sim.9920] [Received: 05/27/2022] [Revised: 06/12/2023] [Accepted: 09/13/2023] [Indexed: 09/29/2023]
Abstract
The pattern graph framework solves a wide range of missing data problems with nonignorable mechanisms. However, it faces two challenges of assessability and interpretability, particularly important in safety-critical problems such as clinical diagnosis: (i) How can one assess the validity of the framework's a priori assumption and make necessary adjustments to accommodate known information about the problem? (ii) How can one interpret the process of exponential tilting used for sensitivity analysis in the pattern graph framework and choose the tilt perturbations based on meaningful real-world quantities? In this paper, we introduce Informed Sensitivity Analysis, an extension of the pattern graph framework that enables us to incorporate substantive knowledge about the missingness mechanism. Our extension allows us to examine the validity of assumptions underlying pattern graphs and to interpret sensitivity analysis results in terms of realistic problem characteristics. We apply our method to a prevalent nonignorable missing data scenario in clinical research, validating it and comparing its results with a number of widely used missing-data methods, including unweighted CCA, KNN Imputer, MICE, and MissForest. The validation uses both bootstrapped simulated experiments and real-world clinical observations in the MIMIC-III public dataset.
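Exponential tilting itself is compact enough to sketch. Under a tilt with parameter gamma, observed outcomes y are reweighted by exp(gamma·y); gamma = 0 recovers the untilted identification, and the sign and size of gamma encode the direction and strength of nonignorable missingness. The data and gamma values below are illustrative.

```python
# Sketch: exponential tilting, the sensitivity-analysis device used in
# the pattern graph framework. The nonrespondent outcome distribution is
# modeled as the respondent distribution reweighted by exp(gamma * y).
# Data and tilt parameters are illustrative.

import math

def tilted_mean(ys, gamma):
    """Mean of y under weights proportional to exp(gamma * y)."""
    ws = [math.exp(gamma * y) for y in ys]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

observed = [1.0, 2.0, 3.0, 4.0, 5.0]   # outcomes among respondents
for gamma in (-0.5, 0.0, 0.5):
    print(f"gamma={gamma:+.1f}: implied nonrespondent mean "
          f"{tilted_mean(observed, gamma):.3f}")
```

The paper's "informed" extension amounts to choosing the range of gamma from substantive knowledge rather than arbitrarily, so the perturbations correspond to meaningful real-world quantities.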
|
46
|
Korsah MA, Johnston ST, Tiedje KE, Day KP, Flegg JA, Walker CR. Mathematical assessment of the role of intervention programs for malaria control. medRxiv 2023:2023.12.18.23300185. [PMID: 38196597 PMCID: PMC10775318 DOI: 10.1101/2023.12.18.23300185] [Indexed: 01/11/2024]
Abstract
Malaria remains a global health problem despite the many attempts to control and eradicate it. There is an urgent need to understand the current transmission dynamics of malaria and to determine the interventions necessary for its control. In this paper, we develop a fit-for-purpose mathematical model to assess the interventions needed to control malaria in an endemic setting. To achieve this, we formulate a malaria transmission model to analyse the spread of malaria in the presence of interventions, and we perform a sensitivity analysis to determine the relative impact of the model parameters on disease transmission. We explore how existing variations in the recruitment and management of intervention strategies affect malaria transmission. The results imply that the discontinuation of existing interventions has a significant effect on malaria prevalence; the maintenance of interventions is therefore imperative for malaria elimination and eradication. In a scenario study assessing the impact of long-lasting insecticidal nets (LLINs), indoor residual spraying (IRS), and localized individual measures, our findings indicate that increased LLIN utilization and extended IRS coverage (with longer-lasting insecticides) cause a more pronounced reduction in symptomatic malaria prevalence than reduced LLIN utilization and shorter IRS coverage. Our study also demonstrates the impact of localized preventive measures in mitigating the spread of malaria compared to the absence of interventions.
|
47
|
Petersen JM, Kahrs JC, Adrien N, Wood ME, Olshan AF, Smith LH, Howley MM, Ailes EC, Romitti PA, Herring AH, Parker SE, Shaw GM, Politis MD. Bias analyses to investigate the impact of differential participation: Application to a birth defects case-control study. Paediatr Perinat Epidemiol 2023. [PMID: 38102868 DOI: 10.1111/ppe.13026] [Received: 09/21/2023] [Revised: 11/17/2023] [Accepted: 11/24/2023] [Indexed: 12/17/2023]
Abstract
BACKGROUND Certain associations observed in the National Birth Defects Prevention Study (NBDPS) contrasted with other research or came from areas with mixed findings, including no decrease in odds of spina bifida with periconceptional folic acid supplementation, moderately increased cleft palate odds with ondansetron use, and reduced hypospadias odds with maternal smoking. OBJECTIVES To investigate the plausibility and extent of differential participation needed to produce the effect estimates observed in NBDPS. METHODS We searched the literature for factors related to these exposures and to participation, and conducted deterministic quantitative bias analyses. We estimated case-control participation and expected exposure prevalence based on internal and external reports, respectively. For the folic acid-spina bifida and ondansetron-cleft palate analyses, we hypothesized the true odds ratio (OR) based on prior studies and quantified the degree of exposure over- (or under-) representation needed to produce the crude OR (cOR) observed in NBDPS. For the smoking-hypospadias analysis, we estimated the extent of selection bias needed to nullify the association, as well as the maximum potentially harmful OR. RESULTS Under our assumptions (participation, exposure prevalence, true OR), folic acid use was overrepresented and ondansetron use and smoking were underrepresented among participants. Folic acid-exposed spina bifida cases would need to have been at least 1.2 times more likely to participate than exposed controls to yield the observed null cOR. Ondansetron-exposed cleft palate cases would need to have been 1.6 times more likely to participate than exposed controls if the true OR is null. Smoking-exposed hypospadias cases would need to have been at least 1.2 times less likely to participate than exposed controls for the association to falsely appear protective (upper bound of the selection-bias-adjusted smoking-hypospadias OR = 2.02). CONCLUSIONS Differential participation could partly explain certain associations observed in NBDPS, but questions remain about why participation would differ. The potential impact of other systematic errors (e.g. exposure misclassification) could be informed by additional research.
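The deterministic bias analysis described above reduces to simple arithmetic on participation probabilities: the crude OR equals the true OR times a selection-bias factor built from the four cell-specific participation probabilities. The numbers below are hypothetical, loosely echoing the folic acid scenario (exposed cases 1.2 times as likely to participate as exposed controls).

```python
# Sketch: deterministic selection-bias adjustment of the kind used in the
# paper. S_* are participation probabilities for each case/control x
# exposed/unexposed cell; all values here are illustrative.

def selection_bias_factor(s_case_exp, s_case_unexp, s_ctrl_exp, s_ctrl_unexp):
    """Multiplicative bias: crude OR = true OR * this factor."""
    return (s_case_exp * s_ctrl_unexp) / (s_case_unexp * s_ctrl_exp)

# Hypothetical scenario: exposed cases 1.2x as likely to participate as
# exposed controls, all other cells equal.
bias = selection_bias_factor(s_case_exp=0.72, s_case_unexp=0.60,
                             s_ctrl_exp=0.60, s_ctrl_unexp=0.60)
true_or = 0.83                 # hypothetical protective association
observed_or = true_or * bias   # what the biased study would report
print(f"bias factor {bias:.2f}, crude OR {observed_or:.2f}")
```

With these hypothetical inputs, a genuinely protective OR of 0.83 is pushed to essentially null, illustrating how modest differential participation can mask a real effect.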
|
48
|
Díaz I, Lee H, Kıcıman E, Schenck EJ, Akacha M, Follman D, Ghosh D. Sensitivity analysis for causality in observational studies for regulatory science. J Clin Transl Sci 2023; 7:e267. [PMID: 38380390 PMCID: PMC10877517 DOI: 10.1017/cts.2023.688] [Received: 05/10/2023] [Revised: 10/30/2023] [Accepted: 11/16/2023] [Indexed: 02/22/2024] Open
Abstract
Objective The United States Congress passed the 21st Century Cures Act mandating the development of Food and Drug Administration guidance on regulatory use of real-world evidence. The Forum on the Integration of Observational and Randomized Data conducted a meeting with various stakeholder groups to build consensus around best practices for the use of real-world data (RWD) to support regulatory science. Our companion paper describes in detail the context and discussion of the meeting, which includes a recommendation to use a causal roadmap for study designs using RWD. This article discusses one step of the roadmap: the specification of a sensitivity analysis for testing robustness to violations of causal model assumptions. Methods We present an example of a sensitivity analysis from a RWD study on the effectiveness of Nifurtimox in treating Chagas disease, and an overview of various methods, emphasizing practical considerations on their use for regulatory purposes. Results Sensitivity analyses must be accompanied by careful design of other aspects of the causal roadmap. Their prespecification is crucial to avoid wrong conclusions due to researcher degrees of freedom. Sensitivity analysis methods require auxiliary information to produce meaningful conclusions; it is important that they have at least two properties: the validity of the conclusions does not rely on unverifiable assumptions, and the auxiliary information required by the method is learnable from the corpus of current scientific knowledge. Conclusions Prespecified and assumption-lean sensitivity analyses are a crucial tool that can strengthen the validity and trustworthiness of effectiveness conclusions for regulatory science.
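The abstract does not tie itself to one method, but the E-value of VanderWeele and Ding is a common example of a prespecified, assumption-lean sensitivity analysis of the kind discussed, and makes a compact sketch. It is not necessarily the method used in the Chagas disease study itself.

```python
# Sketch: the E-value, one widely used assumption-lean sensitivity
# analysis for unmeasured confounding. It asks: how strong would an
# unmeasured confounder need to be (on the risk-ratio scale, for both the
# confounder-exposure and confounder-outcome associations) to fully
# explain away an observed risk ratio?

import math

def e_value(rr):
    """E-value for an observed risk ratio rr (point estimate)."""
    rr = max(rr, 1 / rr)  # symmetric treatment of protective estimates
    return rr + math.sqrt(rr * (rr - 1))

print("E-value for RR = 2.0:", round(e_value(2.0), 3))
print("E-value for RR = 0.5:", round(e_value(0.5), 3))  # same by symmetry
```

The appeal for regulatory use is exactly the property the abstract asks for: the E-value's validity does not rest on unverifiable assumptions, and judging whether a confounder of that strength is plausible draws only on existing scientific knowledge.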
|
49
|
Omari Shekaftik S, Nasirzadeh N, Mohammadiyan M, Mohammadpour S. An analysis on control banding-based methods used for occupational risk assessment of nanomaterials. Nanotoxicology 2023; 17:628-650. [PMID: 38164113 DOI: 10.1080/17435390.2023.2293141] [Received: 10/11/2023] [Accepted: 12/05/2023] [Indexed: 01/03/2024]
Abstract
Despite the many benefits of nanomaterials, their unique characteristics have made them an emerging workplace hazard whose potential risks need to be assessed. The aim of this study was therefore to review all studies that used control banding (CB)-based methods to assess the risk of activities involving nanomaterials. The study is based on a literature review of databases including Web of Science, Scopus, PubMed, and SID. After reviewing and screening studies according to PRISMA, the collected data were meta-analyzed with Comprehensive Meta-Analysis software, and the Newcastle-Ottawa checklist was used for quality assessment of the studies. Cohen's kappa was used to determine the similarity of methods, and sensitivity analysis with the Crystal Ball tool was used to determine the role of each factor in the risk assessment. There are eight validated methods for risk assessment; some authors also used self-designed tools based on the CB approach. The meta-analysis showed that the odds ratio for the risk of activities involving nanomaterials was 0.654 (high risk). Simulation results for the Nanotool method showed that the mean risk level of activities involving nanomaterials, with a certainty of 95.07%, is moderate (RL3). Moreover, sensitivity analysis showed that the risk depended on the hazard band in all methods except the ISO method. These results can be useful for improving existing methods and suggesting new ones, and specific methods still need to be designed for the risk assessment of incidental and natural nanomaterials.
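The Crystal Ball-style sensitivity analysis mentioned above is essentially Monte Carlo contribution-to-variance. Below is a toy sketch using a hypothetical control-banding score (hazard band × exposure band); the score and the input distributions are illustrative and do not correspond to any of the reviewed methods.

```python
# Sketch: Monte Carlo contribution-to-variance sensitivity analysis, in
# the spirit of the Crystal Ball simulations described above. The risk
# score and band distributions are toy assumptions.

import random

random.seed(0)

def risk_score(hazard_band, exposure_band):
    return hazard_band * exposure_band   # toy control-banding matrix

# Monte Carlo draws of the two input bands
hs = [random.randint(1, 4) for _ in range(10_000)]  # hazard band 1..4
es = [random.randint(1, 3) for _ in range(10_000)]  # exposure band 1..3
rs = [risk_score(h, e) for h, e in zip(hs, es)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

c_h, c_e = corr(hs, rs), corr(es, rs)
share_h = c_h**2 / (c_h**2 + c_e**2)  # hazard band's share of explained variance
print(f"hazard band explains ~{100 * share_h:.0f}% of score variation")
```

In this toy setup the hazard band dominates simply because it spans more levels, mirroring the finding that risk depended chiefly on the hazard band in most methods.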
|
50
|
Rosenbaum PR. A second evidence factor for a second control group. Biometrics 2023; 79:3968-3980. [PMID: 37563803 DOI: 10.1111/biom.13921] [Received: 01/04/2023] [Accepted: 07/24/2023] [Indexed: 08/12/2023]
Abstract
In an observational study of the effects caused by a treatment, a second control group is used in an effort to detect bias from unmeasured covariates, and the investigator is content if no evidence of bias is found. This strategy is not entirely satisfactory: two control groups may differ significantly, yet the difference may be too small to invalidate inferences about the treatment, or the control groups may not differ yet nonetheless fail to provide a tangible strengthening of the evidence of a treatment effect. Is a firmer conclusion possible? Is there a way to analyze a second control group such that the data might report measurably strengthened evidence of cause and effect, that is, insensitivity to larger unmeasured biases? Evidence factor analyses are not commonly used with a second control group: most analyses compare the treated group to each control group, but analyses of that kind are partially redundant; so, they do not constitute evidence factors. An alternative analysis is proposed here, one that does yield two evidence factors, and with a carefully designed test statistic, is capable of extracting strong evidence from the second factor. The new technical work here concerns the development of a test statistic with high design sensitivity and high Bahadur efficiency in a sensitivity analysis for the second factor. A study of binge drinking as a cause of high blood pressure is used as an illustration.
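The sensitivity-analysis machinery the paper extends can be sketched in its simplest form: the worst-case one-sided sign-test p-value for matched pairs under Rosenbaum's bias parameter Γ. The pair counts below are illustrative, and this is the textbook single-factor version, not the paper's two-factor construction.

```python
# Sketch: Rosenbaum-style sensitivity analysis for a matched-pairs sign
# test. Gamma bounds the ratio by which matched units may differ in their
# odds of treatment due to an unmeasured covariate; Gamma = 1 means no
# hidden bias. Pair counts are illustrative.

from math import comb

def worst_case_p(n_pairs, n_positive, gamma):
    """Upper bound on the one-sided sign-test p-value under bias gamma."""
    p = gamma / (1 + gamma)   # worst-case success probability per pair
    return sum(comb(n_pairs, k) * p**k * (1 - p)**(n_pairs - k)
               for k in range(n_positive, n_pairs + 1))

n, t = 50, 36   # 36 of 50 matched pairs favor the treated unit
for gamma in (1.0, 1.5, 2.0):
    print(f"Gamma={gamma}: worst-case p <= {worst_case_p(n, t, gamma):.4f}")
```

A finding is called insensitive to bias up to the largest Γ at which this worst-case p-value stays below the significance level; the paper's contribution is a second, independent evidence factor from the second control group that can raise that Γ.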
|