1
Zhang L, Lewsey J. Comparing the performance of two-stage residual inclusion methods when using physician's prescribing preference as an instrumental variable: unmeasured confounding and noncollapsibility. J Comp Eff Res 2024; 13:e230085. [PMID: 38567965 PMCID: PMC11036961 DOI: 10.57264/cer-2023-0085] [Received: 05/26/2023] [Accepted: 03/18/2024]
Abstract
Aim: The first objective is to compare the performance of two-stage residual inclusion (2SRI) and two-stage predictor substitution (2SPS) with the multivariable generalized linear model (GLM) in terms of reducing unmeasured confounding bias. The second objective is to demonstrate the ability of 2SRI and 2SPS to alleviate unmeasured confounding when noncollapsibility exists. Materials & methods: This study comprises a simulation study and an empirical example from a real-world UK population health dataset (Clinical Practice Research Datalink). The instrumental variable (IV) used is based on physicians' prescribing preferences (defined by prescribing history). Results: The percent bias of 2SRI in estimating the treatment effect was lower than that of GLM and 2SPS, and was less than 15% in most scenarios. Further, 2SRI was robust to mild noncollapsibility, with percent bias below 50%. As the level of unmeasured confounding increased, the ability to alleviate noncollapsibility decreased. Strong IVs tended to be more robust to noncollapsibility than weak IVs. Conclusion: 2SRI tends to be less biased than GLM and 2SPS in estimating the treatment effect. It can be robust to noncollapsibility when the unmeasured confounding effect is mild.
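The two-stage logic is easy to see in a simulation. The sketch below is illustrative only: variable names and effect sizes are invented, and in a purely linear model 2SRI coincides with 2SLS (the paper's focus is nonlinear GLMs, where the methods differ). It regresses treatment on a binary preference-style instrument, then includes the first-stage residual in the outcome model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(size=n)                 # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)       # instrument: prescribing preference
x = 1.0 * z + u + rng.normal(size=n)   # treatment, confounded by u
y = 2.0 * x + u + rng.normal(size=n)   # outcome; true effect = 2

# Naive OLS of y on x (biased by u)
X = np.column_stack([np.ones(n), x])
beta_naive = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Stage 1: regress treatment on the instrument, keep the residual
Z = np.column_stack([np.ones(n), z])
g = np.linalg.lstsq(Z, x, rcond=None)[0]
resid = x - Z @ g

# Stage 2: include the stage-1 residual alongside treatment
X2 = np.column_stack([np.ones(n), x, resid])
beta_2sri = np.linalg.lstsq(X2, y, rcond=None)[0][1]
```

The stage-1 residual absorbs the confounded part of the treatment, so the coefficient on `x` recovers the causal effect while the naive regression stays biased.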
Affiliation(s)
- Lisong Zhang
- Department of Population Health Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Jim Lewsey
- School of Health and Well-Being, University of Glasgow, Glasgow, G12 8TB, UK
2
Park C, Richardson DB, Tchetgen Tchetgen EJ. Single proxy control. Biometrics 2024; 80:ujae027. [PMID: 38646999 PMCID: PMC11033710 DOI: 10.1093/biomtc/ujae027] [Received: 03/16/2023] [Revised: 12/08/2023] [Accepted: 03/27/2024]
Abstract
Negative control variables are sometimes used in nonexperimental studies to detect the presence of confounding by hidden factors. A negative control outcome (NCO) is an outcome that is influenced by unobserved confounders of the exposure effects on the outcome in view, but is not causally impacted by the exposure. Tchetgen Tchetgen (2013) introduced the Control Outcome Calibration Approach (COCA) as a formal NCO counterfactual method to detect and correct for residual confounding bias. For identification, COCA treats the NCO as an error-prone proxy of the treatment-free counterfactual outcome of interest, and involves regressing the NCO on the treatment-free counterfactual, together with a rank-preserving structural model, which assumes a constant individual-level causal effect. In this work, we establish nonparametric COCA identification for the average causal effect for the treated, without requiring rank-preservation, therefore accommodating unrestricted effect heterogeneity across units. This nonparametric identification result has important practical implications, as it provides single-proxy confounding control, in contrast to recently proposed proximal causal inference, which relies for identification on a pair of confounding proxies. For COCA estimation we propose 3 separate strategies: (i) an extended propensity score approach, (ii) an outcome bridge function approach, and (iii) a doubly-robust approach. Finally, we illustrate the proposed methods in an application evaluating the causal impact of a Zika virus outbreak on birth rate in Brazil.
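A toy simulation conveys the core NCO idea (a hypothetical sketch, not the paper's COCA estimators): an outcome known to be causally unaffected by the exposure still shares the hidden confounder, so a nonzero exposure-NCO association flags residual confounding; and if the NCO is on the same scale as the confounding in the outcome (a strong assumption built into this example), subtracting that association calibrates the naive estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.normal(size=n)                           # hidden confounder
a = (u + rng.normal(size=n) > 0).astype(float)   # exposure driven by u
y = 1.0 * a + u + rng.normal(size=n)             # true causal effect = 1
nco = u + rng.normal(size=n)                     # NCO: no causal effect of a

def slope(t, v):
    T = np.column_stack([np.ones(len(t)), t])
    return np.linalg.lstsq(T, v, rcond=None)[0][1]

naive = slope(a, y)            # biased upward by u
bias_signal = slope(a, nco)    # would be ~0 absent hidden confounding
calibrated = naive - bias_signal
```

Here the detection step (`bias_signal` far from zero) is assumption-light; the correction step is what COCA formalizes without the equal-scale shortcut used above.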
Affiliation(s)
- Chan Park
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104, United States
- David B Richardson
- Department of Environmental & Occupational Health, University of California Irvine, Irvine, CA 92697, United States
- Eric J Tchetgen Tchetgen
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104, United States
3
Zafari Z, Park JE, Shah CH, dosReis S, Gorman EF, Hua W, Ma Y, Tian F. The State of Use and Utility of Negative Controls in Pharmacoepidemiologic Studies. Am J Epidemiol 2024; 193:426-453. [PMID: 37851862 DOI: 10.1093/aje/kwad201] [Received: 12/16/2022] [Revised: 07/27/2023] [Accepted: 10/06/2023]
Abstract
The use of real-world data in drug safety and effectiveness studies is often challenged by various sources of bias. We undertook a systematic search of the published literature through September 2020 to evaluate the state of use and utility of negative controls to address bias in pharmacoepidemiologic studies. Two reviewers independently evaluated study eligibility and abstracted data. Our search identified 184 eligible studies for inclusion. Cohort studies (115, 63%) and administrative data (114, 62%) were, respectively, the most common study design and data type used. Most studies used negative control outcomes (91, 50%), and for most studies the target source of bias was unmeasured confounding (93, 51%). We identified 4 utility domains of negative controls: 1) bias detection (149, 81%), 2) bias correction (16, 9%), 3) P-value calibration (8, 4%), and 4) performance assessment of different methods used in drug safety studies (31, 17%). The most popular methodologies were 95% confidence interval and P-value calibration. In addition, we identified 2 reference sets with structured steps to check the causality assumption of the negative control. While negative controls are powerful tools for bias detection, we found that many studies did not check the underlying assumptions. This article is part of a Special Collection on Pharmacoepidemiology.
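The P-value calibration idea the review tallies can be sketched in a few lines. All numbers below are invented, and a method-of-moments fit stands in for the maximum-likelihood fits used in practice: estimates from negative controls (true effect zero) reveal a systematic-error distribution, and the test for the outcome of interest is shifted and widened accordingly:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
# Estimates from 50 negative controls (true effect 0), contaminated by a
# shared systematic error ~ N(0.2, 0.1^2) plus sampling noise (SE 0.05).
se = 0.05
nc_est = rng.normal(0.2, 0.1, size=50) + rng.normal(0, se, size=50)

# Method-of-moments fit of the systematic-error distribution
mu = nc_est.mean()
tau2 = max(nc_est.var(ddof=1) - se**2, 0.0)

def p_two_sided(z):
    # two-sided normal p-value via the error function
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

est, est_se = 0.25, 0.05   # estimate for the outcome of interest
p_naive = p_two_sided(est / est_se)
p_calibrated = p_two_sided((est - mu) / sqrt(est_se**2 + tau2))
```

A finding that looks decisive against the nominal null can become unremarkable once the negative controls show the pipeline routinely produces effects of that size.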
4
Li Y, Wong KY, Howard AG, Gordon-Larsen P, Highland HM, Graff M, North KE, Downie CG, Avery CL, Yu B, Young KL, Buchanan VL, Kaplan R, Hou L, Joyce BT, Qi Q, Sofer T, Moon JY, Lin DY. Mendelian randomization with incomplete measurements on the exposure in the Hispanic Community Health Study/Study of Latinos. HGG Adv 2024; 5:100245. [PMID: 37817410 PMCID: PMC10628889 DOI: 10.1016/j.xhgg.2023.100245] [Received: 05/02/2023] [Revised: 10/02/2023] [Accepted: 10/02/2023]
Abstract
Mendelian randomization has been widely used to assess the causal effect of a heritable exposure variable on an outcome of interest, using genetic variants as instrumental variables. In practice, data on the exposure variable can be incomplete due to the high cost of measurement and technical limits of detection. In this paper, we propose a valid and efficient method to handle both unmeasured and undetectable values of the exposure variable in one-sample Mendelian randomization analysis with individual-level data. We estimate the causal effect of the exposure variable on the outcome by maximum likelihood and develop an expectation-maximization algorithm to compute the estimator. Simulation studies show that the proposed method performs well in making inference on the causal effect. We apply our method to the Hispanic Community Health Study/Study of Latinos, a community-based prospective cohort study, and estimate the causal effect of several metabolites on phenotypes of interest.
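With complete individual-level data, the one-sample MR estimate this paper generalizes reduces to a ratio of two regression slopes. The sketch below uses fabricated effect sizes and deliberately ignores the paper's EM machinery for unmeasured and below-detection-limit exposure values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
g = rng.binomial(2, 0.3, size=n)        # genetic variant (0/1/2), the IV
u = rng.normal(size=n)                  # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)    # exposure (e.g., a metabolite)
y = 0.8 * x + u + rng.normal(size=n)    # outcome; true causal effect 0.8

def slope(t, v):
    T = np.column_stack([np.ones(len(t)), t])
    return np.linalg.lstsq(T, v, rcond=None)[0][1]

wald = slope(g, y) / slope(g, x)        # ratio (Wald) estimator
naive = slope(x, y)                     # confounded by u
```

When exposure values are missing or censored at a detection limit, the denominator slope can no longer be computed directly, which is the gap the proposed likelihood-based method fills.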
Affiliation(s)
- Yilun Li
- Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Kin Yau Wong
- Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
- Annie Green Howard
- Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Penny Gordon-Larsen
- Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Nutrition, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Heather M Highland
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mariaelisa Graff
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Kari E North
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Carolina G Downie
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Christy L Avery
- Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Bing Yu
- Department of Epidemiology, Human Genetics and Environmental Sciences, School of Public Health, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Kristin L Young
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Victoria L Buchanan
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Robert Kaplan
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA 98109, USA
- Lifang Hou
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Brian Thomas Joyce
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Qibin Qi
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Tamar Sofer
- Department of Medicine, Harvard Medical School, Boston, MA 02115, USA
- Jee-Young Moon
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Dan-Yu Lin
- Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
5
Renson A, Hudgens MG, Keil AP, Zivich PN, Aiello AE. Identifying and estimating effects of sustained interventions under parallel trends assumptions. Biometrics 2023; 79:2998-3009. [PMID: 36989497 PMCID: PMC10539489 DOI: 10.1111/biom.13862] [Received: 05/05/2022] [Accepted: 03/23/2023]
Abstract
Many research questions in public health and medicine concern sustained interventions in populations defined by substantive priorities. Existing methods to answer such questions typically require a measured covariate set sufficient to control confounding, which can be questionable in observational studies. Difference-in-differences methods rely instead on the parallel trends assumption, allowing for some types of time-invariant unmeasured confounding. However, most existing difference-in-differences implementations are limited to point treatments in restricted subpopulations. We derive identification results for population effects of sustained treatments under parallel trends assumptions. In particular, in settings where all individuals begin follow-up with exposure status consistent with the treatment plan of interest but may deviate at later times, a version of Robins' g-formula identifies the intervention-specific mean under the stable unit treatment value assumption, positivity, and parallel trends. We develop consistent asymptotically normal estimators based on inverse-probability weighting, outcome regression, and a doubly robust estimator based on targeted maximum likelihood. Simulation studies confirm theoretical results and support the use of the proposed estimators at realistic sample sizes. As an example, the methods are used to estimate the effect of a hypothetical federal stay-at-home order on all-cause mortality during the COVID-19 pandemic in spring 2020 in the United States.
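For intuition, the canonical two-period difference-in-differences contrast (a point-treatment toy example, far simpler than the sustained-treatment g-formula developed in the paper) recovers the effect despite time-invariant unmeasured confounding, because the confounding cancels in each group's pre-post difference:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
group = rng.binomial(1, 0.5, size=n)          # 1 = treated group
alpha = 2.0 * group + rng.normal(size=n)      # time-invariant confounding
trend, effect = 0.5, 1.0                      # common (parallel) trend
y0 = alpha + rng.normal(size=n)               # pre-period outcome
y1 = alpha + trend + effect * group + rng.normal(size=n)  # post-period

did = (y1[group == 1].mean() - y0[group == 1].mean()) \
    - (y1[group == 0].mean() - y0[group == 0].mean())
naive_post = y1[group == 1].mean() - y1[group == 0].mean()
```

The post-period cross-sectional contrast absorbs the full confounding (`alpha`), while the DiD contrast isolates the effect under parallel trends.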
Affiliation(s)
- Audrey Renson
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
- Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
- Michael G. Hudgens
- Department of Biostatistics, University of North Carolina, Chapel Hill, North Carolina, U.S.A.
- Alexander P. Keil
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
- Paul N. Zivich
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
- Allison E. Aiello
- Columbia Aging Center and Department of Epidemiology, Mailman School of Public Health, Columbia University, New York, New York, U.S.A.
6
Degtiar I, Layton T, Wallace J, Rose S. Conditional cross-design synthesis estimators for generalizability in Medicaid. Biometrics 2023; 79:3859-3872. [PMID: 37018228 DOI: 10.1111/biom.13863] [Received: 03/23/2022] [Accepted: 03/23/2023]
Abstract
While much of the causal inference literature has focused on addressing internal validity biases, both internal and external validity are necessary for unbiased estimates in a target population of interest. However, few generalizability approaches exist for estimating causal quantities in a target population that is not well-represented by a randomized study but is reflected when additionally incorporating observational data. To generalize to a target population represented by a union of these data, we propose a novel class of conditional cross-design synthesis estimators that combine randomized and observational data while addressing their estimates' respective biases: lack of overlap and unmeasured confounding. These methods enable estimating the causal effect of managed care plans on health care spending among Medicaid beneficiaries in New York City, which requires obtaining estimates for the 7% of beneficiaries randomized to a plan and the 93% who choose a plan, who do not resemble randomized beneficiaries. Our new estimators include outcome regression, propensity weighting, and double robust approaches. All use the covariate overlap between the randomized and observational data to remove potential unmeasured confounding bias. Applying these methods, we find substantial heterogeneity in spending effects across managed care plans. This has major implications for our understanding of Medicaid, where this heterogeneity has previously been hidden. Additionally, we demonstrate that unmeasured confounding, rather than lack of overlap, poses the larger concern in this setting.
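The core cross-design move can be caricatured in a simulation. This is a hypothetical sketch, not the authors' estimators: it learns the observational confounding bias in the covariate region the randomized study covers, and assumes that bias transports to the whole target population:

```python
import numpy as np

rng = np.random.default_rng(5)

def draw(n):
    w = rng.normal(size=n)    # measured covariate
    u = rng.normal(size=n)    # unmeasured confounder
    return w, u

def outcome(w, u, a):
    # heterogeneous effect 1 + 0.5*w, averaging to 1 in the target
    return (1 + 0.5 * w) * a + u + rng.normal(size=len(w))

# Observational data: whole target population, confounded plan choice
w_o, u_o = draw(400_000)
a_o = (u_o + rng.normal(size=len(u_o)) > 0).astype(float)
y_o = outcome(w_o, u_o, a_o)

# Randomized data: only the w < 0 subpopulation enrolls
w_r, u_r = draw(400_000)
keep = w_r < 0
w_r, u_r = w_r[keep], u_r[keep]
a_r = rng.binomial(1, 0.5, size=len(w_r)).astype(float)
y_r = outcome(w_r, u_r, a_r)

def dm(a, y, mask=None):
    if mask is None:
        mask = np.ones(len(a), bool)
    return y[mask & (a == 1)].mean() - y[mask & (a == 0)].mean()

# Confounding bias, learned where the two designs overlap (w < 0)
bias = dm(a_o, y_o, w_o < 0) - dm(a_r, y_r)
synthesized = dm(a_o, y_o) - bias   # debiased estimate for the full target
```

The observational contrast alone is badly biased; subtracting the overlap-region bias recovers the target-population effect, under the constant-bias assumption this toy builds in.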
Affiliation(s)
- Tim Layton
- Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts, USA
- Jacob Wallace
- Department of Health Policy & Management, Yale School of Public Health, New Haven, Connecticut, USA
- Sherri Rose
- Center for Health Policy and Department of Health Policy, Stanford University, Stanford, California, USA
7
Donovan LM, Wai T, Spece LJ, Duan KI, Griffith MF, Leonhard A, Plumley R, Hayes SA, Picazo F, Crothers K, Kapur VK, Palen BN, Au DH, Feemster LC. Sleep Testing and Mortality in a Propensity-matched Cohort of Patients with Chronic Obstructive Pulmonary Disease. Ann Am Thorac Soc 2023; 20:1642-1653. [PMID: 37579136 DOI: 10.1513/annalsats.202303-275oc] [Received: 03/30/2023] [Accepted: 08/10/2023]
Abstract
Rationale: Many advocate the application of propensity-matching methods to real-world data to answer key questions around obstructive sleep apnea (OSA) management. One such question is whether identifying undiagnosed OSA impacts mortality in high-risk populations, such as those with chronic obstructive pulmonary disease (COPD). Objectives: Assess the association of sleep testing with mortality among patients with COPD and a high likelihood of undiagnosed OSA. Methods: We identified patients with COPD and a high likelihood of undiagnosed OSA. We then distinguished those receiving sleep testing within 90 days of index COPD encounters. We calculated propensity scores for testing based on 37 variables and compared long-term mortality in matched groups. In sensitivity analyses, we compared mortality using inverse propensity weighting and instrumental variable methods. We also compared the incidence of nonfatal events including adverse outcomes (hospitalizations and COPD exacerbations) and routine services that are regularly indicated in COPD (influenza vaccination and pulmonary function testing). We compared the incidence of each nonfatal event as a composite outcome with death and separately compared the marginal probability of each nonfatal event independently, with death as a competing risk. Results: Among 135,958 patients, 1,957 (1.4%) received sleep testing. We propensity matched all patients with sleep testing to an equal number without testing, achieving excellent balance on observed confounders, with standardized differences < 0.10. We observed lower mortality risk among patients with sleep testing (incidence rate ratio, 0.88; 95% confidence interval [CI], 0.79-0.99) and similar results using inverse propensity weighting and instrumental variable methods. 
In contrast to mortality, we found that sleep testing was associated with a similar or greater incidence of nonfatal events, including inpatient COPD exacerbations (subhazard ratio, 1.29; 95% CI, 1.02-1.62) and routine services such as influenza vaccination (subhazard ratio, 1.26; 95% CI, 1.17-1.36). Conclusions: Our disparate findings can be interpreted in multiple ways. Sleep testing may indeed cause both reduced mortality and a greater incidence of nonfatal adverse outcomes and routine services. However, it is also possible that our findings stem from residual confounding by patients' likelihood of accessing care. Given the limitations of propensity-based analyses, we cannot confidently distinguish these two possibilities. This uncertainty highlights the limitations of using propensity-based analyses to guide patient care and policy decisions.
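A minimal version of the propensity-matched design reads as follows (invented covariates and effect sizes; the study itself used 37 variables and richer diagnostics): estimate the score by logistic regression, then greedily pair each treated unit with the nearest unused control within a caliper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000
x = rng.normal(size=(n, 2))                         # measured confounders
logit = 0.8 * x[:, 0] - 0.5 * x[:, 1]
t = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
y = 1.0 * t + 1.6 * x[:, 0] - 1.0 * x[:, 1] + rng.normal(size=n)  # effect 1

# Logistic regression for the propensity score via Newton iterations
X = np.column_stack([np.ones(n), x])
b = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - p))
ps = 1 / (1 + np.exp(-X @ b))

# Greedy 1:1 nearest-neighbor matching on the score, with a caliper
treated = np.where(t == 1)[0]
controls = list(np.where(t == 0)[0])
pairs = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    if abs(ps[j] - ps[i]) < 0.01:   # caliper; unmatched treated are dropped
        pairs.append((i, j))
        controls.remove(j)

att = np.mean([y[i] - y[j] for i, j in pairs])
naive = y[t == 1].mean() - y[t == 0].mean()
```

Matching removes bias from the *measured* covariates only; as the abstract stresses, nothing here protects against confounders that were never recorded, which is exactly the ambiguity the authors highlight.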
Affiliation(s)
- Lucas M Donovan
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- Travis Wai
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- Laura J Spece
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- Kevin I Duan
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- Matthew F Griffith
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Colorado, Aurora, Colorado
- Robert Plumley
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- Kristina Crothers
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- Brian N Palen
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- David H Au
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
- Laura C Feemster
- Seattle-Denver Center of Innovation for Veteran-centered and Value-driven Care, Veterans Affairs Puget Sound Health Care System, Seattle, Washington
- University of Washington, Seattle, Washington
8
Gao D, Hu J, Bradley CJ, Yang F. Instrumental variable analysis for cost outcome: Application to the effect of primary care visit on medical cost among low-income adults. Stat Med 2023; 42:4349-4376. [PMID: 37828812 PMCID: PMC10644894 DOI: 10.1002/sim.9865] [Received: 03/25/2022] [Revised: 06/12/2023] [Accepted: 07/20/2023]
Abstract
Medical cost data often consist of zero values as well as extremely right-skewed positive values. A two-part model is a popular choice for analyzing medical cost data, where the first part models the probability of a positive cost using logistic regression and the second part models the positive cost using a lognormal or Gamma distribution. To address the unmeasured confounding in studies on cost outcome under two-part models, two instrumental variable (IV) methods, two-stage residual inclusion (2SRI) and two-stage prediction substitution (2SPS) are widely applied. However, previous literature demonstrated that both the 2SRI and the 2SPS could fail to consistently estimate the causal effect among compliers under standard IV assumptions for binary and survival outcomes. Our simulation studies confirmed that it continued to be the case for a two-part model, which is another nonlinear model. In this article, we develop a model-based IV approach, Instrumental Variable with Two-Part model (IV2P), to obtain a consistent estimate of the causal effect among compliers for cost outcome under standard IV assumptions. In addition, we develop sensitivity analysis approaches to allow the evaluation of the sensitivity of the causal conclusions to potential quantified violations of the exclusion restriction assumption and the randomization of IV assumption. We apply our method to a randomized cash incentive study to evaluate the effect of a primary care visit on medical cost among low-income adults newly covered by a primary care program.
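The two-part structure itself is compact. The sketch below uses invented parameters and does not attempt the paper's IV2P contribution (handling unmeasured confounding with an instrument inside this model); it only shows why costs are modeled in two parts and how the parts combine into a mean:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
any_cost = rng.binomial(1, 0.3, size=n)      # part 1: P(cost > 0) = 0.3
log_cost = rng.normal(7.0, 1.2, size=n)      # part 2: lognormal positives
cost = any_cost * np.exp(log_cost)           # zeros plus right-skewed values

# Two-part estimate of the mean: P(cost > 0) * E[cost | cost > 0]
p_pos = (cost > 0).mean()
pos_log = np.log(cost[cost > 0])
mu, s2 = pos_log.mean(), pos_log.var(ddof=1)
mean_cost_2pm = p_pos * np.exp(mu + s2 / 2)  # lognormal mean formula
mean_cost_emp = cost.mean()
```

Fitting the two parts separately respects the point mass at zero and the skewness; the IV methods the paper compares (2SRI, 2SPS, IV2P) differ in how an instrument is threaded through these two equations.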
Affiliation(s)
- Dexiang Gao
- University of Colorado Cancer Center Biostatistics Core, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Department of Pediatrics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Junxiao Hu
- University of Colorado Cancer Center Biostatistics Core, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Department of Pediatrics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Cathy J. Bradley
- Department of Health Systems, Management & Policy, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Fan Yang
- Yau Mathematical Sciences Center, Tsinghua University, Beijing, China
- Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing, China
9
Roy S, Ye T, Ertefaie A, Vo TT, Flory J, Hennessy S, Small D. Group sequential testing under instrumented difference-in-differences approach. Stat Med 2023; 42:3838-3859. [PMID: 37345519 DOI: 10.1002/sim.9836] [Received: 11/03/2022] [Revised: 04/15/2023] [Accepted: 05/08/2023]
Abstract
Unmeasured confounding is a major obstacle to reliable causal inference based on observational studies. Instrumented difference-in-differences (iDiD), a novel idea connecting instrumental variables and standard DiD, ameliorates this issue by explicitly leveraging exogenous randomness in an exposure trend. In this article, we build on the idea of iDiD and propose a novel group sequential testing method that provides valid inference even in the presence of unmeasured confounders. At each time point, we estimate the average or conditional average treatment effect under the iDiD setting using the data accumulated up to that point, and test the significance of the treatment effect. We derive the joint distribution of the test statistics under the null using the asymptotic properties of M-estimation, and the group sequential boundaries are obtained using α-spending functions. The performance of our proposed approach is evaluated on both synthetic data and the Clinformatics Data Mart Database (OptumInsight, Eden Prairie, MN) to examine the association between rofecoxib and acute myocardial infarction; our method detects the significant adverse effect of rofecoxib much earlier than the time when it was finally withdrawn from the market.
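Group sequential boundaries from an α-spending function can be sketched by Monte Carlo, exploiting the independent-increments structure of sequential statistics under the null. This is a generic illustration with a Pocock-type spending function and equally spaced looks, not the paper's iDiD test statistics:

```python
import numpy as np

rng = np.random.default_rng(8)

def spending(t, alpha=0.05):
    # Pocock-type alpha-spending function: alpha * log(1 + (e-1)*t)
    return alpha * np.log(1 + (np.e - 1) * t)

K, sims = 4, 200_000
info = np.arange(1, K + 1) / K                 # information fractions per look
# Under the null the score process has independent normal increments
inc = rng.normal(size=(sims, K)) * np.sqrt(np.diff(np.concatenate([[0], info])))
z = np.cumsum(inc, axis=1) / np.sqrt(info)     # standardized z at each look

bounds, rejected = [], np.zeros(sims, bool)
spent_prev = 0.0
for k in range(K):
    target = spending(info[k]) - spent_prev    # alpha to spend at this look
    alive = ~rejected
    c = np.quantile(z[alive, k], 1 - target / alive.mean())
    bounds.append(c)
    rejected |= alive & (z[:, k] > c)
    spent_prev = spending(info[k])

overall = rejected.mean()                      # one-sided type I error
```

By construction the cumulative rejection probability tracks the spending function, so the overall size comes out near the nominal 0.05 while allowing a decision at every look.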
Affiliation(s)
- Samrat Roy
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Ting Ye
- Department of Biostatistics, University of Washington, Seattle, Washington, USA
- Ashkan Ertefaie
- Department of Biostatistics and Computational Biology, University of Rochester, Rochester, New York, USA
- Tat-Thang Vo
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- James Flory
- Department of Subspecialty Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Sean Hennessy
- Division of Epidemiology, DBEI, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dylan Small
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
10
Liu Z, Ye T, Sun B, Schooling M, Tchetgen ET. Mendelian randomization mixed-scale treatment effect robust identification and estimation for causal inference. Biometrics 2023; 79:2208-2219. [PMID: 35950778 DOI: 10.1111/biom.13735] [Received: 03/25/2021] [Accepted: 08/02/2022]
Abstract
Standard Mendelian randomization (MR) analysis can produce biased results if the genetic variant defining an instrumental variable (IV) is confounded and/or has a horizontal pleiotropic effect on the outcome of interest not mediated by the treatment variable. We provide novel identification conditions for the causal effect of a treatment in the presence of unmeasured confounding by leveraging a possibly invalid IV for which both the IV independence and exclusion restriction assumptions may be violated. The proposed Mendelian randomization mixed-scale treatment effect robust identification (MR MiSTERI) approach relies on (i) an assumption that the treatment effect does not vary with the possibly invalid IV on the additive scale; (ii) that the confounding bias does not vary with the possibly invalid IV on the odds ratio scale; and (iii) that the residual variance for the outcome is heteroskedastic with respect to the possibly invalid IV. Although assumptions (i) and (ii) have, respectively, appeared in the IV literature, assumption (iii) has not; we formally establish that their conjunction can identify a causal effect even with an invalid IV. MR MiSTERI is shown to be particularly advantageous in the presence of pervasive heterogeneity of pleiotropic effects on the additive scale. We propose a simple and consistent three-stage estimator that can be used as a preliminary estimator to a carefully constructed efficient one-step-update estimator. In order to incorporate multiple, possibly correlated, and weak invalid IVs, a common challenge in MR studies, we develop a MAny Weak Invalid Instruments (MR MaWII MiSTERI) approach for strengthened identification and improved estimation accuracy. Both simulation studies and UK Biobank data analysis results demonstrate the robustness of the proposed methods.
Affiliation(s)
- Zhonghua Liu
- Department of Biostatistics, Columbia University, New York, New York, USA
- Ting Ye
- Department of Biostatistics, University of Washington, Seattle, Washington, USA
- Baoluo Sun
- Department of Statistics and Data Science, National University of Singapore, Singapore
- Mary Schooling
- CUNY Graduate School of Public Health and Health Policy, New York, New York, USA
- School of Public Health, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Eric Tchetgen Tchetgen
- Department of Statistics and Data Science, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA
11
Fogarty CB. Testing weak nulls in matched observational studies. Biometrics 2023; 79:2196-2207. [PMID: 35980014 DOI: 10.1111/biom.13741] [Received: 09/08/2021] [Accepted: 08/01/2022]
Abstract
We develop sensitivity analyses for the sample average treatment effect in matched observational studies while allowing unit-level treatment effects to vary. The methods may be applied to studies using any optimal without-replacement matching algorithm. In contrast to randomized experiments and to paired observational studies, we show for general matched designs that over a large class of test statistics, any procedure bounding the worst-case expectation while allowing for arbitrary effect heterogeneity must be unnecessarily conservative if treatment effects are actually constant across individuals. We present a sensitivity analysis which bounds the worst-case expectation while allowing for effect heterogeneity, and illustrate why it is generally conservative if effects are constant. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and that is valid for testing the sample average effect under additional restrictions which may be deemed benign by practitioners. Simulations demonstrate that this alternative procedure results in a valid sensitivity analysis for the weak null hypothesis under a host of reasonable data-generating processes. The procedures allow practitioners to assess robustness of estimated sample average treatment effects to hidden bias while allowing for effect heterogeneity in matched observational studies.
Affiliation(s)
- Colin B Fogarty (Operations Research and Statistics Group, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA)

12
Penning de Vries BBL, Groenwold RHH. Negative controls: Concepts and caveats. Stat Methods Med Res 2023; 32:1576-1587. [PMID: 37338976 PMCID: PMC10515451 DOI: 10.1177/09622802231181230] [Indexed: 06/21/2023]
Abstract
Unmeasured confounding is a well-known obstacle in causal inference. In recent years, negative controls have received increasing attention as an important tool to address this problem. The literature on the topic has expanded rapidly, and several authors have advocated the more routine use of negative controls in epidemiological practice. In this article, we review concepts and methodologies based on negative controls for the detection and correction of unmeasured confounding bias. We argue that negative controls may lack both specificity and sensitivity to detect unmeasured confounding, and that proving the null hypothesis of a null negative control association is impossible. We focus our discussion on the control outcome calibration approach, the difference-in-difference approach, and the double-negative control approach as methods for confounding correction. For each of these methods, we highlight their assumptions and illustrate the potential impact of violations thereof. Given the potentially large impact of assumption violations, it may sometimes be desirable to replace strong conditions for exact identification with weaker, easily verifiable conditions, even when these imply at most partial identification of unmeasured confounding. Future research in this area may broaden the applicability of negative controls and in turn make them better suited for routine use in epidemiological practice. At present, however, the applicability of negative controls should be judged carefully on a case-by-case basis.
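On the additive scale, the difference-in-difference approach reviewed here amounts to subtracting the exposure's association with the negative control outcome from its association with the outcome of interest; the ratio-scale analogue divides instead. A minimal sketch, valid only under the strong assumption that both associations carry identical confounding bias (the numerical values are hypothetical):

```python
def did_corrected_effect(effect_y: float, effect_nco: float) -> float:
    """Additive-scale correction: the exposure's association with a negative
    control outcome (which the exposure cannot causally affect) estimates the
    confounding bias shared with the real outcome, so subtracting it removes
    that bias under the comparability assumption."""
    return effect_y - effect_nco

def ratio_corrected_rr(rr_y: float, rr_nco: float) -> float:
    """Multiplicative-scale analogue for risk ratios."""
    return rr_y / rr_nco

print(round(did_corrected_effect(0.30, 0.12), 2))  # 0.18
print(ratio_corrected_rr(2.0, 1.25))               # 1.6
```

As the abstract stresses, this correction inherits the comparability assumption wholesale: if the negative control does not share the same confounding structure as the outcome, the "corrected" estimate can be more biased than the original.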
Affiliation(s)
- Bas BL Penning de Vries (Julius Center for Health Sciences and Primary Care, Utrecht University Medical Center, Utrecht, The Netherlands)
- Rolf HH Groenwold (Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands; Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands)

13
Liu X, Zhang Z, Valentino K, Wang L. The impact of omitting confounders in parallel process latent growth curve mediation models: Three sensitivity analysis approaches. Struct Equ Modeling 2023; 31:132-150. [PMID: 38706777 PMCID: PMC11068081 DOI: 10.1080/10705511.2023.2189551] [Received: 11/27/2022] [Accepted: 03/07/2023] [Indexed: 05/07/2024]
Abstract
Parallel process latent growth curve mediation models (PP-LGCMMs) are frequently used to longitudinally investigate the mediation effects of treatment on the level and change of outcome through the level and change of mediator. An important but often violated assumption in empirical PP-LGCMM analysis is the absence of omitted confounders of the relationships among treatment, mediator, and outcome. In this study, we analytically examined how omitting pretreatment confounders impacts the inference of mediation from the PP-LGCMM. Using the analytical results, we developed three sensitivity analysis approaches for the PP-LGCMM, including the frequentist, Bayesian, and Monte Carlo approaches. The three approaches help investigate different questions regarding the robustness of mediation results from the PP-LGCMM, and handle the uncertainty in the sensitivity parameters differently. Applications of the three sensitivity analyses are illustrated using a real-data example. A user-friendly Shiny web application is developed to conduct the sensitivity analyses.
Affiliation(s)
- Xiao Liu (The University of Texas at Austin)

14
Lupattelli A, Trinh NTH, Nordeng H. Association of maternal personality traits with medication use during pregnancy to appraise unmeasured confounding in long-term pharmacoepidemiological safety studies. Front Pharmacol 2023; 14:1160168. [PMID: 37256227 PMCID: PMC10225644 DOI: 10.3389/fphar.2023.1160168] [Received: 02/06/2023] [Accepted: 04/26/2023] [Indexed: 06/01/2023]
Abstract
Maternal personality is a possible confounder of the association between prenatal medication exposure and long-term developmental outcomes in offspring, but it is often unmeasured. This study aimed to (i) estimate the association between five maternal personality traits and prenatal use of acetaminophen (including extended use), opioid analgesics, antidepressants, benzodiazepines/z-hypnotics, and antipsychotics; and (ii) evaluate, using an applied example and the E-value framework, whether unmeasured confounding by maternal neuroticism could render the association between prenatal antidepressant use and child ADHD null. We used data from 8,879 pregnant women and recent mothers who participated in the Multinational Medication Use in Pregnancy Study, a web-based cross-sectional study performed between 1 October 2011 and 29 February 2012 in Europe, North America and Australia. Medication use in pregnancy was self-reported by the women. Personality was assessed with the Big Five Inventory, capturing the dimensions of neuroticism, extraversion, openness, agreeableness, and conscientiousness. Adjusted logistic regression analyses were conducted for each trait-medication pair, using survey weighting. There was a strong association between high neuroticism and prenatal use of antidepressants (odds ratio (OR): 5.63, 95% confidence interval (CI): 3.96-8.01), benzodiazepines/z-hypnotics (OR: 6.66, 95% CI: 4.05-10.95), and opioid analgesics (OR: 2.24, 95% CI: 1.41-3.56), but not antipsychotics. Among women with mental illness, this association attenuated for benzodiazepines/z-hypnotics and decreased to the null for antidepressants. High neuroticism (OR: 1.31, 95% CI: 1.08-1.59) and high openness (OR: 0.77, 95% CI: 0.64-0.93) were associated with extended use of acetaminophen. The E-value for the hazard ratio of 1.93 in the applied example was 3.27.
Had the example study used a population comparison group, high maternal neuroticism could have explained away the antidepressant-ADHD association; because the example study included only women with a mental illness, this risk of bias was assessed as minimal. Various personality dispositions in the mother are associated, to differing degrees, with prenatal use of medication. The strength of these associations can aid researchers in evaluating, via the E-value, the influence of uncontrolled confounding by maternal personality in long-term safety studies in pregnancy. This assessment should always be performed in addition to a rigorous study design using approaches that triangulate the evidence.
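The E-value in the applied example can be reproduced from the standard formula, treating the hazard ratio as an approximate risk ratio (as the E-value framework commonly does for rare outcomes):

```python
import math

def e_value(rr: float) -> float:
    """Minimum strength of association, on the risk-ratio scale, that an
    unmeasured confounder would need with both the exposure and the outcome
    to fully explain away an observed risk ratio."""
    if rr < 1.0:             # protective estimates: invert first
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(1.93), 2))  # 3.27, matching the E-value reported above
```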
Affiliation(s)
- Angela Lupattelli (Pharmacoepidemiology and Drug Safety Research Group, Department of Pharmacy, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway)
- Nhung T. H. Trinh (Pharmacoepidemiology and Drug Safety Research Group, Department of Pharmacy, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway)
- Hedvig Nordeng (Pharmacoepidemiology and Drug Safety Research Group, Department of Pharmacy, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway; Department of Child Health and Development, Norwegian Institute of Public Health, Oslo, Norway)

15
Li L, Jemielita T. Confounding adjustment in the analysis of augmented randomized controlled trial with hybrid control arm. Stat Med 2023. [PMID: 37186394 DOI: 10.1002/sim.9753] [Received: 07/21/2022] [Revised: 03/03/2023] [Accepted: 04/16/2023] [Indexed: 05/17/2023]
Abstract
The augmented randomized controlled trial (RCT) with a hybrid control arm includes a randomized treatment group (RT), a smaller randomized control group (RC), and a large synthetic control (SC) group drawn from real-world data. This kind of trial is useful when logistical or ethical hurdles preclude a fully powered RCT with equal allocation, or when it is necessary to increase the power of the RCT by incorporating real-world data. A difficulty in the analysis of augmented RCTs is that the SC and RC may differ systematically in the distribution of observed and unmeasured confounding factors, causing bias when the two control groups are analyzed together as hybrid controls. We propose to use propensity score (PS) analysis to balance the observed confounders between SC and RC. The possible bias caused by unmeasured confounders can be estimated and tested by analyzing propensity score-adjusted outcomes from SC and RC. We also propose a partial bias correction (PBC) procedure to reduce bias from unmeasured confounding. Extensive simulation studies show that the proposed PS + PBC procedures can improve efficiency and statistical power by effectively incorporating the SC into the RCT data analysis, while still controlling the estimation bias and Type I error inflation that might arise from unmeasured confounding. We illustrate the proposed statistical procedures with data from an augmented RCT in oncology.
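The PS balancing step described in this abstract can be sketched on simulated data. Everything below (the covariates, coefficients, and the SMR-type odds weighting toward the randomized-control population) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized controls (RC, s = 1) vs synthetic controls (SC, s = 0)
# that differ systematically on two measured covariates.
n = 4000
x = rng.normal(size=(n, 2))
p_rc = 1.0 / (1.0 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
s = rng.binomial(1, p_rc)

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        beta += np.linalg.solve((X.T * w) @ X, X.T @ (y - p))
    return beta

X = np.column_stack([np.ones(n), x])
beta = fit_logistic(X, s)
ps = 1.0 / (1.0 + np.exp(-X @ beta))       # estimated P(RC | covariates)

# SMR-type weighting: RC units keep weight 1; SC units get the propensity
# odds so the weighted SC resembles the RC population.
w = np.where(s == 1, 1.0, ps / (1.0 - ps))

def smd(v, s, w):
    """Weighted standardized mean difference between s = 1 and s = 0."""
    m1 = np.average(v[s == 1], weights=w[s == 1])
    m0 = np.average(v[s == 0], weights=w[s == 0])
    return abs(m1 - m0) / v.std()

print(round(smd(x[:, 0], s, np.ones(n)), 3), round(smd(x[:, 0], s, w), 3))
```

The second number (the weighted standardized mean difference) should be close to zero, showing the observed covariates are balanced; the abstract's point is that any remaining RC-SC outcome difference after this step signals unmeasured confounding.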
Affiliation(s)
- Liang Li (Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA)
- Thomas Jemielita (Early Oncology Statistics, Merck & Co., Inc., Rahway, New Jersey, USA)

16
Li KQ, Shi X, Miao W, Tchetgen ET. Double Negative Control Inference in Test-Negative Design Studies of Vaccine Effectiveness. ArXiv 2023:arXiv:2203.12509v4. [PMID: 35350548 PMCID: PMC8963685] [Revised: 07/07/2022] [Indexed: 10/26/2022]
Abstract
The test-negative design (TND) has become a standard approach to evaluate vaccine effectiveness against the risk of acquiring infectious diseases in real-world settings, such as influenza, rotavirus, dengue fever and, more recently, COVID-19. In a TND study, individuals who experience symptoms and seek care are recruited and tested for the infectious disease, which defines cases and controls. Despite the TND's potential to reduce unobserved differences in healthcare-seeking behavior (HSB) between vaccinated and unvaccinated subjects, it remains subject to various potential biases. First, residual confounding may remain due to unobserved HSB, occupation as a healthcare worker, or previous infection history. Second, because selection into the TND sample is a common consequence of infection and HSB, collider stratification bias may arise when conditioning the analysis on testing, which further induces confounding by latent HSB. In this paper, we present a novel approach to identify and estimate vaccine effectiveness in the target population by carefully leveraging a pair of negative control exposure and outcome variables to account for potential hidden bias in TND studies. We illustrate our proposed method with extensive simulation and an application to study COVID-19 vaccine effectiveness using data from the University of Michigan Health System.
Affiliation(s)
- Xu Shi (Department of Biostatistics, University of Michigan)
- Wang Miao (Department of Probability and Statistics, Peking University)
- Eric Tchetgen Tchetgen (Department of Statistics and Data Science, The Wharton School, University of Pennsylvania)

17
Wickramarachchi DS, Lim LHM, Sun B. Mediation analysis with multiple mediators under unmeasured mediator-outcome confounding. Stat Med 2023; 42:422-432. [PMID: 36502820 DOI: 10.1002/sim.9624] [Received: 05/30/2022] [Revised: 11/07/2022] [Accepted: 11/24/2022] [Indexed: 12/14/2022]
Abstract
It is often of interest in the health and social sciences to investigate the joint mediation effects of multiple post-exposure mediating variables. Identification of such joint mediation effects generally requires no unmeasured confounding of the outcome with respect to the whole set of mediators. As the number of mediators under consideration grows, this key assumption is likely to be violated, as it is often infeasible to intervene on any of the mediators. In this article, we develop a simple two-step method of moments estimation procedure to assess mediation with multiple mediators simultaneously in the presence of potential unmeasured mediator-outcome confounding. Our identification result leverages heterogeneity of the population exposure effect on the mediators, which is plausible under a variety of empirical settings. The proposed estimators are illustrated through both simulations and an application to evaluate the mediating effects of post-traumatic stress disorder symptoms in the association between self-efficacy and fatigue among health care workers during the COVID-19 outbreak.
Affiliation(s)
- Laura Huey Mien Lim (Department of Statistics and Data Science, National University of Singapore, Singapore)
- Baoluo Sun (Department of Statistics and Data Science, National University of Singapore, Singapore)

18
Papadogeorgou G. Discussion on "Spatial+: a novel approach to spatial confounding" by Emiko Dupont, Simon N. Wood, and Nicole H. Augustin. Biometrics 2022; 78:1305-1308. [PMID: 35712896 DOI: 10.1111/biom.13655] [Received: 07/04/2021] [Revised: 09/23/2021] [Accepted: 10/12/2021] [Indexed: 12/30/2022]
Abstract
I congratulate Dupont, Wood, and Augustin (DWA hereon) for providing an easy-to-implement method for estimation in the presence of spatial confounding, and for addressing some of the complicated aspects of the topic. I discuss conceptual and operational issues that are fundamental to inference in spatial settings: (i) the target quantity and its interpretability, (ii) the nonspatial aspect of covariates and their relative spatial scales, and (iii) the impact of spatial smoothing. While DWA provide some insights on these issues, I believe that the audience might benefit from a deeper discussion.
19
Leahy TP, Kent S, Sammon C, Groenwold RH, Grieve R, Ramagopalan S, Gomes M. Unmeasured confounding in nonrandomized studies: quantitative bias analysis in health technology assessment. J Comp Eff Res 2022; 11:851-859. [PMID: 35678151 DOI: 10.2217/cer-2022-0029] [Indexed: 12/25/2022]
Abstract
Evidence generated from nonrandomized studies (NRS) is increasingly submitted to health technology assessment (HTA) agencies. Unmeasured confounding is a primary concern with this type of evidence, as it may result in biased treatment effect estimates, and it has led to much criticism of NRS by HTA agencies. Quantitative bias analyses are a group of methods developed in the epidemiological literature to quantify the impact of unmeasured confounding and adjust effect estimates from NRS. The key considerations proposed in this article for their application in HTA reflect the need to balance methodological complexity with ease of application and interpretation, and the need to ensure the methods fit within the existing frameworks used to assess nonrandomized evidence by HTA bodies.
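A minimal flavor of the aggregate-level quantitative bias analysis discussed here is the classic external-adjustment formula for a single binary unmeasured confounder; the sketch below uses illustrative parameter values and is not a method proposed in the article itself:

```python
def adjust_rr(rr_obs: float, rr_ud: float, p1: float, p0: float) -> float:
    """Simple external adjustment of an observed risk ratio for one binary
    unmeasured confounder U.

    rr_ud : risk ratio linking U to the outcome
    p1    : prevalence of U among the treated
    p0    : prevalence of U among the untreated
    """
    bias = (p1 * (rr_ud - 1.0) + 1.0) / (p0 * (rr_ud - 1.0) + 1.0)
    return rr_obs / bias

# If U doubles the outcome risk and is much more common among the treated,
# an observed RR of 1.5 shrinks noticeably after adjustment:
print(round(adjust_rr(1.5, rr_ud=2.0, p1=0.5, p0=0.2), 3))  # 1.2
```

In a full QBA, the three bias parameters would be varied over plausible ranges (or given prior distributions) rather than fixed, which is exactly the complexity-versus-usability trade-off the article weighs for HTA settings.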
Affiliation(s)
- Seamus Kent (National Institute for Health & Care Excellence, Manchester, M1 4BT, UK)
- Rolf HH Groenwold (Department of Clinical Epidemiology & Department of Biomedical Data Sciences, Leiden University Medical Centre, Einthovenweg 20, Leiden, 2333, The Netherlands)
- Richard Grieve (Department of Health Services Research and Policy, London School of Hygiene & Tropical Medicine, London, WC1E 7HT, UK)
- Sreeram Ramagopalan (Global Access, F. Hoffmann-La Roche, Grenzacherstrasse 124, CH-4070, Basel, Switzerland)
- Manuel Gomes (Department of Applied Health Research, University College London, London, WC1E 6BT, UK)

20
Leahy TP, Duffield S, Kent S, Sammon C, Tzelis D, Ray J, Groenwold RH, Gomes M, Ramagopalan S, Grieve R. Application of quantitative bias analysis for unmeasured confounding in cost-effectiveness modelling. J Comp Eff Res 2022; 11:861-870. [PMID: 35678168 DOI: 10.2217/cer-2022-0030] [Indexed: 12/20/2022]
Abstract
Due to uncertainty regarding the potential impact of unmeasured confounding, health technology assessment (HTA) agencies often disregard evidence from nonrandomized studies when considering new technologies. Quantitative bias analysis (QBA) methods provide a means to quantify this uncertainty but have not been widely used in the HTA setting, particularly in the context of cost-effectiveness modelling (CEM). This study demonstrated the application of an aggregate and patient-level QBA approach to quantify and adjust for unmeasured confounding in a simulated nonrandomized comparison of survival outcomes. Application of the QBA output within a CEM through deterministic and probabilistic sensitivity analyses and under different scenarios of knowledge of an unmeasured confounder demonstrates the potential value of QBA in HTA.
Affiliation(s)
- Stephen Duffield (National Institute for Health & Care Excellence, Manchester, M1 4BT, UK)
- Seamus Kent (National Institute for Health & Care Excellence, Manchester, M1 4BT, UK)
- Joshua Ray (Global Access, F. Hoffmann-La Roche, Grenzacherstrasse 124, CH-4070, Basel, Switzerland)
- Rolf HH Groenwold (Leiden University Medical Centre, Department of Clinical Epidemiology & Department of Biomedical Data Sciences, Einthovenweg 20, 2333 ZC Leiden, The Netherlands)
- Sreeram Ramagopalan (Global Access, F. Hoffmann-La Roche, Grenzacherstrasse 124, CH-4070, Basel, Switzerland)
- Richard Grieve (London School of Hygiene & Tropical Medicine, London, WC1H 9SH, UK)

21
Richardson DB, Tchetgen Tchetgen EJ. Bespoke Instruments: A new tool for addressing unmeasured confounders. Am J Epidemiol 2022; 191:939-947. [PMID: 34907434 DOI: 10.1093/aje/kwab288] [Received: 02/10/2021] [Revised: 12/10/2021] [Accepted: 12/10/2021] [Indexed: 12/24/2022]
Abstract
Suppose that an investigator is interested in quantifying an exposure-disease causal association in a setting where the exposure, disease, and some potential confounders of the association of interest have been measured. However, there remains concern about residual confounding of the association of interest by unmeasured confounders. We propose an approach to account for residual bias due to unmeasured confounders. The proposed approach uses a measured confounder to derive a "bespoke" instrumental variable that is tailored to the study population and is used to control for bias due to residual confounding. The approach may provide a useful tool for assessing and accounting for bias due to residual confounding. We provide a formal description of the conditions for identification of causal effects, illustrate the method using simulations, and provide an empirical example concerning mortality among Japanese atomic bomb survivors.
22
Ananth CV, Brandt JS. A principled approach to mediation analysis in perinatal epidemiology. Am J Obstet Gynecol 2022; 226:24-32.e6. [PMID: 34991898 DOI: 10.1016/j.ajog.2021.10.028] [Received: 03/30/2021] [Revised: 10/21/2021] [Accepted: 10/21/2021] [Indexed: 01/03/2023]
Abstract
For many research questions in perinatal epidemiology, gestational age is a mediator that lies on the causal pathway between exposure and outcome. A mediator is an intermediate variable between an exposure and outcome, which is influenced by the exposure on the causal pathway to the outcome. Therefore, conventional analyses that adjust, stratify, or match for gestational age or its proxy (eg, preterm vs term deliveries) are problematic. This practice, which is entrenched in perinatal research, induces an overadjustment bias. Depending on the causal question, it may be inappropriate to adjust (or condition) for a mediator, such as gestational age, by either design or statistical analysis, but its effect can be quantified through causal mediation analysis. In an exposition of such methods, we demonstrated the relationship between the exposure and outcome and provided a formal analytical framework to quantify the extent to which a causal effect is influenced by a mediator. We reviewed concepts of confounding and causal inference, introduced the concept of a mediator and illustrated the perils of adjusting for a mediator in an exposure-outcome paradigm for a given causal question, adopted causal methods that call for an evaluation of a mediator in a causal exposure effect on the outcome, and discussed unmeasured confounding assumptions in mediation analysis. Furthermore, we reviewed other developments in the causal mediation analysis literature, including decomposition of a total effect when the mediator interacts with the exposure (4-way decomposition), methods for multiple mediators, mediation methods for case-control studies, mediation methods for time-to-event outcomes, sample size and power analysis for mediation analysis, and available software to apply these methods.
To illustrate these methods, we provided a clinical example estimating the risk of perinatal mortality (outcome) in relation to placental abruption (exposure) and determining the extent to which preterm delivery (mediator; a proxy for gestational age) plays a role in this causal effect. We hope that the adoption of the mediation methods described in this review will move research in perinatal epidemiology away from biased adjustments of mediators and toward a more nuanced quantification of effects that pose unique challenges and provide unique insights in our field.
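In the linear, no-interaction special case, the mediation decomposition reviewed above reduces to the familiar product-of-coefficients identity (indirect effect = a*b, total effect = direct + indirect). A noise-free numerical sketch with hypothetical coefficients, so every regression is exact:

```python
import numpy as np

def ols(y, X):
    """Least-squares slope coefficients of y on the columns of X
    (an intercept is added internally and then discarded)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Deterministic construction: the mediator disturbance e is exactly
# orthogonal to the exposure t, so all fits recover the true coefficients.
t = np.repeat([0.0, 1.0], 50)        # exposure
e = np.tile([0.5, -0.5], 50)         # mediator disturbance, orthogonal to t
m = 0.5 * t + e                      # mediator model: a = 0.5
y = 0.3 * t + 0.7 * m                # outcome model: direct = 0.3, b = 0.7

a = ols(m, t)[0]
direct, b = ols(y, np.column_stack([t, m]))
nie = a * b                          # natural indirect effect
total = ols(y, t)[0]                 # total effect from the reduced model

print(round(nie, 3), round(direct + nie, 3), round(total, 3))  # 0.35 0.65 0.65
```

The identity direct + a*b = total is what breaks down under the omitted-confounder scenarios the abstract studies, which is why the decomposition, rather than naive adjustment for the mediator, is the right target for sensitivity analysis.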
23
Zhou M, Yao W. Sensitivity analysis of unmeasured confounding in causal inference based on exponential tilting and super learner. J Appl Stat 2021; 50:744-760. [PMID: 36819084 PMCID: PMC9930795 DOI: 10.1080/02664763.2021.1999398] [Indexed: 10/19/2022]
Abstract
Causal inference under the potential outcome framework relies on the strongly ignorable treatment assumption. This assumption is usually questionable in observational studies, and the unmeasured confounding is one of the fundamental challenges in causal inference. To this end, we propose a new sensitivity analysis method to evaluate the impact of the unmeasured confounder by leveraging ideas of doubly robust estimators, the exponential tilt method, and the super learner algorithm. Compared to other existing methods of sensitivity analysis that parameterize the unmeasured confounder as a latent variable in the working models, the exponential tilting method does not impose any restrictions on the structure or models of the unmeasured confounders. In addition, in order to reduce the modeling bias of traditional parametric methods, we propose incorporating the super learner machine learning algorithm to perform nonparametric model estimation and the corresponding sensitivity analysis. Furthermore, most existing sensitivity analysis methods require multivariate sensitivity parameters, which make its choice difficult and subjective in practice. In comparison, the new method has a univariate sensitivity parameter with a nice and simple interpretation of log-odds ratios for binary outcomes, which makes its choice and the application of the new sensitivity analysis method very easy for practitioners.
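The univariate sensitivity parameter described above, a log-odds ratio for binary outcomes, can be illustrated in a deliberately stripped-down setting: under an exponential tilt, the unobserved counterfactual risk is a logit-shift, by gamma, of the observed risk. The sketch below (binary outcome, no covariates, a 50/50 treated split, the same tilt applied to both arms) is a hypothetical simplification, not the authors' doubly robust or super learner estimator:

```python
import math

def tilted_risk(p: float, gamma: float) -> float:
    """Exponentially tilt a binary-outcome risk p by exp(gamma * y):
    equivalent to shifting p by gamma on the log-odds scale."""
    return p * math.exp(gamma) / (1.0 - p + p * math.exp(gamma))

def ate_under_gamma(p1: float, p0: float, gamma: float) -> float:
    """Sensitivity-adjusted risk difference: each arm's unobserved
    counterfactual risk is taken as a tilted version of its observed risk.
    gamma = 0 recovers the naive unadjusted contrast."""
    ey1 = 0.5 * p1 + 0.5 * tilted_risk(p1, gamma)  # E[Y(1)] over both arms
    ey0 = 0.5 * p0 + 0.5 * tilted_risk(p0, gamma)
    return ey1 - ey0

# Observed risks 0.4 (treated) and 0.3 (untreated); sweep the sensitivity
# parameter to see how fragile the naive difference of 0.1 is:
for gamma in (0.0, -0.5, -1.0):
    print(round(ate_under_gamma(0.4, 0.3, gamma), 3))
```

Reporting the gamma at which the adjusted effect crosses zero is the usual way such a univariate parameter is communicated to practitioners.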
Affiliation(s)
- Mi Zhou (Department of Statistics, University of California, Riverside, CA, USA)
- Weixin Yao (Department of Statistics, University of California, Riverside, CA 92521, USA)

24
Affiliation(s)
- Tyler J VanderWeele (Harvard School of Public Health, Epidemiology and Biostatistics, 677 Huntington Ave, Boston, MA 02115, USA)

25
Abstract
Confounding, selection bias, and measurement error are well-known sources of bias in epidemiologic research. Methods for assessing these biases have their own limitations. Many quantitative sensitivity analysis approaches consider each type of bias individually, although more complex approaches are harder to implement or require numerous assumptions. By failing to consider multiple biases at once, researchers can underestimate, or overestimate, their joint impact. We show that it is possible to bound the total composite bias owing to these three sources and to use that bound to assess the sensitivity of a risk ratio to any combination of these biases. We derive bounds for the total composite bias under a variety of scenarios, providing researchers with tools to assess their total potential impact. We apply this technique to a study where unmeasured confounding and selection bias are both concerns and to another study in which possible differential exposure misclassification and confounding are concerns. The approach we describe, though conservative, is easier to implement and makes simpler assumptions than quantitative bias analysis. We provide R functions to aid implementation.
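The abstract's bound on total composite bias can be read, in simplified form, as a product of per-bias multiplicative factors on the risk-ratio scale. In the sketch below the confounding factor is the Ding-VanderWeele joint bounding factor; treating the selection-bias and misclassification bounds as user-supplied factors is an assumption of this sketch, not the paper's exact construction:

```python
def confounding_bound(rr_ud: float, rr_eu: float) -> float:
    """Ding-VanderWeele joint bounding factor for unmeasured confounding."""
    return rr_ud * rr_eu / (rr_ud + rr_eu - 1.0)

def composite_bound_rr(rr_obs: float, rr_ud: float, rr_eu: float,
                       selection_factor: float, misclass_factor: float) -> float:
    """Conservative corrected risk ratio when unmeasured confounding,
    selection bias, and differential misclassification could all bias the
    observed estimate upward: divide by the product of the three bounds."""
    total_bound = confounding_bound(rr_ud, rr_eu) * selection_factor * misclass_factor
    return rr_obs / total_bound

# Observed RR 3.0; modest confounding (both confounder RRs = 2.0), plus
# small hypothesized selection and misclassification factors:
print(round(composite_bound_rr(3.0, 2.0, 2.0, 1.1, 1.05), 3))  # 1.948
```

Because each factor is a worst-case bound, the product is conservative by design, which matches the abstract's caveat that the approach trades tightness for simplicity relative to full quantitative bias analysis.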
Affiliation(s)
- Louisa H. Smith (Department of Epidemiology, Harvard T. H. Chan School of Public Health, Boston, MA)
- Maya B. Mathur (Quantitative Sciences Unit, Stanford University, Palo Alto, CA)
- Tyler J. VanderWeele (Department of Epidemiology and Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA)

26
Abstract
Quantitative bias analyses allow researchers to adjust for uncontrolled confounding, given specification of certain bias parameters. When researchers are concerned about unknown confounders, plausible values for these bias parameters will be difficult to specify. Ding and VanderWeele developed bounding factor and E-value approaches that require the user to specify only some of the bias parameters. We describe the mathematical meaning of bounding factors and E-values and the plausibility of these methods in an applied context. We encourage researchers to pay particular attention to the assumption made, when using E-values, that the prevalence of the uncontrolled confounder among the exposed is 100% (or, equivalently, the prevalence of the exposure among those without the confounder is 0%). We contrast methods that attempt to bound biases or effects and alternative approaches such as quantitative bias analysis. We provide an example where failure to make this distinction led to erroneous statements. If the primary concern in an analysis is with known but unmeasured potential confounders, then E-values are not needed and may be misleading. In cases where the concern is with unknown confounders, the E-value assumption of an extreme possible prevalence of the confounder limits its practical utility.
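The bounding factor and E-value discussed in this abstract are linked by simple algebra: the E-value is the smallest common value of the two confounder-association parameters at which the bounding factor reaches the observed risk ratio. A minimal sketch:

```python
import math

def bounding_factor(rr_ud: float, rr_eu: float) -> float:
    """Maximum relative bias from an unmeasured confounder with risk ratio
    rr_ud for the outcome and rr_eu for the exposure (joint bounding factor)."""
    return rr_ud * rr_eu / (rr_ud + rr_eu - 1.0)

def e_value(rr_obs: float) -> float:
    """Smallest x such that bounding_factor(x, x) equals rr_obs."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

ev = e_value(2.0)
print(round(ev, 3), round(bounding_factor(ev, ev), 3))  # 3.414 2.0
```

Seen this way, the abstract's warning is concrete: the E-value is the bounding factor evaluated at an extreme corner of the parameter space, so when plausible ranges for the confounder's prevalence and associations are available, the full bounding factor (or quantitative bias analysis) is the more informative tool.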
Affiliation(s)
- Richard F. MacLehose (Division of Epidemiology and Community Health, School of Public Health, University of Minnesota, Minneapolis, MN)
- Thomas P. Ahern (Department of Surgery, Larner College of Medicine, University of Vermont, Burlington, VT)
- Timothy L. Lash (Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA)
- Charles Poole (Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC)
- Sander Greenland (Department of Epidemiology, Fielding School of Public Health, UCLA, Los Angeles, CA; Department of Statistics, College of Letters and Science, UCLA, Los Angeles, CA)

27
Stürmer T, Webster-Clark M, Lund JL, Wyss R, Ellis AR, Lunt M, Rothman KJ, Glynn RJ. Propensity Score Weighting and Trimming Strategies for Reducing Variance and Bias of Treatment Effect Estimates: A Simulation Study. Am J Epidemiol 2021; 190:1659-1670. [PMID: 33615349 DOI: 10.1093/aje/kwab041] [Received: 07/06/2020] [Revised: 02/05/2021] [Accepted: 02/15/2021] [Indexed: 12/30/2022]
Abstract
To extend previous simulations on the performance of propensity score (PS) weighting and trimming methods to settings without and with unmeasured confounding, Poisson outcomes, and various strengths of treatment prediction (PS c statistic), we simulated studies with a binary intended treatment T as a function of 4 measured covariates. We mimicked treatment withheld and last-resort treatment by adding 2 "unmeasured" dichotomous factors that directed treatment to change for some patients in both tails of the PS distribution. The number of outcomes Y was simulated as a Poisson function of T and confounders. We estimated the PS as a function of measured covariates and trimmed the tails of the PS distribution using 3 strategies ("Crump," "Stürmer," and "Walker"). After trimming and reestimation, we used alternative PS weights to estimate the treatment effect (rate ratio): inverse probability of treatment weighting, standardized mortality ratio (SMR)-treated, SMR-untreated, the average treatment effect in the overlap population (ATO), matching, and entropy. With no unmeasured confounding, the ATO (123%) and "Crump" trimming (112%) improved relative efficiency compared with untrimmed inverse probability of treatment weighting. With unmeasured confounding, untrimmed estimates were biased irrespective of weighting method, and only Stürmer and Walker trimming consistently reduced bias. In settings where unmeasured confounding (e.g., frailty) may lead physicians to withhold treatment, Stürmer and Walker trimming should be considered before primary analysis.
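To make the trimming and weighting vocabulary concrete, here is a small illustrative sketch (not the authors' code) of one of the three strategies, the Crump et al. rule of thumb that restricts analysis to propensity scores in [0.1, 0.9], together with two of the compared weighting schemes; in the actual workflow the PS would be re-estimated after trimming:

```python
def crump_trim_and_weight(ps, treated):
    """Keep subjects with propensity score in [0.1, 0.9] (Crump's rule
    of thumb), then compute inverse probability of treatment weights
    (targeting the ATE) and overlap weights (targeting the ATO).
    """
    kept = [(e, t) for e, t in zip(ps, treated) if 0.1 <= e <= 0.9]
    iptw = [1 / e if t else 1 / (1 - e) for e, t in kept]   # ATE weights
    ato = [1 - e if t else e for e, t in kept]              # overlap weights
    return kept, iptw, ato

# Two of four hypothetical subjects survive trimming: the treated subject
# at PS 0.5 gets IPTW 2.0, the untreated subject at PS 0.2 gets 1.25.
kept, iptw, ato = crump_trim_and_weight([0.05, 0.2, 0.5, 0.95], [1, 0, 1, 0])
print(iptw)  # [1.25, 2.0]
```

The Stürmer and Walker strategies trim on different criteria (treatment-specific PS percentiles and preference scores, respectively), but the trim-then-reweight structure is the same.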
28
Barberio J, Ahern TP, MacLehose RF, Collin LJ, Cronin-Fenton DP, Damkier P, Sørensen HT, Lash TL. Assessing Techniques for Quantifying the Impact of Bias Due to an Unmeasured Confounder: An Applied Example. Clin Epidemiol 2021; 13:627-635. [PMID: 34349564] [PMCID: PMC8326776] [DOI: 10.2147/clep.s313613]
Abstract
Purpose: To compare the magnitude of bias due to unmeasured confounding estimated from various techniques in an applied example. Patients and Methods: We examined the association between dibutyl phthalate (DBP) and incident estrogen receptor (ER)-positive breast cancer in a Danish nationwide cohort (N=1,122,042). Cox regression analyses were adjusted for age and active drug compounds contributing to DBP exposure. We estimated the hazard ratios (HRs) that would have been observed had one of the DBP sources been unmeasured and calculated the strength of confounding by comparing to the fully adjusted HR. We performed a quantitative bias analysis (QBA) of the “unmeasured” confounder, using external information to specify the bias parameters. Upper bounds on the bias were estimated and E-values were calculated. Results: The adjusted HR for incident ER-positive breast cancer among women with high-level (≥10,000 cumulative milligrams) versus no DBP exposure was 2.12 (95% confidence interval 1.12 to 4.05). Removing each DBP source in isolation resulted in negligible change in the HR. The bias estimates from the QBA ranged from 1.00 to 1.01. The estimated maximum impact of unmeasured confounding ranged from 1.01 to 1.51. E-values ranged from 3.46 to 3.68. Conclusion: The impact of bias due to simulated unmeasured confounding was negligible, in part, because the unmeasured variable was not independent of controlled variables. When a suspected confounder cannot be measured in the study data, our exercise suggests that QBA is the most informative method for assessing the impact. E-values may best be reserved for situations where uncontrolled confounding emanates from an unknown confounder.
Affiliation(s)
- Julie Barberio
- Rollins School of Public Health, Emory University, Atlanta, GA, USA
- Thomas P Ahern
- The Robert Larner College of Medicine, University of Vermont, Burlington, VT, USA
- Lindsay J Collin
- Rollins School of Public Health, Emory University, Atlanta, GA, USA
- Timothy L Lash
- Rollins School of Public Health, Emory University, Atlanta, GA, USA
29
Dharmarajan SH, Li Y, Lehmann D, Schaubel DE. Weighted estimators of the complier average causal effect on restricted mean survival time with observed instrument-outcome confounders. Biom J 2021; 63:712-724. [PMID: 33346382] [PMCID: PMC8035265] [DOI: 10.1002/bimj.201900284]
Abstract
A major concern in any observational study is unmeasured confounding of the relationship between a treatment and outcome of interest. Instrumental variable (IV) analysis methods are able to control for unmeasured confounding. However, IV analysis methods developed for censored time-to-event data tend to rely on assumptions that may not be reasonable in many practical applications, making them unsuitable for use in observational studies. In this report, we develop weighted estimators of the complier average causal effect (CACE) on the restricted mean survival time in the overall population as well as in an evenly matchable population (CACE-m). Our method is able to accommodate instrument-outcome confounding and adjust for covariate-dependent censoring, making it particularly suited for causal inference from observational studies. We establish the asymptotic properties and derive easily implementable asymptotic variance estimators for the proposed estimators. Through simulation studies, we show that the proposed estimators tend to be more efficient than instrument propensity score matching-based estimators or IPIW estimators. We apply our method to compare dialytic modality-specific survival for end stage renal disease patients using data from the U.S. Renal Data System.
Affiliation(s)
- Sai H. Dharmarajan
- Office of Biostatistics, U.S. Food and Drug Administration, Silver Spring, MD
- Yun Li
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA
- Douglas Lehmann
- Department of Management Science, University of Miami, Coral Gables, FL
- Douglas E. Schaubel
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA
30
Zhang W, Ghosh D. A general approach to sensitivity analysis for Mendelian randomization. Stat Biosci 2021; 13:34-55. [PMID: 33737984] [DOI: 10.1007/s12561-020-09280-5]
Abstract
Mendelian randomization (MR) represents a class of instrumental variable methods using genetic variants. It has become popular in epidemiological studies as a way to account for unmeasured confounders when estimating the effect of an exposure on an outcome. The success of MR depends on three critical assumptions, which are difficult to verify. Therefore, sensitivity analysis methods are needed for evaluating results and drawing plausible conclusions. We propose a general, easy-to-apply approach to conducting sensitivity analysis for MR studies. Bound et al. (1995) derived a formula for the asymptotic bias of the instrumental variable estimator. Building on their work, we derive a new sensitivity analysis formula whose parameters include the correlation between the instruments and the unmeasured confounder, the direct effect of the instruments on the outcome, and the strength of the instruments. In simulation studies, we examined our approach in various scenarios using either individual SNPs or an unweighted allele score as instruments. Using a previously published dataset from a bone mineral density study, we demonstrate that the proposed method is a useful tool for MR studies: investigators can combine their domain knowledge with our method to obtain bias-corrected results and make informed conclusions about the scientific plausibility of their findings.
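The quantity whose asymptotic bias Bound et al.'s formula characterizes is easiest to see in the simplest IV setting. A toy simulation under assumed data-generating values (an illustration, not the authors' analysis) shows how a valid instrument removes confounding bias via the Wald ratio cov(Z, Y)/cov(Z, X), whereas the naive regression slope stays biased:

```python
import random

def simulate_wald(n=100_000, beta=0.5, seed=1):
    """Toy simulation: a valid instrument Z (think of an unweighted
    allele score), an unmeasured confounder U affecting both exposure X
    and outcome Y. The naive slope cov(X, Y)/var(X) is biased by U;
    the Wald/IV ratio cov(Z, Y)/cov(Z, X) recovers the causal effect beta.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(n)]
    u = [rng.gauss(0, 1) for _ in range(n)]
    x = [zi + ui + rng.gauss(0, 1) for zi, ui in zip(z, u)]
    y = [beta * xi + ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]

    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

    naive = cov(x, y) / cov(x, x)   # confounded regression slope (~0.83 here)
    wald = cov(z, y) / cov(z, x)    # IV (Wald ratio) estimate (~0.5, the truth)
    return naive, wald

naive, wald = simulate_wald()
print(round(naive, 2), round(wald, 2))
```

The sensitivity parameters in the abstract describe exactly what breaks this recovery: correlation between Z and U, a direct Z-to-Y effect, and weak cov(Z, X).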
Affiliation(s)
- Weiming Zhang
- Department of Biostatistics and Informatics, Colorado School of Public Health, Aurora, Colorado, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, Colorado School of Public Health, Aurora, Colorado, USA
31
Comment L, Coull BA, Zigler C, Valeri L. Bayesian data fusion: Probabilistic sensitivity analysis for unmeasured confounding using informative priors based on secondary data. Biometrics 2021; 78:730-741. [PMID: 33527348] [DOI: 10.1111/biom.13436]
Abstract
Bayesian causal inference offers a principled approach to policy evaluation of proposed interventions on mediators or time-varying exposures. Building on the Bayesian g-formula method introduced by Keil et al., we outline a general approach for the estimation of population-level causal quantities involving dynamic and stochastic treatment regimes, including regimes related to mediation estimands such as natural direct and indirect effects. We further extend this approach to propose a Bayesian data fusion (BDF), an algorithm for performing probabilistic sensitivity analysis when a confounder unmeasured in a primary data set is available in an external data source. When the relevant relationships are causally transportable between the two source populations, BDF corrects confounding bias and supports causal inference and decision-making within the main study population without sharing of the individual-level external data set. We present results from a simulation study comparing BDF to two common frequentist correction methods for unmeasured mediator-outcome confounding bias in the mediation setting. We use these methods to analyze data on the role of stage at cancer diagnosis in contributing to Black-White colorectal cancer survival disparities.
Affiliation(s)
- Leah Comment
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Brent A Coull
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Corwin Zigler
- Department of Statistics and Data Sciences, University of Texas, Austin, Texas
- Linda Valeri
- Department of Biostatistics, Columbia University Mailman School of Public Health, New York, New York
32
Fu R, Sekercioglu N, Mathur MB, Couban R, Coyte PC. Dialysis Initiation and All-Cause Mortality Among Incident Adult Patients With Advanced CKD: A Meta-analysis With Bias Analysis. Kidney Med 2020; 3:64-75.e1. [PMID: 33604540] [PMCID: PMC7873831] [DOI: 10.1016/j.xkme.2020.09.013]
Abstract
Rationale & Objectives: Because of unmeasured confounding, observational studies have limitations when assessing whether dialysis initiation reduces mortality compared with conservative therapy among adults with advanced chronic kidney disease (CKD). We addressed this issue in this meta-analysis. Study Design: Meta-analysis with bias analysis for unmeasured confounding. Setting & Study Population: Adults with stage 4 or 5 CKD who had initiated dialysis or conservative treatment. Selection Criteria for Studies: Prospective or retrospective cohort studies comparing survival of dialysis versus conservatively managed patients, searched on MEDLINE and Embase from January 2009 to March 20, 2019. Data Extraction: HRs of all-cause mortality associated with dialysis initiation compared with conservative treatment. Analytical Approach: We pooled HRs using a random-effects model. We estimated the percentage of effect sizes more protective than an HR of 0.80 and the severity of unmeasured confounding that could reduce this percentage to only 10%. Subgroup analysis was performed for studies with only older patients (aged ≥65 years). Results: 12 studies were included, involving 16,609 dialysis patients and 3,691 conservatively managed patients. A random-effects model suggested that dialysis initiation was associated with a mean mortality HR of 0.47 (95% CI, 0.34-0.64), with 92% (95% CI, 50%-100%) of the true effects more protective than an HR of 0.80. To reduce the percentage of HRs <0.80 to 10%, unmeasured confounder(s) would need to be associated with both dialysis initiation and mortality by relative risks of 4.05 (95% CI, 2.39-4.15), equivalent to shifting each study's estimated HR by 2.31-fold (95% CI, 1.51-2.36). Restricting the analysis to studies of older patients did not modify the results. Limitations: Limited number of studies and limited evidence on the absence of publication bias. Conclusions: Our findings suggest that dialysis initiation considerably reduces mortality among adults with advanced CKD. Future bias-adjusted meta-analyses should assess outcomes beyond mortality.
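The reported 2.31-fold shift is internally consistent with the joint bounding factor of Ding and VanderWeele that underlies this style of bias analysis; a quick arithmetic check (ours, not the authors' code):

```python
def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    """Joint bounding factor (Ding & VanderWeele): the maximum factor by
    which an unmeasured confounder, associated with the exposure by rr_eu
    and with the outcome by rr_ud, can move an estimate on the ratio scale.
    """
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1)

# With both associations at 4.05, as reported in the abstract, the
# implied bound reproduces the stated 2.31-fold shift.
print(round(bounding_factor(4.05, 4.05), 2))  # 2.31
```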
Affiliation(s)
- Rui Fu
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Nigar Sekercioglu
- Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Maya B Mathur
- Quantitative Sciences Unit, Stanford University, Stanford, CA
- Rachel Couban
- Michael G. DeGroote Institute for Pain Research and Care, McMaster University, Hamilton, ON, Canada
- Peter C Coyte
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
33
Liu T, Hogan JW. Unifying instrumental variable and inverse probability weighting approaches for inference of causal treatment effect and unmeasured confounding in observational studies. Stat Methods Med Res 2020; 30:671-686. [PMID: 33213292] [DOI: 10.1177/0962280220971835]
Abstract
Confounding is a major concern when using data from observational studies to infer the causal effect of a treatment. Instrumental variables, when available, have been used to construct bound estimates on population average treatment effects when outcomes are binary and unmeasured confounding exists. With continuous outcomes, meaningful bounds are more challenging to obtain because the domain of the outcome is unrestricted. In this paper, we propose to unify the instrumental variable and inverse probability weighting methods, together with suitable assumptions in the context of an observational study, to construct meaningful bounds on causal treatment effects. The contextual assumptions are imposed in terms of the potential outcomes that are partially identified by data. The inverse probability weighting component incorporates a sensitivity parameter to encode the effect of unmeasured confounding. The instrumental variable and inverse probability weighting methods are unified using the principal stratification. By solving the resulting system of estimating equations, we are able to quantify both the causal treatment effect and the sensitivity parameter (i.e. the degree of the unmeasured confounding). We demonstrate our method by analyzing data from the HIV Epidemiology Research Study.
Affiliation(s)
- Tao Liu
- Department of Biostatistics, Center for Statistical Sciences, Brown University, Providence, RI, USA
- Joseph W Hogan
- Department of Biostatistics, Center for Statistical Sciences, Brown University, Providence, RI, USA
34
Zhang X, Stamey JD, Mathur MB. Assessing the impact of unmeasured confounders for credible and reliable real-world evidence. Pharmacoepidemiol Drug Saf 2020; 29:1219-1227. [PMID: 32929830] [DOI: 10.1002/pds.5117]
Abstract
PURPOSE: We review statistical methods for assessing the possible impact of bias due to unmeasured confounding in real-world data analysis and provide detailed recommendations for choosing among the methods. METHODS: By updating an earlier systematic review, we summarize modern statistical best practices for evaluating and correcting for potential bias due to unmeasured confounding in estimating causal treatment effects from non-interventional studies. RESULTS: We suggest a hierarchical structure for assessing unmeasured confounding. First, for initial sensitivity analyses, we strongly recommend applying a recently developed method, the E-value, that is straightforward to apply and does not require prior knowledge or assumptions about the unmeasured confounder(s). When some such knowledge is available, the E-value could be supplemented by the rule-out or array method at this step. If these initial analyses suggest results may not be robust to unmeasured confounding, subsequent analyses could be conducted using more specialized statistical methods, which we categorize based on whether they require access to external data on the suspected unmeasured confounder(s), internal data, or no data. Other factors for choosing the subsequent sensitivity analysis methods are also introduced and discussed, including the types of unmeasured confounders and whether the subsequent sensitivity analysis is intended to provide a corrected causal treatment effect. CONCLUSION: Various analytical methods have been proposed to address unmeasured confounding, but little research has discussed a structured approach to selecting appropriate methods in practice. By providing practical suggestions for choosing appropriate initial and, potentially, more specialized subsequent sensitivity analyses, we hope to facilitate the widespread reporting of such sensitivity analyses in non-interventional studies. The suggested approach also has the potential to inform pre-specification of sensitivity analyses before executing the analysis, and therefore to increase transparency and limit selective study reporting.
Affiliation(s)
- Xiang Zhang
- Eli Lilly and Company, Lilly Corporate Center, Indianapolis, Indiana, USA
- James D Stamey
- Department of Statistics, Baylor University, Waco, Texas, USA
- Maya B Mathur
- Quantitative Sciences Unit, Stanford University, Stanford, California, USA
35
Abstract
The use of causal mediation analysis to evaluate the pathways by which an exposure affects an outcome is widespread in the social and biomedical sciences. Recent advances in this area have established formal conditions for identification and estimation of natural direct and indirect effects. However, these conditions typically involve stringent assumptions of no unmeasured confounding and that the mediator has been measured without error. These assumptions may fail to hold in many practical settings where mediation methods are applied. The goal of this article is two-fold. First, we formally establish that the natural indirect effect can in fact be identified in the presence of unmeasured exposure-outcome confounding provided there is no additive interaction between the mediator and unmeasured confounder(s). Second, we introduce a new estimator of the natural indirect effect that is robust to both classical measurement error of the mediator and unmeasured confounding of both exposure-outcome and mediator-outcome relations under certain no interaction assumptions. We provide formal proofs and a simulation study to illustrate our results. In addition, we apply the proposed methodology to data from the Harvard President's Emergency Plan for AIDS Relief (PEPFAR) program in Nigeria.
Affiliation(s)
- Isabel R. Fulcher
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA
- Xu Shi
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA
36
Abstract
In order to design efficient interventions aimed at improving public health, policy makers need reliable information on the health burden of different risk factors. For this purpose, we are interested in the proportion of cases that could be prevented had some harmful exposure been eliminated from the population, i.e., the attributable fraction. The attributable fraction is a causal measure; thus, to estimate it from observational data, we have to make appropriate adjustment for confounding. However, some confounders may be unobserved, or even unknown to the investigator. A possible solution to this problem is instrumental variable analysis. In this work, we present how the attributable fraction can be estimated with instrumental variable methods based on the two-stage estimator or the G-estimator. One situation where the problem of unmeasured confounding may be particularly severe is when assessing the effect of low educational qualifications on coronary heart disease. Using Mendelian randomization, a special case of instrumental variable analysis, it has been claimed that low educational qualifications are a causal risk factor for coronary heart disease. We use Mendelian randomization to estimate the causal risk ratio and causal odds ratio of low educational qualifications as a risk factor for coronary heart disease with data from the UK Biobank. We compare the two-stage estimator and the G-estimator, as well as the attributable fraction based on the two estimators. The plausibility of drawing causal conclusions from this analysis is thoroughly discussed, and alternative genetic instrumental variables are tested.
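For readers unfamiliar with the estimand, the attributable fraction takes a compact form once a causal risk ratio is available, e.g. Miettinen's case-based formula AF = p_c(RR − 1)/RR, where p_c is the exposure prevalence among cases. A sketch with purely hypothetical numbers (not estimates from the UK Biobank analysis):

```python
def attributable_fraction(p_case_exposed: float, rr: float) -> float:
    """Population attributable fraction via Miettinen's case-based
    formula: AF = p_c * (RR - 1) / RR, where p_c is the exposure
    prevalence among cases and RR the causal risk ratio (e.g. obtained
    from an IV analysis). Assumes RR >= 1 and adequate confounding control.
    """
    return p_case_exposed * (rr - 1) / rr

# Hypothetical: if 30% of cases carry the exposure and the causal risk
# ratio is 1.5, about 10% of cases are attributable to the exposure.
print(round(attributable_fraction(0.3, 1.5), 2))  # 0.1
```

The point of the IV/G-estimation machinery in the paper is to supply an RR (or OR) for this formula that is valid despite unmeasured confounding.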
Affiliation(s)
- Elisabeth Dahlqwist
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Zoltán Kutalik
- University Center for Primary Care and Public Health, University of Lausanne, Lausanne, Switzerland
- Arvid Sjölander
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
37
Caniglia EC, Zash R, Swanson SA, Wirth KE, Diseko M, Mayondi G, Lockman S, Mmalane M, Makhema J, Dryden-Peterson S, Kponee-Shovein KZ, John O, Murray EJ, Shapiro RL. Methodological Challenges When Studying Distance to Care as an Exposure in Health Research. Am J Epidemiol 2019; 188:1674-1681. [PMID: 31107529] [PMCID: PMC6735874] [DOI: 10.1093/aje/kwz121]
Abstract
Distance to care is a common exposure and proposed instrumental variable in health research, but it is vulnerable to violations of fundamental identifiability conditions for causal inference. We used data collected from the Botswana Birth Outcomes Surveillance study between 2014 and 2016 to outline 4 challenges and potential biases when using distance to care as an exposure and as a proposed instrument: selection bias, unmeasured confounding, lack of sufficiently well-defined interventions, and measurement error. We describe how these issues can arise, and we propose sensitivity analyses for estimating the degree of bias.
Affiliation(s)
- Ellen C Caniglia
- Department of Population Health, New York University School of Medicine, New York, New York
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Rebecca Zash
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Sonja A Swanson
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Kathleen E Wirth
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Modiegi Diseko
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Gloria Mayondi
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Shahin Lockman
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Department of Immunology and Infectious Diseases, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Mompati Mmalane
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Joseph Makhema
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Scott Dryden-Peterson
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Department of Immunology and Infectious Diseases, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Oaitse John
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Eleanor J Murray
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Roger L Shapiro
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Botswana-Harvard AIDS Institute Partnership, Gaborone, Botswana
- Department of Immunology and Infectious Diseases, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
38
Hjellvik V, De Bruin ML, Samuelsen SO, Karlstad Ø, Andersen M, Haukka J, Vestergaard P, de Vries F, Furu K. Adjusting for unmeasured confounding using validation data: Simplified two-stage calibration for survival and dichotomous outcomes. Stat Med 2019; 38:2719-2734. [PMID: 30828842] [DOI: 10.1002/sim.8131]
Abstract
In epidemiology, one typically wants to estimate the risk of an outcome associated with an exposure after adjusting for confounders. Sometimes the outcome, the exposure, and some confounders are available in a large main data set, whereas other important confounders are available only in a validation data set that is typically a subset of the main data set. A generally applicable method in this situation is two-stage calibration (TSC). We present a simplified, easy-to-implement version of TSC for the case where the validation data are a subset of the main data. We compared the simplified version to the standard TSC version for incidence rate ratios, odds ratios, relative risks, and hazard ratios using simulated data, and the simplified version performed better than our implementation of the standard version. The simplified version was also tested on real data and performed well.
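As rough intuition for the approach, here is a simplified sketch of the general two-stage calibration idea with hypothetical numbers; this is an assumption about the method's general form, not the authors' exact estimator, whose details and variance formula are more involved. The main-study estimate, adjusted only for measured confounders, is shifted on the log scale by the change observed in the validation subset when the extra confounders are added:

```python
import math

def tsc_corrected_rr(rr_main_partial, rr_val_partial, rr_val_full):
    """Sketch of a calibration-style correction on the log scale:
    rr_main_partial  - main-study estimate, measured confounders only
    rr_val_partial   - validation-subset estimate, same adjustment
    rr_val_full      - validation-subset estimate, fully adjusted
    The validation data tell us how much full adjustment moves the
    estimate; that shift is applied to the main-study estimate.
    """
    correction = math.log(rr_val_full) - math.log(rr_val_partial)
    return math.exp(math.log(rr_main_partial) + correction)

# Hypothetical: main-study RR 2.0; in the validation subset the RR moves
# from 1.8 to 1.5 after full adjustment, so the corrected RR is
# 2.0 * 1.5 / 1.8.
print(round(tsc_corrected_rr(2.0, 1.8, 1.5), 3))  # 1.667
```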
Affiliation(s)
- Vidar Hjellvik
- Department of Chronic Diseases and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Marie L De Bruin
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Pharmacy, Copenhagen Centre for Regulatory Science, University of Copenhagen, Copenhagen, Denmark
- Sven O Samuelsen
- Department of Chronic Diseases and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Department of Mathematics, University of Oslo, Oslo, Norway
- Øystein Karlstad
- Department of Chronic Diseases and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Morten Andersen
- Centre for Pharmacoepidemiology, Karolinska Institutet, Clinical Epidemiology Division, Karolinska University Hospital, Solna, Sweden
- Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Research Unit of General Practice, University of Southern Denmark, Odense, Denmark
- Jari Haukka
- Department of Public Health, University of Helsinki, Helsinki, Finland
- Peter Vestergaard
- Department of Clinical Medicine and Department of Endocrinology, Aalborg University Hospital, Aalborg, Denmark
- Frank de Vries
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Clinical Pharmacy and Toxicology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Kari Furu
- Department of Chronic Diseases and Ageing, Norwegian Institute of Public Health, Oslo, Norway
39
Handorf EA, Heitjan DF, Bekelman JE, Mitra N. Estimating cost-effectiveness from claims and registry data with measured and unmeasured confounders. Stat Methods Med Res 2019; 28:2227-2242. [PMID: 29468944] [PMCID: PMC7673459] [DOI: 10.1177/0962280218759137]
Abstract
The analysis of observational data to determine the cost-effectiveness of medical treatments is complicated by the need to account for skewness, censoring, and the effects of measured and unmeasured confounders. We quantify cost-effectiveness as the Net Monetary Benefit (NMB), a linear combination of the treatment effects on cost and effectiveness that denominates utility in monetary terms. We propose a parametric estimation approach that describes cost with a Gamma generalized linear model and survival time (the canonical effectiveness variable) with a Weibull accelerated failure time model. To account for correlation between cost and survival, we propose a bootstrap procedure to compute confidence intervals for NMB. To examine sensitivity to unmeasured confounders, we derive simple approximate relationships between naïve parameters, assuming only measured confounders, and the values those parameters would take if there was further adjustment for a single unmeasured confounder with a specified distribution. A simulation study shows that the method returns accurate estimates for treatment effects on cost, survival, and NMB under the assumed model. We apply our method to compare two treatments for Stage II/III bladder cancer, concluding that the NMB is sensitive to hypothesized unmeasured confounders that represent smoking status and personal income.
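The NMB itself is a simple linear combination; a minimal sketch with hypothetical inputs (the paper's contribution lies in the Gamma/Weibull modeling, the bootstrap confidence intervals, and the unmeasured-confounder sensitivity analysis built around this quantity):

```python
def net_monetary_benefit(delta_effect, delta_cost, wtp):
    """Net Monetary Benefit: NMB = wtp * dE - dC, where wtp is the
    willingness to pay per unit of effectiveness (e.g. per life-year),
    dE the incremental effectiveness, and dC the incremental cost.
    A treatment is deemed cost-effective when NMB > 0.
    """
    return wtp * delta_effect - delta_cost

# Hypothetical: 0.4 extra life-years at $20,000 extra cost, with a
# willingness to pay of $100,000 per life-year.
print(net_monetary_benefit(0.4, 20_000, 100_000))  # 20000.0
```

In the paper's setting, delta_effect and delta_cost would come from the Weibull survival and Gamma cost models respectively, with the bootstrap propagating their correlation into the NMB confidence interval.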
Affiliation(s)
- Daniel F Heitjan
- Southern Methodist University and University of Texas Southwestern Medical Center, Dallas, TX, USA
- Justin E Bekelman
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA

40
Kuenzig ME, Sadatsafavi M, Aviña-Zubieta JA, Burne RM, Abrahamowicz M, Beauchamp ME, Kaplan GG, Benchimol EI. Asthma is not associated with the need for surgery in Crohn's disease when controlling for smoking status: a population-based cohort study. Clin Epidemiol 2018; 10:831-840. [PMID: 30038523 PMCID: PMC6049604 DOI: 10.2147/clep.s156772]
Abstract
Purpose Growing evidence suggests asthma and Crohn's disease commonly co-occur. However, the impact of asthma on the prognosis of Crohn's disease is unknown. The aim of our study was to assess the effect of asthma on the need for intestinal resection in patients with Crohn's disease while adjusting for smoking status, imputed from a smaller, secondary data set. Patients and methods Using health administrative data from a universally funded healthcare plan in Alberta, Canada, we conducted a cohort study to assess the effect of asthma on the need for surgery in patients with Crohn's disease diagnosed between 2002 and 2008 (N=2,113). Validated algorithms were used to identify incident cases of Crohn's disease, co-occurring asthma, and intestinal resection. The association between asthma and intestinal resection was estimated using multivariable Cox proportional hazards regression. Smoking status was imputed using a novel method based on martingale residuals, derived from a data set of 485 patients enrolled in the Alberta Inflammatory Bowel Disease Consortium (2007 to 2014) who completed environmental questionnaires. All analyses were adjusted for age, sex, rural/urban status, and mean neighborhood income quintile. Results Asthma did not increase the risk of surgery in the health administrative data when not adjusting for smoking status (HR 1.03, 95% CI 0.81 to 1.29). The association remained nonsignificant after imputing smoking status in the health administrative data (HR 1.03, 95% CI 0.81 to 1.29). Conclusion Although asthma is associated with an increased risk of Crohn's disease, co-occurring asthma is not associated with the risk of surgery in these patients. This null association persisted after adjusting for smoking status. This study described a novel method to adjust for confounding (smoking status) in time-to-event analyses, even when the confounding variable is unmeasured in health administrative data.
Affiliation(s)
- M Ellen Kuenzig
- Children's Hospital of Eastern Ontario (CHEO) Inflammatory Bowel Disease Centre, Division of Gastroenterology, Hepatology and Nutrition, Children's Hospital of Eastern Ontario, Ottawa, ON, Canada; Health Information Technology Program, CHEO Research Institute, Ottawa, ON, Canada; Institute for Clinical Evaluative Sciences, Ottawa, ON, Canada
- Mohsen Sadatsafavi
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC, Canada
- J Antonio Aviña-Zubieta
- Arthritis Research Canada, Division of Rheumatology, University of British Columbia, Vancouver, BC, Canada
- Rebecca M Burne
- Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, QC, Canada
- Michal Abrahamowicz
- Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, QC, Canada; Centre for Health Outcomes Research (CORE), Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Marie-Eve Beauchamp
- Centre for Health Outcomes Research (CORE), Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Gilaad G Kaplan
- Department of Medicine, University of Calgary, Calgary, AB, Canada; Department of Community Health Sciences, University of Calgary, Calgary, AB, Canada; O'Brien Institute for Public Health, University of Calgary, Calgary, AB, Canada; Snyder Institute for Chronic Diseases, University of Calgary, Calgary, AB, Canada
- Eric I Benchimol
- Children's Hospital of Eastern Ontario (CHEO) Inflammatory Bowel Disease Centre, Division of Gastroenterology, Hepatology and Nutrition, Children's Hospital of Eastern Ontario, Ottawa, ON, Canada; Health Information Technology Program, CHEO Research Institute, Ottawa, ON, Canada; Institute for Clinical Evaluative Sciences, Ottawa, ON, Canada; Department of Pediatrics, University of Ottawa, Ottawa, ON, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada

41
Kasza J, Wolfe R, Schuster T. Assessing the impact of unmeasured confounding for binary outcomes using confounding functions. Int J Epidemiol 2018; 46:1303-1311. [PMID: 28338913 DOI: 10.1093/ije/dyx023]
Abstract
A critical assumption of causal inference is that of no unmeasured confounding: for estimated exposure effects to have valid causal interpretations, a sufficient set of predictors of exposure and outcome must be adequately measured and correctly included in the respective inference model(s). In an observational study setting, this assumption will often be unsatisfied, and the potential impact of unmeasured confounding on effect estimates should be investigated. The confounding function approach allows the impact of unmeasured confounding on estimates to be assessed, where unmeasured confounding may be due to unmeasured confounders and/or biases such as collider bias or information bias. Although this approach is easy to implement and pertains to the sum of all bias, its use has not been widespread, and discussion has typically been limited to continuous outcomes. In this paper, we consider confounding functions for use with binary outcomes and illustrate the approach with an example. We note that confounding function choice encodes assumptions about effect modification: some choices encode the belief that the true causal effect differs across exposure groups, whereas others imply that any difference between the true causal parameter and the estimate is entirely due to imbalanced risks between exposure groups. The confounding function approach is a useful method for assessing the impact of unmeasured confounding, in particular when alternative approaches, e.g. external adjustment or instrumental variable approaches, cannot be applied. We provide Stata and R code for the implementation of this approach when the causal estimand of interest is an odds or risk ratio.
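The confounding-function logic can be shown in a few lines. The sketch below is a simplified version of the sensitivity analysis the abstract describes (the paper supplies fuller Stata and R code): the observed risks and the grid of sensitivity values are illustrative, and a single parameter c on the risk-ratio scale stands in for the confounding function:

```python
# Observed risks in the exposed and unexposed groups (illustrative numbers).
p1, p0 = 0.30, 0.20
rr_obs = p1 / p0

# Sensitivity parameter c: the ratio of counterfactual risks between
# exposure groups that would hold absent any exposure effect. c = 1 means
# no unmeasured confounding; the analyst varies c over a plausible range,
# since it encodes the sum of all residual bias.
for c in [0.8, 0.9, 1.0, 1.1, 1.25]:
    rr_adj = rr_obs / c
    print(f"c = {c:.2f} -> bias-adjusted RR = {rr_adj:.2f}")
```

Reading the output as a curve over c is the whole method: if the adjusted estimate crosses the null only at implausible values of c, the finding is robust to the hypothesized bias.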
Affiliation(s)
- Jessica Kasza
- Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Rory Wolfe
- Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Tibor Schuster
- Clinical Epidemiology & Biostatistics Unit and Melbourne Children's Trial Centre, Murdoch Childrens Research Institute, Melbourne, Victoria, Australia; Department of Family Medicine, McGill University, Montreal, Quebec, Canada

42
Wan F, Small D, Mitra N. A general approach to evaluating the bias of 2-stage instrumental variable estimators. Stat Med 2018; 37:1997-2015. [PMID: 29572890 DOI: 10.1002/sim.7636]
Abstract
Unmeasured confounding is a common concern when researchers attempt to estimate a treatment effect using observational data or randomized studies with imperfect compliance. To address this concern, instrumental variable methods, such as 2-stage predictor substitution (2SPS) and 2-stage residual inclusion (2SRI), have been widely adopted. In many clinical studies of binary and survival outcomes, 2SRI has been accepted as the method of choice over 2SPS, but a compelling theoretical rationale has not been postulated. We evaluate the bias and consistency in estimating the conditional treatment effect for both 2SPS and 2SRI when the outcome is binary, count, or time to event. We demonstrate analytically that the bias in 2SPS and 2SRI estimators can be reframed to mirror the problem of omitted variables in nonlinear models and that there is a direct relationship with the collapsibility of effect measures. In contrast to conclusions made by previous studies (Terza et al., 2008), we demonstrate that the consistency of 2SRI estimators only holds under the following conditions: (1) when the null hypothesis is true; (2) when the outcome model is collapsible; or (3) when estimating the nonnull causal effect from Cox or logistic regression models, the strong and unrealistic assumption that the effect of the unmeasured covariates on the treatment is proportional to their effect on the outcome needs to hold. We propose a novel dissimilarity metric to provide an intuitive explanation of the bias of 2SRI estimators in noncollapsible models and demonstrate that with increasing dissimilarity between the effects of the unmeasured covariates on the treatment versus outcome, the bias of 2SRI increases in magnitude.
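The two estimators compared in this abstract differ only in what the second stage receives from the first: 2SPS substitutes the predicted treatment, 2SRI keeps the observed treatment and adds the first-stage residual. A self-contained simulation sketch (the data-generating process, parameter values, and the hand-rolled logistic fitter are assumptions for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Simulated data: Z instrument, U unmeasured confounder, A treatment,
# Y binary outcome with a true conditional log-odds ratio of 0.5 for A.
Z = rng.binomial(1, 0.5, n)
U = rng.normal(0, 1, n)
A = (0.8 * Z + 0.8 * U + rng.normal(0, 1, n) > 0).astype(float)
logit = -1.0 + 0.5 * A + 0.8 * U
Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def logistic_fit(X, y, iters=50):
    """Plain Newton-Raphson (IRLS) logistic regression."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

# Stage 1: linear regression of treatment on the instrument.
X1 = np.column_stack([np.ones(n), Z])
g = np.linalg.lstsq(X1, A, rcond=None)[0]
A_hat = X1 @ g
resid = A - A_hat

# 2SPS: substitute the predicted treatment into the outcome model.
b_2sps = logistic_fit(np.column_stack([np.ones(n), A_hat]), Y)

# 2SRI: keep the observed treatment, add the first-stage residual.
b_2sri = logistic_fit(np.column_stack([np.ones(n), A, resid]), Y)

print("2SPS log-odds ratio:", round(b_2sps[1], 3))
print("2SRI log-odds ratio:", round(b_2sri[1], 3))
```

Because the logistic model is noncollapsible, neither estimate should be expected to recover the conditional log-odds ratio of 0.5 exactly, which is precisely the bias this paper characterizes.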
Affiliation(s)
- Fei Wan
- Department of Biostatistics, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Dylan Small
- Department of Statistics, University of Pennsylvania, Philadelphia, PA, USA
- Nandita Mitra
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA

43
Lu B, Cai D, Tong X. Testing causal effects in observational survival data using propensity score matching design. Stat Med 2018; 37:1846-1858. [PMID: 29399833 DOI: 10.1002/sim.7599]
Abstract
Time-to-event data are very common in observational studies. Unlike randomized experiments, observational studies suffer from both observed and unobserved confounding biases. To adjust for observed confounding in survival analysis, the commonly used methods are the Cox proportional hazards (PH) model, the weighted logrank test, and the inverse probability of treatment weighted Cox PH model. These methods do not rely on fully parametric models, but their practical performances are highly influenced by the validity of the PH assumption. Also, there are few methods addressing the hidden bias in causal survival analysis. We propose a strategy to test for survival function differences based on the matching design and explore sensitivity of the P-values to assumptions about unmeasured confounding. Specifically, we apply the paired Prentice-Wilcoxon (PPW) test or the modified PPW test to the propensity score matched data. Simulation studies show that the PPW-type test has higher power in situations when the PH assumption fails. For potential hidden bias, we develop a sensitivity analysis based on the matched pairs to assess the robustness of our finding, following Rosenbaum's idea for nonsurvival data. For a real data illustration, we apply our method to an observational cohort of chronic liver disease patients from a Mayo Clinic study. The PPW test based on observed data initially shows evidence of a significant treatment effect. But this finding is not robust, as the sensitivity analysis reveals that the P-value becomes nonsignificant if there exists an unmeasured confounder with a small impact.
Affiliation(s)
- Bo Lu
- Division of Biostatistics, College of Public Health, The Ohio State University, Columbus, OH 43210, U.S.A
- Dingjiao Cai
- School of Mathematics and Information Science, Henan University of Economics and Law, Henan, China
- Xingwei Tong
- Department of Statistics, Beijing Normal University, Beijing 100875, China

44
Shardell M, Ferrucci L. Joint mixed-effects models for causal inference with longitudinal data. Stat Med 2017; 37:829-846. [PMID: 29205454 DOI: 10.1002/sim.7567]
Abstract
Causal inference with observational longitudinal data and time-varying exposures is complicated due to the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
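The g-computation step itself is simple once an outcome model has been fitted. The sketch below applies it in a deliberately simplified point-exposure setting with an ordinary linear model, not the joint mixed-effects model the paper develops; data and coefficients are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Simplified point-exposure data: L measured confounder, A exposure,
# Y outcome with a true average causal effect of 1.5.
L = rng.normal(0, 1, n)
A = (0.7 * L + rng.normal(0, 1, n) > 0).astype(float)
Y = 2.0 + 1.5 * A + 1.0 * L + rng.normal(0, 1, n)

# Fit the outcome model (here ordinary least squares).
X = np.column_stack([np.ones(n), A, L])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]

# G-computation: predict every subject's outcome under "everyone exposed"
# and "everyone unexposed", average, and contrast the two means.
X1 = X.copy(); X1[:, 1] = 1.0
X0 = X.copy(); X0[:, 1] = 0.0
ate = (X1 @ beta).mean() - (X0 @ beta).mean()
print("g-computation estimate of the average causal effect:", round(ate, 2))
```

The paper's contribution is to drive this standardization step with parameters of a shared-random-effects joint model for the outcome and exposure processes, which is what handles time-dependent and unmeasured confounding.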
Affiliation(s)
- Michelle Shardell
- National Institute on Aging, 3001 S. Hanover Street, Baltimore, Maryland 21225, USA
- Luigi Ferrucci
- National Institute on Aging, 3001 S. Hanover Street, Baltimore, Maryland 21225, USA

45
Ananth CV, Schisterman EF. Confounding, causality, and confusion: the role of intermediate variables in interpreting observational studies in obstetrics. Am J Obstet Gynecol 2017; 217:167-175. [PMID: 28427805 DOI: 10.1016/j.ajog.2017.04.016]
Abstract
Prospective and retrospective cohorts and case-control studies are some of the most important study designs in epidemiology because, under certain assumptions, they can mimic a randomized trial when done well. These assumptions include, but are not limited to, properly accounting for 2 important sources of bias: confounding and selection bias. While leaving an intermediate variable unadjusted yields an unbiased estimate of the exposure-outcome's total causal effect, obstetricians often want to adjust for an intermediate to assess whether it is the underlying driver of the association. Such a practice must be weighed in light of the underlying research question, and whether the adjustment is necessary should be carefully considered. Gestational age is, by far, the most commonly encountered variable in obstetrics that is often mislabeled as a confounder when, in fact, it may be an intermediate. If gestational age is indeed an intermediate but is mistakenly labeled as a confounding variable and consequently adjusted for in an analysis, the conclusions can be unexpected. The implications of this overadjustment of an intermediate as though it were a confounder can render an otherwise persuasive study downright meaningless. This commentary provides an exposition of confounding bias, collider stratification, and selection biases, with applications in obstetrics and perinatal epidemiology.
46
Dong YH, Alcusky M, Maio V, Liu J, Liu M, Wu LC, Chang CH, Lai MS, Gagne JJ. Evidence of potential bias in a comparison of β blockers and calcium channel blockers in patients with chronic obstructive pulmonary disease and acute coronary syndrome: results of a multinational study. BMJ Open 2017; 7:e012997. [PMID: 28363921 PMCID: PMC5387948 DOI: 10.1136/bmjopen-2016-012997]
Abstract
OBJECTIVES A number of observational studies have reported that, in patients with chronic obstructive pulmonary disease (COPD), β blockers (BBs) decrease risk of mortality and COPD exacerbations. To address important methodological concerns of these studies, we compared the effectiveness and safety of cardioselective BBs versus non-dihydropyridine calcium channel blockers (non-DHP CCBs) in patients with COPD and acute coronary syndromes (ACS) using a propensity score (PS)-matched, active comparator, new user design. We also assessed for potential unmeasured confounding by examining a short-term COPD hospitalisation outcome. SETTING AND PARTICIPANTS We identified 22 985 patients with COPD and ACS starting cardioselective BBs or non-DHP CCBs across 5 claims databases from the USA, Italy and Taiwan. PRIMARY AND SECONDARY OUTCOME MEASURES Stratified Cox regression models were used to estimate HRs for mortality, cardiovascular (CV) hospitalisations and COPD hospitalisations in each database after variable-ratio PS matching. Results were combined with random-effects meta-analyses. RESULTS Cardioselective BBs were not associated with reduced risk of mortality (HR, 0.90; 95% CI 0.78 to 1.02) or CV hospitalisations (HR, 1.06; 95% CI 0.91 to 1.23), although statistical heterogeneity was observed across databases. In contrast, a consistent, inverse association for COPD hospitalisations was identified across databases (HR, 0.54; 95% CI 0.47 to 0.61), which persisted even within the first 30 days of follow-up (HR, 0.55; 95% CI 0.37 to 0.82). Results were similar across a variety of sensitivity analyses, including PS trimming, high dimensional-PS matching and restricting to high-risk patients. CONCLUSIONS This multinational study found a large inverse association between cardioselective BBs and short-term COPD hospitalisations. The persistence of this bias despite state-of-the-art pharmacoepidemiologic methods calls into question the ability of claims data to address confounding in studies of BBs in patients with COPD.
Affiliation(s)
- Yaa-Hui Dong
- Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Faculty of Pharmacy, National Yang-Ming University, Taipei, Taiwan
- Matthew Alcusky
- Jefferson College of Population Health, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Department of Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, Massachusetts, USA
- Vittorio Maio
- Jefferson College of Population Health, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Jun Liu
- Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Mengdan Liu
- Center for Research in Medical Education and Health Care, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Li-Chiu Wu
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Chia-Hsuin Chang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Graduate Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei, Taiwan
- Mei-Shu Lai
- Graduate Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei, Taiwan
- Joshua J Gagne
- Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts, USA

47
Uddin MJ, Groenwold RHH, de Boer A, Afonso ASM, Primatesta P, Becker C, Belitser SV, Hoes AW, Roes KCB, Klungel OH. Evaluating different physician's prescribing preference based instrumental variables in two primary care databases: a study of inhaled long-acting beta2-agonist use and the risk of myocardial infarction. Pharmacoepidemiol Drug Saf 2017; 25 Suppl 1:132-41. [PMID: 27038359 DOI: 10.1002/pds.3860]
Abstract
PURPOSE Instrumental variable (IV) analysis with physician's prescribing preference (PPP) as IV is increasingly used in pharmacoepidemiology. However, it is unclear whether this IV performs consistently across databases. We aimed to evaluate the validity of different PPPs in a study of inhaled long-acting beta2-agonist (LABA) use and myocardial infarction (MI). METHODS Information on adults with asthma and/or COPD and at least one prescription of beta2-agonist, or muscarinic antagonist was extracted from the CPRD (UK) and the Mondriaan (Netherlands) databases. LABA exposure was considered time-fixed or time-varying. We measured PPPs using previous LABA prescriptions of physicians or proportion of LABA prescriptions per practice. Correlation (r) and standardized difference (SDif) were used to assess the assumptions of IV analysis. RESULTS For time-fixed LABA, the IV based on 10 previous prescriptions outperformed the other IVs regarding strength of the IV (r ≥ 0.15) and balance of confounders between IV categories (SDif < 0.10). None of the IVs we considered appeared to be valid for time-varying LABA. In CPRD (n = 490,499), which included approximately 18 times more subjects than Mondriaan (n = 27,459), IVs appeared more valid. LABA was not associated with MI; hazard ratios ranged from 0.86 to 1.18 for conventional analysis, and from 0.61 to 1.24 for the IV analyses with apparently valid IVs. CONCLUSIONS The validity of physician's prescribing preference as IV strongly depends on how this IV is defined and in which database it is applied. Hence, general recommendations cannot be made, other than to generate several plausible IVs, assess their validity, and report the estimate(s) from apparently valid IVs.
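One common construction of a PPP instrument from prescribing history, the "previous prescriptions" variant discussed above, can be sketched as follows (the records and the 10-prescription window are illustrative; the study compares several such definitions):

```python
from collections import defaultdict

# Illustrative prescribing records, ordered in time within physician:
# (physician_id, drug), where drug is "LABA" or "other".
records = [
    ("dr1", "LABA"), ("dr1", "LABA"), ("dr1", "other"), ("dr1", "LABA"),
    ("dr2", "other"), ("dr2", "other"), ("dr2", "LABA"), ("dr2", "other"),
]

history = defaultdict(list)
iv_values = []
for doc, drug in records:
    prev = history[doc][-10:]           # up to 10 previous prescriptions
    if prev:                            # IV: physician's prior LABA share
        iv_values.append(sum(d == "LABA" for d in prev) / len(prev))
    else:
        iv_values.append(None)          # no history yet -> IV undefined
    history[doc].append(drug)

print(iv_values)
```

Because the IV is built only from prescriptions issued before the index prescription, the patient's own treatment never enters their instrument; here, for example, dr1's fourth prescription gets an IV of 2/3 from the three prescriptions preceding it.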
Affiliation(s)
- Md Jamal Uddin
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands; Department of Statistics, Shahjalal University of Science and Technology, Sylhet, 3114, Bangladesh
- Rolf H H Groenwold
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands; Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Anthonius de Boer
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands
- Ana S M Afonso
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands
- Claudia Becker
- Basel Pharmacoepidemiology Unit, Division of Clinical Pharmacy and Epidemiology, Department of Pharmaceutical Sciences, University of Basel, Basel, Switzerland
- Svetlana V Belitser
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands
- Arno W Hoes
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Kit C B Roes
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Olaf H Klungel
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, University of Utrecht, Utrecht, The Netherlands; Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands

48
Enders D, Ohlmeier C, Garbe E. The Potential of High-Dimensional Propensity Scores in Health Services Research: An Exemplary Study on the Quality of Care for Elective Percutaneous Coronary Interventions. Health Serv Res 2017; 53:197-213. [PMID: 28093724 DOI: 10.1111/1475-6773.12653]
Abstract
OBJECTIVE Evaluating the potential of the high-dimensional propensity score (HDPS) to control for residual confounding in studies analyzing quality of care based on administrative health insurance data. DATA SOURCE Secondary data from 2004 to 2009 from three German statutory health insurance providers. STUDY DESIGN We conducted a retrospective cohort study in patients with elective percutaneous coronary interventions (PCIs) and compared the mortality risk between the in- and outpatient setting using Cox regression. Adjustment for predefined confounders was performed using conventional propensity score (PS) techniques. Further, an HDPS was calculated based on predefined and empirically selected confounders from the database. PRINCIPAL FINDINGS Conventional PS methods showed a decreased mortality risk for outpatient compared to inpatient PCIs, while trimming of patients with nonoverlap in the HDPS distribution and weighting resulted in a comparable risk. Most comorbidities were less prevalent in the HDPS-trimmed population compared to the original one. CONCLUSION The HDPS methodology may reduce residual confounding by rendering the studied cohort more comparable through restriction. However, results cannot be generalized for the entire study population. To provide unbiased results, full assessment of all unmeasured confounders from proxy information in the database would be necessary.
Affiliation(s)
- Dirk Enders
- Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
- Christoph Ohlmeier
- Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany; IGES Institut GmbH, Berlin, Germany
- Edeltraut Garbe
- Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany; Core Scientific Area "Health Sciences" at the University of Bremen, Bremen, Germany

49
Tannen R, Yu M. A new method to address unmeasured confounding of mortality in observational studies. Learn Health Syst 2017; 1:e10016. [PMID: 31245553 PMCID: PMC6516725 DOI: 10.1002/lrh2.10016]
Abstract
INTRODUCTION The prior event rate ratio (PERR) overcomes "unmeasured confounding" by adjusting study outcomes for all confounding (measured and unmeasured) by comparing exposed to unexposed cohort outcomes prior to study entry when neither group is receiving treatment. However, PERR cannot address "unmeasured confounding" of death since prior events cannot occur. METHODS This study's goal is to determine whether a new method, built on the concepts that led to the PERR development, reliably overcomes unmeasured confounding for death. In contrast to the PERR, which precedes study onset, the new mortality analysis uses exposed and unexposed cohorts, not taking the treatment medication, at the end of the study. It is called the post-treated event rate ratio (PTERR). RESULTS Theoretical and simulation studies were used to evaluate the likelihood for reliable results using of this new analytic strategy. Also, prior empiric studies, which used both the UK GPRD and THIN databases to examine and validate the PERR method, were used to ascertain the validity of the PTERR method. CONCLUSION In the aggregate the results provide strong evidence that the PTERR method to evaluate unmeasured confounding will be a valuable analytic tool.
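The basic PERR construction that PTERR builds on can be written down directly: a rate ratio from the prior (untreated) period captures confounding alone, and dividing it out of the post-treatment rate ratio leaves the treatment effect. A minimal sketch with assumed, illustrative event counts and person-time:

```python
# Illustrative event counts and person-years for the exposed and
# unexposed cohorts, before and after treatment start.
prior = {"exp": (30, 1000.0), "unexp": (20, 1000.0)}   # (events, person-years)
post  = {"exp": (45, 1000.0), "unexp": (25, 1000.0)}

def rate_ratio(period):
    (e1, t1), (e0, t0) = period["exp"], period["unexp"]
    return (e1 / t1) / (e0 / t0)

rr_prior = rate_ratio(prior)   # confounding-only contrast (no one treated yet)
rr_post = rate_ratio(post)     # treatment effect plus confounding
perr = rr_post / rr_prior      # prior event rate ratio adjustment
print(round(rr_prior, 2), round(rr_post, 2), round(perr, 2))  # prints: 1.5 1.8 1.2
```

The limitation motivating this paper is visible in the sketch: when the outcome is death, no prior-period events can exist, so PTERR instead forms the calibration contrast from exposed and unexposed cohorts no longer taking the treatment at the end of the study.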
Affiliation(s)
- Richard Tannen
- Clinical Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Menggang Yu
- Department of Biostatistics and Informatics, University of Wisconsin-Madison, Madison, Wisconsin

50
Lin NX, Henley WE. Prior event rate ratio adjustment for hidden confounding in observational studies of treatment effectiveness: a pairwise Cox likelihood approach. Stat Med 2016; 35:5149-5169. [PMID: 27477530 PMCID: PMC5111612 DOI: 10.1002/sim.7051]
Abstract
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi-experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR-ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time-varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders.
Affiliation(s)
- Nan Xuan Lin
- Department of Mathematics and Information Sciences, Northumbria University, Newcastle upon Tyne, NE2 1XE, U.K.; Health Statistics Group, Institute of Health Research, University of Exeter Medical School, Exeter, EX1 2LU, U.K.
- William Edward Henley
- Health Statistics Group, Institute of Health Research, University of Exeter Medical School, Exeter, EX1 2LU, U.K.