1
Alexandria SJ, Hudgens MG, Aiello AE. Assessing intervention effects in a randomized trial within a social network. Biometrics 2023; 79:1409-1419. PMID: 34825368; PMCID: PMC9133268; DOI: 10.1111/biom.13606.
Abstract
Studies of social networks provide unique opportunities to assess the causal effects of interventions that may impact more of the population than just those intervened on directly. Such effects are sometimes called peer or spillover effects, and may exist in the presence of interference, that is, when one individual's treatment affects another individual's outcome. Randomization-based inference (RI) methods provide a theoretical basis for causal inference in randomized studies, even in the presence of interference. In this article, we consider RI of the intervention effect in the eX-FLU trial, a randomized study designed to assess the effect of a social distancing intervention on influenza-like-illness transmission in a connected network of college students. The approach considered enables inference about the effect of the social distancing intervention on the per-contact probability of influenza-like-illness transmission in the observed network. The methods allow for interference between connected individuals and for heterogeneous treatment effects. The proposed methods are evaluated empirically via simulation studies, and then applied to data from the eX-FLU trial.
Affiliation(s)
- Shaina J. Alexandria
- Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, U.S.A.
- Michael G. Hudgens
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
- Allison E. Aiello
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A.
2
Heckman JJ, Karapakula G. Using a Satisficing Model of Experimenter Decision-Making to Guide Finite-Sample Inference for Compromised Experiments. The Econometrics Journal 2021; 24:C1-C39. PMID: 34594155; PMCID: PMC8478285; DOI: 10.1093/ectj/utab009.
Abstract
This paper presents a simple decision-theoretic economic approach for analyzing social experiments with compromised random assignment protocols that are only partially documented. We model administratively constrained experimenters who satisfice in seeking covariate balance. We develop design-based small-sample hypothesis tests that use worst-case (least favorable) randomization null distributions. Our approach accommodates a variety of compromised experiments, including imperfectly documented re-randomization designs. To make our analysis concrete, we focus much of our discussion on the influential Perry Preschool Project. We reexamine previous estimates of program effectiveness using our methods. The choice of how to model reassignment vitally affects inference.
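The worst-case logic described above can be sketched in a few lines: when the assignment protocol is only partially documented, compute the randomization p-value under every candidate null distribution and report the maximum. A minimal illustration of that idea (the function name and the two-sided statistic are assumptions for the sketch, not the paper's actual tests):

```python
def worst_case_p(observed_stat, candidate_null_dists):
    """Least-favorable randomization p-value over a set of candidate null
    distributions of the test statistic, each reflecting one model of the
    partially documented assignment protocol."""
    def p_under(dist):
        # two-sided p-value: share of null draws at least as extreme
        return sum(abs(s) >= abs(observed_stat) for s in dist) / len(dist)
    # reporting the worst case keeps the test valid under every candidate
    return max(p_under(d) for d in candidate_null_dists)
```

Rejecting only when this maximum falls below the nominal level preserves validity no matter which candidate protocol actually generated the data.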
Affiliation(s)
- James J Heckman
- Center for the Economics of Human Development, 1126 East 59th Street, Chicago, IL 60637
3
Affiliation(s)
- Jasjeet S. Sekhon
- Department of Statistics & Data Science and Department of Political Science, Yale University, New Haven, CT
- Yotam Shem-Tov
- Department of Economics, University of California at Los Angeles, Los Angeles, CA
4
Affiliation(s)
- Jason Wu
- Department of Statistics, University of California, Berkeley, CA
- Peng Ding
- Department of Statistics, University of California, Berkeley, CA
5
Thompson J, Davey C, Hayes R, Hargreaves J, Fielding K. swpermute: Permutation tests for Stepped-Wedge Cluster-Randomised Trials. The Stata Journal 2019; 19:803-819. PMID: 32565746; PMCID: PMC7305031; DOI: 10.1177/1536867x19893624.
Abstract
Permutation tests are useful in stepped-wedge trials to provide robust statistical tests of intervention-effect estimates. However, the Stata command permute does not produce valid tests in this setting because individual observations are not exchangeable. We introduce the swpermute command that permutes clusters to sequences to maintain exchangeability. The command provides additional functionality to aid users in performing analyses of stepped-wedge trials. In particular, we include the option "withinperiod" that performs the specified analysis separately in each period of the study with the resulting period-specific intervention-effect estimates combined as a weighted average. We also include functionality to test non-zero null hypotheses to aid the construction of confidence intervals. Examples of the application of swpermute are given using data from a trial testing the impact of a new tuberculosis diagnostic test on bacterial confirmation of a tuberculosis diagnosis.
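The command's central point, permuting clusters to sequences rather than individual observations, carries over to any language. A hypothetical Python analogue (the data layout and the difference-in-cluster-means statistic are illustrative choices, not swpermute's interface):

```python
import random

def cluster_permutation_test(outcomes, assign, n_perm=1000, seed=0):
    """Permutation test that re-randomizes whole clusters, since clusters
    (not individuals) are the exchangeable units in a cluster-randomized
    design. `outcomes`: cluster_id -> list of outcomes; `assign`:
    cluster_id -> 0/1 intervention indicator."""
    def diff_means(a):
        means = {c: sum(y) / len(y) for c, y in outcomes.items()}
        t = [means[c] for c in a if a[c] == 1]
        u = [means[c] for c in a if a[c] == 0]
        return sum(t) / len(t) - sum(u) / len(u)

    rng = random.Random(seed)
    observed = diff_means(assign)
    ids, labels = list(assign), [assign[c] for c in assign]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)  # permute cluster labels, keep outcomes fixed
        if abs(diff_means(dict(zip(ids, labels)))) >= abs(observed):
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0
```

With only a handful of clusters the attainable p-values are coarse, which is exactly why cluster-level (rather than individual-level) exchangeability matters for validity.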
6
Lu J. Improved Neymanian analysis for 2^K factorial designs with binary outcomes. Statistica Neerlandica 2019. DOI: 10.1111/stan.12186.
Affiliation(s)
- Jiannan Lu
- Analysis and Experimentation, Microsoft Corporation, Redmond, Washington
7
Rigdon J, Baiocchi M, Basu S. Preventing false discovery of heterogeneous treatment effect subgroups in randomized trials. Trials 2018; 19:382. PMID: 30012181; PMCID: PMC6048878; DOI: 10.1186/s13063-018-2774-5.
Abstract
Background Heterogeneous treatment effects (HTEs), or systematic differences in treatment effectiveness among participants with different observable features, may be important when applying trial results to clinical practice. Current methods suffer from a potential for false detection of HTEs due to imbalances in covariates between candidate subgroups. Methods We introduce a new method, matching plus classification and regression trees (mCART), that yields balance in covariates in identified HTE subgroups. We compared mCART to a classical method (logistic regression [LR] with backwards covariate selection using the Akaike information criterion) and two machine-learning approaches increasingly applied to HTE detection (random forest [RF] and gradient RF) in simulations with a binary outcome with known HTE subgroups. We considered an N = 200 phase II oncology trial where there were either no HTEs (1A) or two HTE subgroups (1B) and an N = 6000 phase III cardiovascular disease trial where there were either no HTEs (2A) or four HTE subgroups (2B). Additionally, we considered an N = 6000 phase III cardiovascular disease trial where there was no average treatment effect but there were four HTE subgroups (2C). Results In simulations 1A and 2A (no HTEs), mCART did not identify any HTE subgroups, whereas LR found 2 and 448, RF 5 and 2, and gradient RF 5 and 24, respectively (all false positives). In simulation 1B, mCART failed to identify the two true HTE subgroups, whereas LR found 4, RF 6, and gradient RF 10 (half or more of which were false positives). In simulations 2B and 2C, mCART captured the four true HTE subgroups, whereas the other methods found only false positives. All HTE subgroups identified by mCART had acceptable treated vs. control covariate balance, with absolute standardized differences less than 0.2, whereas the absolute standardized differences for the other methods typically exceeded 0.2. The imbalance in covariates in identified subgroups for LR, RF, and gradient RF indicates that the false HTE detection may have been due to confounding. Conclusions Covariate imbalances may be producing false positives in subgroup analyses. mCART could be a useful tool to help prevent the false discovery of HTE subgroups in secondary analyses of randomized trial data.
Affiliation(s)
- Joseph Rigdon
- Quantitative Sciences Unit, Stanford University School of Medicine, 1070 Arastradero Road #3C3104, MC 5559, Palo Alto, CA 94304, USA
- Michael Baiocchi
- Stanford Prevention Research Center, Stanford University School of Medicine, Medical School Office Building, Room 318, 1265 Welch Road, MC 5411, Stanford, CA 94305, USA
- Sanjay Basu
- Departments of Medicine and of Health Research and Policy, Center for Primary Care and Outcomes Research and Center for Population Health Sciences, Stanford University School of Medicine, 1070 Arastradero Road, Office 282, MC 5560, Palo Alto, CA 94304, USA
8
Ding P, Miratrix LW. Model-free causal inference of binary experimental data. Scand Stat Theory Appl 2018. DOI: 10.1111/sjos.12343.
Affiliation(s)
- Peng Ding
- Department of Statistics, University of California, Berkeley, CA, USA
- Luke W. Miratrix
- Graduate School of Education and Department of Statistics, Harvard University, Cambridge, MA, USA
9
Lu J. Sharpening randomization-based causal inference for 2^2 factorial designs with binary outcomes. Stat Methods Med Res 2017; 28:1064-1078. PMID: 29205103; DOI: 10.1177/0962280217745720.
Abstract
In medical research, a scenario often entertained is a randomized controlled 2^2 factorial design with a binary outcome. By utilizing the concept of potential outcomes, Dasgupta et al. proposed a randomization-based causal inference framework, allowing flexible and simultaneous estimation of, and inference about, the factorial effects. However, a fundamental challenge facing Dasgupta et al.'s methodology is that the sampling variance of the randomization-based factorial effect estimator is unidentifiable, causing the corresponding classic "Neymanian" variance estimator to over-estimate. To address this issue, for randomized controlled 2^2 factorial designs with binary outcomes, we derive the sharp lower bound of the sampling variance of the factorial effect estimator, which leads to a new variance estimator that sharpens finite-population Neymanian causal inference. We demonstrate the advantages of the new variance estimator through a series of simulation studies, and apply our newly proposed methodology to two real-life datasets from randomized clinical trials, where we gain new insights.
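For orientation, the estimator and the conservative variance estimate that this line of work sharpens look as follows in the simplest two-arm case. This generic sketch shows only the classic Neymanian quantities; it is not Lu's sharpened bound, and the function name is invented for the example:

```python
def neyman_two_arm(y_treat, y_control):
    """Difference-in-means ATE estimate with the classic conservative
    Neymanian variance estimate s_t^2/n_t + s_c^2/n_c. The true
    randomization variance subtracts a term involving both potential
    outcomes of the same unit, which is unidentifiable, so this
    estimator over-estimates on average."""
    def mean(v):
        return sum(v) / len(v)
    def samp_var(v):  # sample variance with n - 1 denominator
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    ate_hat = mean(y_treat) - mean(y_control)
    var_hat = samp_var(y_treat) / len(y_treat) + samp_var(y_control) / len(y_control)
    return ate_hat, var_hat
```

A sharper lower bound on the unidentifiable term, as in the paper, translates directly into a smaller admissible `var_hat` and hence narrower confidence intervals.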
Affiliation(s)
- Jiannan Lu
- Analysis and Experimentation, Microsoft Corporation, Redmond, USA
11
Ding P. Rejoinder: A Paradox from Randomization-Based Causal Inference. Stat Sci 2017. DOI: 10.1214/17-sts571rej.
13
Halloran ME, Hudgens MG. Estimating population effects of vaccination using large, routinely collected data. Stat Med 2017; 37:294-301. PMID: 28722190; DOI: 10.1002/sim.7392.
Abstract
Vaccination in populations can have several kinds of effects. Establishing that vaccination produces population-level effects beyond the direct effects in the vaccinated individuals can have important consequences for public health policy. Formal methods have been developed for study designs and analysis that can estimate the different effects of vaccination. However, implementing field studies to evaluate the different effects of vaccination can be expensive, of limited generalizability, or unethical. It would be advantageous to use routinely collected data to estimate the different effects of vaccination. We consider how different types of data are needed to estimate different effects of vaccination. The examples include rotavirus vaccination of young children, influenza vaccination of elderly adults, and a targeted influenza vaccination campaign in schools. Directions for future research are discussed.
Affiliation(s)
- M Elizabeth Halloran
- Center for Inference and Dynamics of Infectious Diseases, Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, WA, U.S.A.
- Department of Biostatistics, School of Public Health, University of Washington, Seattle, WA, U.S.A.
- Michael G Hudgens
- Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, U.S.A.
14
Chiba Y. Sharp nonparametric bounds and randomization inference for treatment effects on an ordinal outcome. Stat Med 2017; 36:3966-3975. PMID: 28703430; DOI: 10.1002/sim.7400.
Abstract
In clinical research, investigators are interested in inferring the average causal effect of a treatment. However, the causal parameter used to derive the average causal effect is not well defined for ordinal outcomes. Although some definitions have been proposed, they are limited in that they do not reduce to the well-defined causal risk for a binary outcome, which is the simplest ordinal outcome. In this paper, we propose a causal parameter for an ordinal outcome, defined as the proportion of subjects whose potential outcome under one treatment condition would be no smaller than that under the other condition. For a binary outcome, this proportion is identical to the causal risk. Unfortunately, the proposed causal parameter cannot be identified, even under randomization. Therefore, we present a numerical method to calculate the sharp nonparametric bounds within a sample, reflecting the impact of confounding. Under the additional assumption of independent potential outcomes, the causal parameter can be identified under randomization. We then present exact tests and the associated confidence intervals for the relative treatment effect using the randomization-based approach, extending existing methods for a binary outcome. Our methodologies are illustrated using data from an emetic-prevention clinical trial.
Affiliation(s)
- Yasutaka Chiba
- Clinical Research Center, Kinki University Hospital, Osaka, Japan
15
Chiba Y. Stratified exact tests for the weak causal null hypothesis in randomized trials with a binary outcome. Biom J 2017; 59:986-997. PMID: 28605047; DOI: 10.1002/bimj.201600085.
Abstract
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test addresses the sharp causal null hypothesis (i.e., the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e., the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is non-zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6:244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large-sample theory and does not require any assumption. In this paper, we extend the new test to a version applicable to stratified analysis. The stratified exact test we propose is general in nature and can be used in several approaches to estimating treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56:129-140) tests the sharp causal null hypothesis; it applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. The proposed stratified exact test can be straightforwardly extended to the analysis of noninferiority trials and to the construction of the associated confidence interval.
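Operationally, the sharp causal null fixes every subject's outcome regardless of assignment, so the randomization distribution of any statistic can be enumerated exactly. A small unstratified sketch of that enumeration, with the risk difference as the statistic (illustrative only; not Chiba's stratified test):

```python
from itertools import combinations

def sharp_null_exact_p(outcomes, treated):
    """Exact p-value for the sharp null of no effect for any subject.
    Under the sharp null the outcome vector is fixed; only the treated
    set varies, uniformly over all assignments of the same size."""
    n, m = len(outcomes), len(treated)
    def risk_diff(tset):
        t = sum(outcomes[i] for i in tset) / m
        c = sum(outcomes[i] for i in range(n) if i not in tset) / (n - m)
        return t - c
    obs = risk_diff(set(treated))
    stats = [risk_diff(set(s)) for s in combinations(range(n), m)]
    # two-sided: share of assignments at least as extreme as observed
    return sum(abs(s) >= abs(obs) - 1e-12 for s in stats) / len(stats)
```

Because the null distribution is enumerated rather than approximated, no large-sample theory is involved, which is the sense in which such tests are "exact."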
Affiliation(s)
- Yasutaka Chiba
- Clinical Research Center, Kinki University Hospital, 377-2 Ohno-higashi, Osakasayama, Osaka, Japan
16
Fogarty CB, Shi P, Mikkelsen ME, Small DS. Randomization Inference and Sensitivity Analysis for Composite Null Hypotheses With Binary Outcomes in Matched Observational Studies. J Am Stat Assoc 2017. DOI: 10.1080/01621459.2016.1138865.
Affiliation(s)
- Colin B. Fogarty
- MIT Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA
- Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA
- Pixu Shi
- Center for Clinical Epidemiology and Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Mark E. Mikkelsen
- Division of Pulmonary and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Dylan S. Small
- Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA
17
Rigdon J, Loh WW, Hudgens MG. Response to comment on 'Randomization inference for treatment effects on a binary outcome'. Stat Med 2017; 36:876-880. PMID: 28093845; PMCID: PMC5358813; DOI: 10.1002/sim.7192.
Abstract
We thank Professor Yasutaka Chiba [1] for commenting on Rigdon and Hudgens (RH) [2]. Chiba [1] described a certain exact confidence interval reported in RH as "somewhat unnatural." Chiba also presented an alternative approach to constructing confidence intervals [3]. In this response, we (i) provide a simple explanation of why the confidence interval in RH appeared "unnatural," and (ii) explain the relationship between the RH [2] and Chiba [3] confidence intervals. Essentially, the two approaches are equivalent, except that RH invert one two-sided test whereas Chiba inverts two one-sided tests. We present a more computationally efficient method (RLH) for computing the RH intervals based on Chiba's principal stratification formulation of the problem. We also propose a third method based on Blaker [4], which inverts a single two-sided test but forms a confidence interval that is at least as narrow as that from inverting two one-sided tests. Simulation results show that the RLH intervals tend to be as narrow as or narrower than the Chiba and Blaker intervals on average.
Affiliation(s)
- Joseph Rigdon
- Quantitative Sciences Unit, Stanford University, Palo Alto, CA, U.S.A.
- Wen Wei Loh
- Department of Biostatistics, University of North Carolina, Chapel Hill, NC, U.S.A.
- Michael G Hudgens
- Department of Biostatistics, University of North Carolina, Chapel Hill, NC, U.S.A.
18
Berger VW. An empirical demonstration of the need for exact tests. Journal of Modern Applied Statistical Methods 2017. DOI: 10.22237/jmasm/1493596920.
19
Fogarty CB, Mikkelsen ME, Gaieski DF, Small DS. Discrete Optimization for Interpretable Study Populations and Randomization Inference in an Observational Study of Severe Sepsis Mortality. J Am Stat Assoc 2016. DOI: 10.1080/01621459.2015.1112802.
20
Chiba Y. A note on exact confidence interval for causal effects on a binary outcome in randomized trials. Stat Med 2016; 35:1739-1741. PMID: 27059605; DOI: 10.1002/sim.6826.
Affiliation(s)
- Yasutaka Chiba
- Clinical Research Center, Kinki University Hospital, 377-2, Ohno-Higashi, Osaka-Sayama, Osaka, 589-8511, Japan
21
Li X, Ding P. Exact confidence intervals for the average causal effect on a binary outcome. Stat Med 2016; 35:957-960. PMID: 26833798; DOI: 10.1002/sim.6764.
Abstract
Based on the physical randomization of completely randomized experiments, in a recent article in Statistics in Medicine, Rigdon and Hudgens propose two approaches to obtaining exact confidence intervals for the average causal effect on a binary outcome. They construct the first confidence interval by combining, with the Bonferroni adjustment, the prediction sets for treatment effects in the treatment and control groups, and the second by inverting a series of randomization tests. With sample size n, their second approach requires performing O(n^4) randomization tests. We demonstrate that the physical randomization also justifies other ways of constructing exact confidence intervals that are more computationally efficient. By exploiting recent advances in hypergeometric confidence intervals and the stochastic-order information of randomization tests, we propose approaches that either do not need to invoke Monte Carlo methods or require performing at most O(n^2) randomization tests. We provide technical details and R code in the Supporting Information.
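The test-inversion idea can be made concrete by brute force on a toy trial: enumerate every potential-outcome table consistent with the observed data, run the randomization test implied by each table, and keep the average causal effects (ACEs) of the tables that survive. This exponential-time sketch only illustrates the logic that the O(n^2) approaches make efficient; it is not either paper's algorithm, and all names are invented for the example:

```python
from itertools import combinations, product

def exact_ci_binary_ace(y, z, alpha=0.05):
    """Exact confidence set for the ACE on a binary outcome by exhaustive
    test inversion: a table's ACE enters the set unless the randomization
    test under that table rejects at level alpha. O(2^n * C(n, m)) time,
    so toy sample sizes only."""
    n, m = len(y), sum(z)
    idx = range(n)
    accepted = set()
    for miss in product((0, 1), repeat=n):  # impute unobserved outcomes
        y1 = [y[i] if z[i] else miss[i] for i in idx]
        y0 = [miss[i] if z[i] else y[i] for i in idx]
        ace = (sum(y1) - sum(y0)) / n
        def dim(tset):  # difference in means under this table
            return (sum(y1[i] for i in tset) / m
                    - sum(y0[i] for i in idx if i not in tset) / (n - m))
        obs = dim({i for i in idx if z[i]})
        stats = [dim(set(s)) for s in combinations(idx, m)]
        # two-sided randomization p-value, centered at the table's ACE
        p = sum(abs(s - ace) >= abs(obs - ace) - 1e-12 for s in stats) / len(stats)
        if p > alpha:
            accepted.add(ace)
    return min(accepted), max(accepted)
```

Replacing the exhaustive enumeration over tables and assignments with hypergeometric bounds and stochastic-order arguments is precisely where the computational savings in the paper come from.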
Affiliation(s)
- Xinran Li
- Department of Statistics, Harvard University, Cambridge, MA 02138, U.S.A.
- Peng Ding
- Department of Statistics, University of California, Berkeley, CA 94720-3860, U.S.A.