1
Fazzari MJ, Kim MY. Subgroup discovery in non-inferiority trials. Stat Med 2021;40:5174-5187. PMID: 34155676. DOI: 10.1002/sim.9118.
Abstract
Approaches and guidelines for performing subgroup analysis to assess heterogeneity of treatment effect in clinical trials have been the topic of numerous papers in the statistical and clinical literature, but have been discussed predominantly in the context of conventional superiority trials. Concerns about treatment heterogeneity are the same, if not greater, in non-inferiority (NI) trials, especially since overall similarity between the two treatment arms in a successful NI trial could be due to qualitative interactions, which are more likely when comparing two active therapies. Even in unsuccessful NI trials, subgroup analyses can yield important insights into the potential reasons for the failure to demonstrate non-inferiority of the experimental therapy. Recent NI trials have performed a priori subgroup analyses using standard statistical tests for interaction, but there is increasing interest in more flexible machine learning approaches for post-hoc subgroup discovery. The performance and practical application of such methods in NI trials have not, however, been systematically explored. We considered the Virtual Twin method for the NI setting, an algorithm for subgroup identification that combines random forests with classification and regression trees, and conducted extensive simulation studies to examine its performance under different NI trial conditions and to devise decision rules for selecting the final subgroups. We illustrate the utility of the method with data from an NI trial that compared two acupuncture treatments for chronic musculoskeletal pain.
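A minimal sketch of the general Virtual Twin idea described in the abstract (not the authors' NI-specific implementation): a random forest predicts each subject's outcome under both arms, and a shallow regression tree fit to the predicted differences proposes candidate subgroups. All data, covariate names, and tuning values below are invented for illustration; scikit-learn is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                  # hypothetical baseline covariates
trt = rng.integers(0, 2, size=n)             # 1 = experimental arm, 0 = active control
effect = np.where(X[:, 0] > 0, -1.0, 0.0)    # true effect: harm only when X0 > 0
y = X[:, 1] + effect * trt + rng.normal(scale=0.5, size=n)

# Step 1: random forest on covariates plus treatment, then "virtual twin"
# predictions for each subject under both arms.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.column_stack([X, trt]), y)
z1 = rf.predict(np.column_stack([X, np.ones(n)]))    # predicted outcome if treated
z0 = rf.predict(np.column_stack([X, np.zeros(n)]))   # predicted outcome if control
twin_diff = z1 - z0                                  # estimated individual effects

# Step 2: shallow regression tree on the estimated effects; its splits
# define candidate subgroups with differential treatment effect.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100, random_state=0)
tree.fit(X, twin_diff)
print("first split on covariate index:", tree.tree_.feature[0])
```

With a strong simulated interaction as here, the tree's first split recovers the true effect modifier; the paper's contribution concerns how to tune and validate this pipeline under NI trial conditions.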
Affiliation(s)
- Melissa J Fazzari
- Division of Biostatistics, Department of Epidemiology and Population, Albert Einstein College of Medicine, Bronx, New York, USA
- Mimi Y Kim
- Division of Biostatistics, Department of Epidemiology and Population, Albert Einstein College of Medicine, Bronx, New York, USA
2
Sidi Y, Harel O. Incomplete data analysis of non-inferiority clinical trials: Difference between binomial proportions case. Contemp Clin Trials Commun 2020;18:100567. PMID: 32426549. PMCID: PMC7226649. DOI: 10.1016/j.conctc.2020.100567.
Abstract
BACKGROUND Incomplete data analysis continues to be a major issue for non-inferiority clinical trials. Given the steadily increasing use of the non-inferiority design, this topic deserves immediate attention. METHODS We evaluated the performance of various strategies, including complete case analysis and several imputation techniques, for handling incomplete non-inferiority clinical trials when the outcome of interest is the difference between binomial proportions. Non-inferiority of a new treatment was determined using a fixed-margin approach with the 95-95% confidence interval method. The methods used to construct the confidence intervals were also compared and included the Wald, Farrington-Manning, and Newcombe methods. RESULTS We found that worst-case and best-case imputation should not be used for the analysis of incomplete data in non-inferiority trials, since these methods seriously inflate type-I error rates and produce biased estimates. In addition, we report conditions under which complete case analysis is an acceptable strategy under a missing-at-random (MAR) mechanism. Importantly, we show how two-stage multiple imputation can be successfully applied to incomplete data that follow missing-not-at-random (MNAR) patterns, resulting in controlled type-I error rates and unbiased estimates. CONCLUSION This thorough simulation study provides a road map for the analysis of incomplete data in non-inferiority clinical trials under different types of missingness. We believe the results reported here can serve practitioners who encounter missing data in their non-inferiority clinical trials.
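The fixed-margin decision rule evaluated here can be illustrated with the Wald interval, one of the three CI methods the authors compare. The counts and margin below are hypothetical, chosen only to show the mechanics.

```python
import math

def wald_ci_diff(x_new, n_new, x_ctl, n_ctl, z=1.959964):
    """Wald 95% CI for the difference of two binomial proportions."""
    p1, p0 = x_new / n_new, x_ctl / n_ctl
    se = math.sqrt(p1 * (1 - p1) / n_new + p0 * (1 - p0) / n_ctl)
    d = p1 - p0
    return d - z * se, d + z * se

# Fixed-margin decision rule: with success as the outcome, non-inferiority is
# declared when the lower CI bound for (p_new - p_ctl) exceeds -margin.
margin = 0.10                       # hypothetical pre-specified NI margin
lo, hi = wald_ci_diff(162, 200, 165, 200)
non_inferior = lo > -margin
print(round(lo, 4), round(hi, 4), non_inferior)
```

Missing outcomes enter this calculation through the counts: complete case analysis simply drops them, while worst-case imputation counts every missing subject in the new arm as a failure, which is what drives the inflated type-I error the authors report.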
Affiliation(s)
- Yulia Sidi
- Department of Statistics, University of Connecticut, USA
- Ofer Harel
- Department of Statistics, University of Connecticut, USA
3
Bell ML, Rabe BA. The mixed model for repeated measures for cluster randomized trials: a simulation study investigating bias and type I error with missing continuous data. Trials 2020;21:148. PMID: 32033617. PMCID: PMC7006144. DOI: 10.1186/s13063-020-4114-9.
Abstract
BACKGROUND Cluster randomized trials (CRTs) are a design used to test interventions where individual randomization is not appropriate. The mixed model for repeated measures (MMRM) is a popular choice for individually randomized trials with longitudinal continuous outcomes; its appeal lies in its avoidance of model misspecification and its unbiasedness when data are missing completely at random or missing at random. METHODS We extended the MMRM to cluster randomized trials by adding a random intercept for cluster and undertook a simulation experiment to investigate its statistical properties when data are missing at random. We simulated cluster randomized trial data in which a continuous outcome was measured at baseline and at three post-intervention time points, varying the number of clusters, the cluster size, the intra-cluster correlation, the missingness, and the data-generation models. We demonstrate the MMRM-CRT with an example from a cluster randomized trial of cardiovascular disease prevention among patients with diabetes. RESULTS When simulating a treatment effect at the final time point, we found that estimates were unbiased both when data were complete and when data were missing at random. Variance components were also largely unbiased. When simulating under the null, type I error was largely nominal, although for a few specific cases it was as high as 0.081. CONCLUSIONS Although it has been asserted that this model is inappropriate when there are more than two repeated measures on subjects, we found evidence to the contrary. We conclude that the MMRM-CRT is a good analytic choice for cluster randomized trials with a continuous outcome measured longitudinally. TRIAL REGISTRATION ClinicalTrials.gov, ID: NCT02804698.
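The data-generation model described (cluster random intercepts plus residual error, yielding a specified intra-cluster correlation) can be sketched as follows. This simplifies the paper's setup to a single post-intervention time point and recovers the ICC with the classical one-way ANOVA estimator; cluster counts and variance components are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 50, 40                      # hypothetical: 50 clusters of 40 subjects each
sigma_b2, sigma_e2 = 0.1, 0.9      # between-cluster and residual variances
icc = sigma_b2 / (sigma_b2 + sigma_e2)   # intra-cluster correlation = 0.1

# simulate one continuous measurement per subject:
# cluster random intercept + subject-level residual
b = rng.normal(0.0, np.sqrt(sigma_b2), size=k)
y = b[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), size=(k, m))

# one-way ANOVA estimator of the ICC from the mean squares
cluster_means = y.mean(axis=1)
msb = m * cluster_means.var(ddof=1)      # between-cluster mean square
msw = y.var(axis=1, ddof=1).mean()       # within-cluster mean square
icc_hat = (msb - msw) / (msb + (m - 1) * msw)
print(round(icc, 3), round(icc_hat, 3))
```

In the paper's MMRM-CRT the cluster intercept is one random effect in a mixed model fit to all repeated measures; the sketch above only illustrates how the simulated ICC arises from the two variance components.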
Affiliation(s)
- Melanie L Bell
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, 1295 N Martin Ave, Tucson, AZ, 85724, USA
- Brooke A Rabe
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, 1295 N Martin Ave, Tucson, AZ, 85724, USA
4
Bell ML, Floden L, Rabe BA, Hudgens S, Dhillon HM, Bray VJ, Vardy JL. Analytical approaches and estimands to take account of missing patient-reported data in longitudinal studies. Patient Relat Outcome Meas 2019;10:129-140. PMID: 31114411. PMCID: PMC6489631. DOI: 10.2147/prom.s178963.
Abstract
Patient-reported outcomes, such as quality of life, functioning, and symptoms, are used widely in therapeutic and behavioral trials and are increasingly used in drug development to represent the patient voice. Missing patient-reported data are common and can undermine the validity of reported results by reducing power, biasing estimates, and ultimately reducing confidence in the findings. In this paper, we review statistically principled approaches for handling missing patient-reported outcome data and introduce the idea of estimands in the context of behavioral trials. Specifically, we outline a plan that considers missing data at each stage of research: design, data collection, analysis, and reporting. The design stage includes processes to prevent missing data, define the estimand, and specify the primary and sensitivity analyses. The analytic strategy for missing data depends on the estimand. Reviewed approaches include maximum-likelihood-based models, multiple imputation, generalized estimating equations, and responder analysis. We outline sensitivity analyses for assessing the robustness of the primary analysis results when data are missing, and we also describe ad hoc methods, including approaches to avoid. Last, we demonstrate the methods using data from a behavioral intervention in which the primary outcome was self-reported cognition.
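As a rough illustration of one reviewed approach, the sketch below applies stochastic regression imputation with Rubin's-rules pooling to a hypothetical PRO dataset with MAR dropout. It is a simplified (improper) form of multiple imputation, since the regression parameters are not redrawn in each imputation; all variable names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m_imputations = 500, 20

# hypothetical PRO data: follow-up score depends on baseline; dropout is MAR
# because the missingness probability depends only on the observed baseline
baseline = rng.normal(50.0, 10.0, size=n)
followup = 5.0 + 0.8 * baseline + rng.normal(0.0, 5.0, size=n)
p_miss = 1.0 / (1.0 + np.exp(-(baseline - 50.0) / 10.0))
followup_obs = np.where(rng.uniform(size=n) < 0.3 * p_miss, np.nan, followup)

# fit the imputation model on the observed cases
obs = ~np.isnan(followup_obs)
beta1, beta0 = np.polyfit(baseline[obs], followup_obs[obs], 1)
resid_sd = np.std(followup_obs[obs] - (beta0 + beta1 * baseline[obs]), ddof=2)

estimates, variances = [], []
for _ in range(m_imputations):
    filled = followup_obs.copy()
    k = int((~obs).sum())
    # stochastic imputation: predicted mean plus drawn residual noise
    filled[~obs] = beta0 + beta1 * baseline[~obs] + rng.normal(0.0, resid_sd, size=k)
    estimates.append(filled.mean())            # completed-data estimate of the mean
    variances.append(filled.var(ddof=1) / n)   # its completed-data variance

# Rubin's rules: pool the m completed-data estimates
qbar = np.mean(estimates)                      # pooled point estimate
ubar = np.mean(variances)                      # within-imputation variance
b = np.var(estimates, ddof=1)                  # between-imputation variance
total_var = ubar + (1.0 + 1.0 / m_imputations) * b
print(round(qbar, 2), round(float(np.sqrt(total_var)), 3))
```

The pooled variance exceeds the average within-imputation variance, reflecting the extra uncertainty from the missing values; a complete-case mean would be biased here because dropout is higher at higher baseline scores.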
Affiliation(s)
- Melanie L Bell
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ 85724, USA; Psycho-Oncology Co-operative Research Group, School of Psychology, University of Sydney, Sydney, NSW, Australia
- Lysbeth Floden
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ 85724, USA; Clinical Outcomes Solutions, Tucson, AZ 85718, USA
- Brooke A Rabe
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ 85724, USA
- Haryana M Dhillon
- Psycho-Oncology Co-operative Research Group, School of Psychology, University of Sydney, Sydney, NSW, Australia; Centre for Medical Psychology & Evidence-Based Decision-Making, School of Psychology, University of Sydney, Sydney, NSW, Australia
- Victoria J Bray
- Department of Medical Oncology, Liverpool Hospital and University of Sydney, Sydney, NSW, Australia
- Janette L Vardy
- Concord Cancer Centre and Sydney Medical School, University of Sydney, Sydney, NSW, Australia