1
Hu M, Cappelleri JC, Lan KKG. Applying the law of iterated logarithm to control type I error in cumulative meta-analysis of binary outcomes. Clin Trials 2007; 4:329-40. [PMID: 17848494] [DOI: 10.1177/1740774507081219]
Abstract
Background: Cumulative meta-analysis typically involves performing an updated meta-analysis every time new trials are added to a series of similar trials, which by definition involves multiple inspections. Neither the commonly used random-effects model nor the conventional group sequential method can control the type I error in many practical situations. In our previous research (Lan KKG, Hu M-X, Cappelleri JC. Applying the law of iterated logarithm to cumulative meta-analysis of a continuous endpoint. Statistica Sinica 2003; 13: 1135-45), we proposed an approach to this problem for the continuous case based on the law of the iterated logarithm (LIL).
Purpose: This study extends and generalizes our previous research to binary outcomes. Although it is based on the same LIL principle, the discrete case is considerably more complex, and the results from the continuous case do not carry over to the binary case. The simulation study presented here is also more extensive.
Methods: The LIL-based method 'penalizes' the Z-value of the test statistic to account for multiple tests and for the estimation of heterogeneity in treatment effects across studies. It involves an adjustment factor, which is directly related to the control of the type I error and is determined through extensive simulations under various conditions.
Results: With an adjustment factor of 2, the LIL-based test statistic controls the overall type I error when the odds ratio or relative risk is the parameter of interest. For the risk difference, the adjustment factor can be reduced to 1.5. More inspections may require a larger adjustment factor, but the required adjustment factor stabilizes after 25 inspections.
Limitations: Ideally, the adjustment factor would be obtained theoretically through a statistical model. Unfortunately, real-life data are too complex, and we had to solve the problem through simulation. However, for a large number of inspections the adjustment factor has a limited effect, and the type I error is controlled mainly by the LIL.
Conclusions: The LIL method controls the overall type I error for a very broad range of practical situations with a binary outcome, and the LIL works properly in controlling the type I error rate as the number of inspections becomes large.
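The inflation this abstract addresses, and the effect of an LIL-style penalty, can be illustrated with a small Monte Carlo sketch. It is not the authors' actual statistic: the penalty form `Z / sqrt(lam * ln ln I)` and the function name `simulate_type1` are assumptions for illustration only, with per-trial log-odds-ratio estimates drawn from a normal approximation under the null.

```python
import math
import random

def simulate_type1(n_sims=2000, n_trials=25, lam=2.0, alpha_z=1.96, seed=1):
    """Monte Carlo check of familywise type I error for cumulative
    meta-analysis under the null (true log-odds-ratio = 0).

    Each new trial contributes an estimated log-OR drawn from N(0, v);
    after each trial the estimates are pooled by inverse-variance
    weighting and tested. 'naive' rejects whenever |Z| > alpha_z at any
    inspection; 'lil' first divides Z by sqrt(lam * ln ln I), an
    LIL-style penalty with adjustment factor lam (illustrative only;
    the exact statistic in Hu et al. may differ)."""
    rng = random.Random(seed)
    naive_hits = lil_hits = 0
    for _ in range(n_sims):
        sum_w = sum_wy = 0.0
        naive_rej = lil_rej = False
        for _ in range(n_trials):
            v = rng.uniform(0.02, 0.2)        # per-trial variance of log-OR
            y = rng.gauss(0.0, math.sqrt(v))  # trial estimate under the null
            w = 1.0 / v                        # information contributed
            sum_w += w
            sum_wy += w * y
            z = sum_wy / math.sqrt(sum_w)      # cumulative pooled Z
            penalty = math.sqrt(lam * math.log(math.log(sum_w)))
            naive_rej = naive_rej or abs(z) > alpha_z
            lil_rej = lil_rej or abs(z) / penalty > alpha_z
        naive_hits += naive_rej
        lil_hits += lil_rej
    return naive_hits / n_sims, lil_hits / n_sims
```

Running this with 25 inspections shows the naive familywise error well above the nominal 5%, while the penalized statistic rejects far less often, consistent with the abstract's account.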
Affiliation(s)
- Mingxiu Hu
- Millennium Pharmaceuticals, 35 Landsdowne Street, Cambridge, MA 02139, USA.
2
Senn S. A note regarding meta-analysis of sequential trials with stopping for efficacy. Pharm Stat 2014; 13:371-5. [PMID: 25296692] [DOI: 10.1002/pst.1639]
Abstract
It is shown that fixed-effect meta-analyses of naïve treatment estimates from sequentially run trials, with the possibility of stopping for efficacy based on a single interim look, are unbiased (or at the very least consistent, depending on the point of view) provided that the trials are weighted by the information they provide. A simple proof of this is given. An argument is given suggesting that this also applies in the case of multiple looks. The implications of this are discussed.
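The weighting at issue in this note is standard inverse-variance (information) weighting for a fixed-effect meta-analysis. A minimal sketch, with hypothetical trial data and a function name (`fixed_effect_meta`) of my own:

```python
import math

def fixed_effect_meta(estimates, variances):
    """Fixed-effect meta-analysis: pool trial estimates weighted by
    their information (inverse variance), the weighting under which the
    note argues the naive pooled estimate is unbiased/consistent."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))  # standard error of pooled estimate
    return pooled, se

# hypothetical log-odds-ratio estimates and variances from three trials
theta, se = fixed_effect_meta([0.40, 0.10, 0.25], [0.10, 0.05, 0.20])
z = theta / se  # pooled Z-statistic
```

The smallest-variance trial contributes the most weight, so a precisely estimated trial dominates the pooled value regardless of the order in which trials were run.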
Affiliation(s)
- Stephen Senn
- Competence Center for Methodology and Statistics, CRP-Santé, Strassen, Luxembourg
3
Bassler D, Montori VM, Briel M, Glasziou P, Walter SD, Ramsay T, Guyatt G. Reflections on meta-analyses involving trials stopped early for benefit: Is there a problem and if so, what is it? Stat Methods Med Res 2011; 22:159-68. [DOI: 10.1177/0962280211432211]
Abstract
We review controversies associated with randomized controlled trials (RCTs) stopped early for apparent benefit (truncated RCTs or tRCTs) and present our group's perspective. Long-established theory, simulations and recent empirical evidence demonstrate that tRCTs will on average overestimate treatment effects, and this overestimation may be large, particularly when tRCTs have a small number of events. Theoretical considerations and simulations demonstrate that on average, meta-analyses of RCTs with appropriate stopping rules will lead to only trivial overestimation of treatment effects. However, tRCTs will disproportionately contribute to meta-analytic estimates when tRCTs occur early in the sequence of trials with few subsequent studies, publication of nontruncated RCTs is delayed, there is publication bias, or tRCTs result in a 'freezing' effect in which 'correcting' trials are never undertaken. To avoid applying overestimates of effect to clinical decision-making, clinicians should view the results of individual tRCTs with small sample sizes and a small number of events with skepticism. Pooled effects from meta-analyses including tRCTs are likely to overestimate the effect when there is a substantial difference in effect estimates between the tRCTs and the nontruncated RCTs, and when the tRCTs have a substantial weight in the meta-analysis despite themselves having a relatively small number of events. Such circumstances call for sensitivity analyses omitting tRCTs.
Affiliation(s)
- Dirk Bassler
- Center for Pediatric Clinical Studies, University Children’s Hospital Tuebingen, Tuebingen, Germany
- Department of Neonatology, University Children’s Hospital Tuebingen, Tuebingen, Germany
- Victor M Montori
- Knowledge and Evaluation Research Unit, Mayo Clinic, Rochester, MN, USA
- Matthias Briel
- Basel Institute for Clinical Epidemiology and Biostatistics, University Hospital Basel, Basel, Switzerland
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
- Paul Glasziou
- Department of Primary Health Care, University of Oxford, Oxford, UK
- Centre for Evidence-Based Medicine, University of Oxford, Oxford, UK
- Stephen D Walter
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
- Tim Ramsay
- Ottawa Hospital Research Institute, University of Ottawa, Ottawa, ON, Canada
- Gordon Guyatt
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
4
Affiliation(s)
- Michael S Rogers
- Department of Obstetrics and Gynaecology, Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, SAR, China