1
Proschan MA, Nason M, Ortega-Villa AM, Wang J. Changing interim monitoring in response to internal clinical trial data. Biometrics 2024; 80:ujae006. [PMID: 38477484] [DOI: 10.1093/biomtc/ujae006]
Abstract
Designing clinical trials for emerging infectious diseases such as COVID-19 is challenging because information needed for proper planning may be lacking. Pre-specified adaptive designs can be attractive options, but what happens if a trial with no such design needs to be modified? For example, unexpectedly high efficacy (approximately 95%) in two COVID-19 vaccine trials might cause investigators in other COVID-19 vaccine trials to increase the number of interim analyses to allow earlier stopping for efficacy. If such a decision is based solely on external data, there are no issues, but what if internal trial data by arm are also examined? Fortunately, the conditional error principle of Müller and Schäfer (2004) can be used to ensure no inflation of the type 1 error rate, even if no interim analyses were planned. We study the properties, including limitations, of this method. We provide a Shiny app to evaluate changes in the timing of interim analyses in response to outcome data by arm in clinical trials.
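For readers who want to see the mechanics, here is a minimal stdlib-only sketch of the conditional error principle under the usual Brownian-motion model for the z-process. The level, information fraction, and grid integration are illustrative assumptions, not values from the paper.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_error(z_t, t, z_alpha):
    """Conditional probability, under H0, that the final z-statistic of a
    fixed-sample one-sided test crosses z_alpha, given interim value z_t
    at information fraction t (Brownian-motion model for the z-process)."""
    return 1.0 - phi((z_alpha - math.sqrt(t) * z_t) / math.sqrt(1.0 - t))

alpha, z_alpha, t = 0.025, 1.959964, 0.5

# Sanity check: averaging the conditional error over the H0 distribution of
# the interim statistic must recover alpha (law of total probability);
# done here by simple rectangle-rule integration on a fine grid.
grid = [-8.0 + 16.0 * i / 200000 for i in range(200001)]
h = 16.0 / 200000
dens = [math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi) for z in grid]
total = sum(conditional_error(z, t, z_alpha) * d for z, d in zip(grid, dens)) * h
print(round(total, 4))  # recovers alpha = 0.025
```

Any mid-trial redesign whose conditional type 1 error, given the data so far, stays at or below `conditional_error(z_t, t, z_alpha)` preserves the overall level, which is what licenses adding unplanned interim looks.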
Affiliation(s)
- Michael A Proschan: Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Martha Nason: Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Ana M Ortega-Villa: Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Jing Wang: Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
2
The Bayesian Design of Adaptive Clinical Trials. Int J Environ Res Public Health 2021; 18:530. [PMID: 33435249] [PMCID: PMC7826635] [DOI: 10.3390/ijerph18020530]
Abstract
This paper presents a brief overview of the recent literature on the adaptive design of clinical trials from a Bayesian perspective, aimed at readers without advanced statistical training. Adaptive designs are attracting keen interest in several disciplines, from a theoretical viewpoint and, potentially, from a practical one, and Bayesian adaptive designs in particular have raised high expectations in clinical trials. The main conceptual tools are highlighted here, with a mention of several trial designs proposed in the literature that use these methods, including some of the Bayesian adaptive trials registered to date. This review aims to complement existing reviews on this topic, pointing to further interesting reading material.
3
Robertson DS, Wason JMS. Familywise error control in multi-armed response-adaptive trials. Biometrics 2019; 75:885-894. [PMID: 30714095] [PMCID: PMC6739232] [DOI: 10.1111/biom.13042]
Abstract
Response-adaptive designs allow the randomization probabilities to change during the course of a trial based on cumulated response data so that a greater proportion of patients can be allocated to the better performing treatments. A major concern over the use of response-adaptive designs in practice, particularly from a regulatory viewpoint, is controlling the type I error rate. In particular, we show that the naïve z-test can have an inflated type I error rate even after applying a Bonferroni correction. Simulation studies have often been used to demonstrate error control but do not provide a guarantee. In this article, we present adaptive testing procedures for normally distributed outcomes that ensure strong familywise error control by iteratively applying the conditional invariance principle. Our approach can be used for fully sequential and block randomized trials and for a large class of adaptive randomization rules found in the literature. We show there is a high price to pay in terms of power to guarantee familywise error control for randomization schemes with extreme allocation probabilities. However, for proposed Bayesian adaptive randomization schemes in the literature, our adaptive tests maintain or increase the power of the trial compared to the z-test. We illustrate our method using a three-armed trial in primary hypercholesterolemia.
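As a toy illustration of why naive testing fails in this setting, consider a deliberately extreme caricature (not the paper's design, test, or randomization rule; all numbers are made up): under the global null, funneling every second-stage patient to the interim front-runner and then applying an unadjusted z-test inflates the rejection rate above the nominal level.

```python
import random
import statistics

random.seed(1)

def one_trial(n1=50, n2=100):
    """Under H0 (all arms N(0,1), no true effect): run two experimental arms
    vs a control; after stage 1, give all stage-2 patients to the arm that
    currently looks better (an extreme data-driven allocation rule); then
    apply a naive z-test of the selected arm against control."""
    a1 = [random.gauss(0, 1) for _ in range(n1)]
    b1 = [random.gauss(0, 1) for _ in range(n1)]
    ctrl = [random.gauss(0, 1) for _ in range(n1 + n2)]
    best1 = a1 if statistics.fmean(a1) >= statistics.fmean(b1) else b1
    best = best1 + [random.gauss(0, 1) for _ in range(n2)]
    n = n1 + n2
    z = (statistics.fmean(best) - statistics.fmean(ctrl)) / ((1 / n + 1 / n) ** 0.5)
    return z > 1.959964  # naive one-sided 2.5% test, variance known

sims = 10000
rate = sum(one_trial() for _ in range(sims)) / sims
print(rate)  # typically well above the nominal 0.025
```

The inflation comes from selection bias: the front-runner's stage-1 mean is the maximum of two null means, and the naive test pretends it is an ordinary sample mean.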
Affiliation(s)
- D S Robertson: MRC Biostatistics Unit, University of Cambridge, IPH Forvie Site, Robinson Way, Cambridge, CB2 0SR, UK
- J M S Wason: MRC Biostatistics Unit, University of Cambridge, IPH Forvie Site, Robinson Way, Cambridge, CB2 0SR, UK; Institute of Health and Society, Newcastle University, Newcastle upon Tyne, NE2 4AX, UK
4
Abstract
Blinded sample size reassessment is a popular means to control the power in clinical trials if no reliable information on nuisance parameters is available in the planning phase. We investigate how sample size reassessment based on blinded interim data affects the properties of point estimates and confidence intervals for parallel group superiority trials comparing the means of a normal endpoint. We evaluate the properties of two standard reassessment rules that are based on the sample size formula of the z-test, derive the worst case reassessment rule that maximizes the absolute mean bias and obtain an upper bound for the mean bias of the treatment effect estimate.
Affiliation(s)
- Martin Posch: Section for Medical Statistics, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Florian Klinglmueller: Section for Medical Statistics, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria; Department of Statistical Sciences, University of Padua, Padua, Italy
- Franz König: Section for Medical Statistics, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Frank Miller: Department of Statistics, Stockholm University, Stockholm, Sweden
Martin Posch and Florian Klinglmueller share first authorship.
5
Abstract
BACKGROUND: Sample size adjustment designs, which allow increasing the study sample size based on interim analysis of outcome data from a randomized clinical trial, have been increasingly promoted in the biostatistical literature. Although it is recognized that group sequential designs can be at least as efficient as sample size adjustment designs, many authors argue that a key advantage of these designs is their flexibility: interim sample size adjustment decisions can incorporate information and business interests external to the trial. Recently, Chen et al. (Clinical Trials 2015) considered sample size adjustment applications in the time-to-event setting using a design (CDL) that limits adjustments to situations where the interim results are promising. The authors demonstrated that while CDL provides little gain in unconditional power (versus fixed-sample-size designs), there is a considerable increase in conditional power for trials in which the sample size is adjusted.
METHODS: In time-to-event settings, sample size adjustment allows an increase in the number of events required for the final analysis. This can be achieved by either (a) following the original study population until the additional events are observed, thus focusing on the tail of the survival curves, or (b) enrolling a potentially large number of additional patients, thus focusing on the early differences in survival curves. We use the CDL approach to investigate the performance of sample size adjustment designs in time-to-event trials.
RESULTS: Through simulations, we demonstrate that when the magnitude of the true treatment effect changes over time, interim information on the shape of the survival curves can be used to enrich the final analysis with events from the time period with the strongest treatment effect. In particular, interested parties have the ability to make the end-of-trial treatment effect larger (on average) based on decisions using interim outcome data. Furthermore, in "clinical null" cases where there is no benefit due to crossing survival curves, the sample size adjustment design is shown to increase the probability of recommending an ineffective therapy.
CONCLUSION: Access to interim information on the shape of the survival curves may jeopardize the perceived integrity of trials using sample size adjustment designs. Therefore, given the lack of an efficiency advantage over group sequential designs, sample size adjustment designs in time-to-event settings remain unjustified.
Affiliation(s)
- Boris Freidlin: Biometric Research Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, Bethesda, MD, USA
- Edward L Korn: Biometric Research Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, Bethesda, MD, USA
6
Curtin F, Heritier S. The role of adaptive trial designs in drug development. Expert Rev Clin Pharmacol 2017; 10:727-736. [DOI: 10.1080/17512433.2017.1321985]
Affiliation(s)
- François Curtin: Division of Clinical Pharmacology and Toxicology, University of Geneva, Geneva, Switzerland; Research Center for Statistics, Geneva School of Economics and Management, University of Geneva, Geneva, Switzerland; Geneuro SA, Geneva, Switzerland
- Stephane Heritier: Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
7
|
Broberg P, Miller F. Conditional estimation in two-stage adaptive designs. Biometrics 2017; 73:895-904. [PMID: 28099993 DOI: 10.1111/biom.12642] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2016] [Revised: 11/01/2016] [Accepted: 11/01/2016] [Indexed: 11/28/2022]
Abstract
We consider conditional estimation in two-stage sample size adjustable designs and the consequent bias. More specifically, we consider a design which permits raising the sample size when interim results look rather promising, and which retains the originally planned sample size when results look very promising. The estimation procedures reported comprise the unconditional maximum likelihood, the conditionally unbiased Rao-Blackwell estimator, the conditional median unbiased estimator, and the conditional maximum likelihood with and without bias correction. We compare these estimators based on analytical results and a simulation study. We show how they can be applied in a real clinical trial setting.
Affiliation(s)
- Per Broberg: Division of Cancer Epidemiology, Department of Clinical Sciences Lund, Lund University, Skåne University Hospital, Lund, Sweden
- Frank Miller: Department of Statistics, Stockholm University, 10691 Stockholm, Sweden
8
|
Bauer P, Bretz F, Dragalin V, König F, Wassmer G. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med 2016; 35:325-47. [PMID: 25778935 PMCID: PMC6680191 DOI: 10.1002/sim.6472] [Citation(s) in RCA: 130] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2014] [Revised: 02/03/2015] [Accepted: 02/19/2015] [Indexed: 12/26/2022]
Abstract
'Multistage testing with adaptive designs' was the title of an article by Peter Bauer that appeared in 1989 in the German journal Biometrie und Informatik in Medizin und Biologie. The journal no longer exists, but the methodology has found widespread interest in the scientific community over the past 25 years. The use of such multistage adaptive designs raised many controversial discussions from the beginning, especially after the 1994 publication by Bauer and Köhne in Biometrics: broad enthusiasm about potential applications of such designs faced critical positions regarding their statistical efficiency. Despite, or possibly because of, this controversy, the methodology and its areas of application grew steadily over the years, with significant contributions from statisticians working in academia, industry, and agencies around the world. In the meantime, such adaptive designs have become the subject of two major regulatory guidance documents in the US and Europe, and the field is still evolving. Developments are particularly noteworthy in the most important applications of adaptive designs, including sample size reassessment, treatment selection procedures, and population enrichment designs. In this article, we summarize the developments over the past 25 years from different perspectives. We provide a historical overview of the early days, review the key methodological concepts, and summarize regulatory and industry perspectives on such designs. We then illustrate the application of adaptive designs with three case studies, including unblinded sample size reassessment, adaptive treatment selection, and adaptive endpoint selection. We also discuss the availability of software for evaluating and performing such designs. We conclude with a critical review of how the initial expectations were fulfilled and, if not, discuss potential reasons why this did not happen.
Affiliation(s)
- Peter Bauer: Section of Medical Statistics, Medical University of Vienna, Spitalgasse 23, 1090 Wien, Austria
- Frank Bretz: Novartis Pharma AG, Lichtstrasse 35, 4002 Basel, Switzerland; Shanghai University of Finance and Economics, China
- Franz König: Section of Medical Statistics, Medical University of Vienna, Spitalgasse 23, 1090 Wien, Austria
- Gernot Wassmer: Aptiv Solutions, an ICON plc company, Robert-Perthel-Str. 77a, 50739 Köln, Germany; Institute for Medical Statistics, Informatics and Epidemiology, University of Cologne, 50924 Köln, Germany
9
Shih WJ, Li G, Wang Y. Methods for flexible sample-size design in clinical trials: likelihood, weighted, dual test, and promising zone approaches. Contemp Clin Trials 2015; 47:40-48. [PMID: 26674739] [DOI: 10.1016/j.cct.2015.12.007]
Abstract
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure, and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that, under mild conditions, the likelihood ratio test still preserves the type I error rate when the actual sample size is larger than the re-calculated one.
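The weighted method described above can be sketched as a fixed-weight inverse-normal combination of stage-wise z-statistics. This is a generic sketch with illustrative weights and known variance, not the authors' notation.

```python
import math

def phi_inv(p, lo=-10.0, hi=10.0):
    """Standard normal quantile via bisection on the CDF (stdlib only)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def weighted_z(z1, z2, w1, w2):
    """Fixed-weight combination of independent stage-wise z-statistics.
    With pre-specified weights satisfying w1**2 + w2**2 == 1, the
    combination is N(0,1) under H0 regardless of any data-driven
    second-stage sample size."""
    assert abs(w1 * w1 + w2 * w2 - 1.0) < 1e-12
    return w1 * z1 + w2 * z2

# Weights fixed at the planning stage, e.g. equal information per stage:
w1 = w2 = math.sqrt(0.5)
z_crit = phi_inv(0.975)            # one-sided 2.5% critical value
z = weighted_z(1.1, 1.9, w1, w2)   # illustrative stage-wise statistics
print(z > z_crit)                  # reject H0 with the weighted statistic?
```

Because the weights are fixed at the planning stage, the null distribution of the combination is unaffected by a stage-2 sample size chosen after looking at stage-1 data, which is exactly why the critical value needs no adjustment.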
Affiliation(s)
- Weichung Joe Shih: Department of Biostatistics, Rutgers School of Public Health, Rutgers University, Piscataway, NJ 08854, United States
- Gang Li: Janssen Pharmaceutical Research and Development, Raritan, NJ 08869, United States
- Yining Wang: Janssen Pharmaceutical Research and Development, Raritan, NJ 08869, United States
10
Stallard N, Kunz CU, Todd S, Parsons N, Friede T. Flexible selection of a single treatment incorporating short-term endpoint information in a phase II/III clinical trial. Stat Med 2015; 34:3104-3115. [PMID: 26112909] [PMCID: PMC4745001] [DOI: 10.1002/sim.6567]
Abstract
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease.
Affiliation(s)
- Nigel Stallard: Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK
- Cornelia Ursula Kunz: Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK
- Susan Todd: Department of Mathematics and Statistics, University of Reading, Reading, UK
- Nicholas Parsons: Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK
- Tim Friede: Department of Medical Statistics, University Medical Center, Göttingen, Germany
11
Jennison C, Turnbull BW. Adaptive sample size modification in clinical trials: start small then ask for more? Stat Med 2015; 34:3793-3810. [PMID: 26172385] [DOI: 10.1002/sim.6575]
Abstract
We consider sample size re-estimation in a clinical trial, in particular when there is a significant delay before the measurement of patient response. Mehta and Pocock have proposed methods in which the sample size is increased when interim results fall in a 'promising zone' where it is deemed worthwhile to increase conditional power by adding more subjects. Our analysis reveals potential pitfalls in applying this approach. Mehta and Pocock use results of Chen, DeMets, and Lan to identify when increasing the sample size, while still applying a conventional level α significance test at the end of the trial, does not inflate the type I error rate: we have found that the greatest gains in power per additional observation are liable to lie outside the region defined by this method. Mehta and Pocock increase the sample size to achieve a particular conditional power, calculated under the current estimate of the treatment effect: this leads to large increases in sample size for a small range of interim outcomes, whereas we have found it more efficient to make moderate increases in sample size over a wider range of cases. If the aforementioned pitfalls are avoided, we believe the broad framework proposed by Mehta and Pocock is valuable for clinical trial design. Working in this framework, we propose sample size rules that explicitly apply the principle of adding observations when they are most beneficial. The resulting trial designs are closely related to the efficient group sequential tests for a delayed response proposed by Hampson and Jennison.
Affiliation(s)
- Bruce W Turnbull: School of Operations Research and Information Engineering, Cornell University, Ithaca, NY, USA
12
Wang SJ, Hung HMJ. A Conditional Adaptive Weighted Test Method for Confirmatory Trials. Ther Innov Regul Sci 2014; 48:51-55. [DOI: 10.1177/2168479013513891]
13
Senn S. A note regarding meta-analysis of sequential trials with stopping for efficacy. Pharm Stat 2014; 13:371-375. [PMID: 25296692] [DOI: 10.1002/pst.1639]
Abstract
It is shown that fixed-effect meta-analyses of naïve treatment estimates from sequentially run trials with the possibility of stopping for efficacy based on a single interim look are unbiased (or at the very least consistent, depending on the point of view), provided that the trials are weighted by the information they provide. A simple proof of this is given. An argument is given suggesting that this also applies in the case of multiple looks. The implications of this are discussed.
Affiliation(s)
- Stephen Senn: Competence Center for Methodology and Statistics, CRP-Santé, Strassen, Luxembourg
14
Graf AC, Bauer P, Glimm E, Koenig F. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications. Biom J 2014; 56:614-630. [PMID: 24753160] [PMCID: PMC4282114] [DOI: 10.1002/bimj.201300153]
Abstract
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate.
Affiliation(s)
- Alexandra C Graf: Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Competence Center for Clinical Trials, University of Bremen, Linzer Strasse 4, 28359 Bremen, Germany
- Peter Bauer: Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ekkehard Glimm: Novartis Pharma AG, Novartis Campus, 4056 Basel, Switzerland
- Franz Koenig: Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
15
Abstract
We consider situations where a drug developer gains access to additional financial resources when a promising result has been observed at a pre-planned interim analysis during a clinical trial intended to lead to the registration of the drug. First, we investigate the option that the drug developer puts all of the additional resources into increasing the second-stage sample size. If investors invest more the larger the observed interim effect, this may not be a reasonable strategy: additional sample size is then applied when the conditional power is already very large and hardly any impact on the overall power can be expected. Nevertheless, further reducing the type II error rate in promising situations may be of interest to a drug developer. In a second step, the sample size is based on a utility function that includes the reward of registration (which is allowed to depend on the observed effect size at the end of the trial) and sampling costs. Utility as a function of the sample size may have more than one local maximum, one of them at the lowest per-group sample size. For small effects, an optimal strategy could be to apply the smallest sample size accepted by regulators.
16
17
Broberg P. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions. BMC Med Res Methodol 2013; 13:94. [PMID: 23870495] [PMCID: PMC3738874] [DOI: 10.1186/1471-2288-13-94]
Abstract
Background: One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim.
Methods: A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited.
Results: Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting.
Conclusions: There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
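The 50% conditional-power condition referenced in the Background can be written down directly under the Brownian-motion approximation. This is an illustrative sketch only; the paper's extensions to Weibull and binomial outcomes are not reproduced here.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_t, t, z_alpha):
    """Conditional power at information fraction t, computed under the
    current trend (drift estimated from the interim z-statistic), using
    the Brownian-motion model with B-value B(t) = z_t * sqrt(t)."""
    drift_hat = z_t / math.sqrt(t)   # estimated drift of the B-process
    b = z_t * math.sqrt(t)           # B-value at the interim
    return 1.0 - phi((z_alpha - b - drift_hat * (1.0 - t)) / math.sqrt(1.0 - t))

z_alpha, t = 1.959964, 0.5

# The 50% boundary: conditional power is exactly 1/2 when the estimated
# drift equals the final critical value, i.e. z_t = z_alpha * sqrt(t).
z_boundary = z_alpha * math.sqrt(t)
print(round(conditional_power(z_boundary, t, z_alpha), 3))  # 0.5
```

On this scale, "promising" at information fraction t simply means the interim z-statistic exceeds z_alpha * sqrt(t), so that the estimated drift already reaches the final critical value.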
Affiliation(s)
- Per Broberg: Department of Oncology and Cancer Epidemiology, Clinical Sciences, Lund University and Skåne University Hospital, 221 85 Lund, Sweden
18
Lai D, Moyé LA, Chang KC, Hardy RJ. Sample Size Re-Estimation Based on Two-Stage Analysis of Variance: Interim Analysis of Clinical Trials. Commun Stat Theory Methods 2012. [DOI: 10.1080/03610926.2011.569675]
19
Perspective on adaptive designs: 4 years European Medicines Agency reflection paper, 1 year draft US FDA guidance – where are we now? 2012. [DOI: 10.4155/cli.12.5]
20
Todd S, Valdés-Márquez E, West J. A practical comparison of blinded methods for sample size reviews in survival data clinical trials. Pharm Stat 2012; 11:141-148. [PMID: 22337635] [DOI: 10.1002/pst.516]
Abstract
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates.
Affiliation(s)
- Susan Todd
- Department of Mathematics and Statistics, University of Reading, Philip Lyle Building, Reading RG6 6BX, UK.
|
21
|
Gutjahr G, Posch M, Brannath W. Familywise error control in multi-armed response-adaptive two-stage designs. J Biopharm Stat 2011; 21:818-30. [PMID: 21516571] [DOI: 10.1080/10543406.2011.551337]
Abstract
For comparing multiple treatments against a single control with normally distributed observations, we consider two-stage designs of the following form: During the first stage, control and treatments are allocated by response-adaptive randomization; after completion of the first stage, some treatments are selected to proceed to the second stage; during the second stage, control and selected treatments are allocated by block randomization. Tests for such designs that use the data from both stages have been based on simulation under the global null hypothesis. We present an approach that does not rely on simulation and protects the familywise error rate in the strong sense. The main idea is to view the trial as a data-dependent modification of a simpler design, for which we know the distributions of its test statistics. To account for the data-dependent modification, we use the conditional invariance principle (Brannath et al., 2007).
Affiliation(s)
- Georg Gutjahr
- Department of Mathematics, University of Bremen, Germany.
|
22
|
Porcher R, Lecocq B, Vray M. Adaptive methods: when and how should they be used in clinical trials? Therapie 2011; 66:309-17. [PMID: 21851793] [DOI: 10.2515/therapie/2011042]
Abstract
Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to modify certain aspects of it without compromising the trial's validity and integrity. Compared with more traditional trials, adaptive designs in theory allow the same information to be generated in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints (particularly logistical) that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations of the experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be determined well in advance.
Affiliation(s)
- Raphaël Porcher
- Université Paris Diderot, Hôpital Saint-Louis, Paris, France
|
23
|
Abstract
We review adaptive designs for clinical trials, giving special attention to the control of the Type I error in late-phase confirmatory trials, when the trial planner wishes to adjust the final sample size of the study in response to an unblinded analysis of interim estimates of treatment effects. We point out that there is considerable inefficiency in using the adaptive designs that employ conditional power calculations to reestimate the sample size and that maintain the Type I error by using certain weighted test statistics. Although these adaptive designs have little advantage over familiar group-sequential designs, our review also describes recent developments in adaptive designs that are both flexible and efficient. We also discuss the use of Bayesian designs, when the context of use demands control over operating characteristics (Type I and II errors) and correction of the bias of estimated treatment effects.
Affiliation(s)
- Tze Leung Lai
- Department of Statistics, Stanford University, Stanford, California 94305
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Philip William Lavori
- Department of Statistics, Stanford University, Stanford, California 94305
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Mei-Chiung Shih
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Cooperative Studies Program, U.S. Department of Veterans Affairs, Mountain View, California 94043
|
24
|
Porcher R, Lecocq B, Vray M, d’Andon A, Bassompierre F, Béhier JM, Belorgey C, Bénichou J, Berdeaux G, Bergougnoux L, Bilbault P, Chassany O, Brentano CF, Gersberg M, Labreveux C, Lassale C, Lebbé C, Lecocq B, Lévy V, Montestruc F, Morgan C, Nachbaur G, Palestro B, Paoletti X, Porcher R, Raison A, Spiess L, Strub N, Vitzling C, Vray M. Adaptive Methods: When and How Should They be Used in Clinical Trials? Therapie 2011. [DOI: 10.2515/therapie/2011044]
|
25
|
De Martini D. Adapting by calibration the sample size of a phase III trial on the basis of phase II data. Pharm Stat 2011; 10:89-95. [DOI: 10.1002/pst.410]
|
26
|
Heritier S, Lô SN, Morgan CC. An adaptive confirmatory trial with interim treatment selection: Practical experiences and unbalanced randomization. Stat Med 2011; 30:1541-54. [DOI: 10.1002/sim.4179]
|
27
|
Wang SJ. Editorial: Adaptive designs: appealing in development of therapeutics, and where do controversies lie? J Biopharm Stat 2011; 20:1083-7. [PMID: 21058102] [DOI: 10.1080/10543406.2010.514461]
|
28
|
Hung HMJ, Wang SJ, O'Neill R. Flexible design clinical trial methodology in regulatory applications. Stat Med 2011; 30:1519-27. [PMID: 21344470] [DOI: 10.1002/sim.4021]
Abstract
Adaptive designs or flexible designs in a broader sense have increasingly been considered in planning pivotal registration clinical trials. Sample size reassessment design and adaptive selection design are two of such designs that appear in regulatory applications. At the design stage, consideration of sample size reassessment at an interim time of the trial should lead to extensive discussion about how to appropriately size the trial. Additionally, careful attention needs to be paid to the issue of how the size of the trial is impacted by the requirement that the final p-value of the trial meets the specific threshold of a clinically meaningful effect. These issues are not straightforward and will be discussed in this work. In a trial design that allows selection between a pre-specified patient subgroup and the initially planned overall patient population based on the accumulating data, there is an issue of what the 'overall' population means. In addition, it is critically important to know how such selection influences the validity of statistical inferences on the potentially modified overall population. This work presents the biases that may incur under adaptive patient selection designs.
Affiliation(s)
- H M James Hung
- Division of Biometrics I, Office of Biostatistics, OTS, CDER, Food and Drug Administration, Silver Spring, MD 20994-0002, USA.
|
29
|
Soon GG, Nie L, Hammerstrom T, Zeng W, Chu H. Meeting the demand for more sophisticated study designs. A proposal for a new type of clinical trial: the hybrid design. BMJ Open 2011; 1:e000156. [PMID: 22021876] [PMCID: PMC3191591] [DOI: 10.1136/bmjopen-2011-000156]
Abstract
Background: Treatment effect is traditionally assessed through either superiority or non-inferiority clinical trials. Investigators may find that, because of safety concerns and/or wide variability across strata of the superiority margin of active controls over placebo, neither a superiority nor a non-inferiority trial design is ethical or practical in some disease populations. Prior knowledge may allow and drive study designers to consider more sophisticated designs for a clinical trial.
Design: In this paper, the authors propose hybrid designs, which may combine a superiority design in one subgroup with a non-inferiority design in another subgroup, or combine designs with different control regimens in different subgroups in one trial, when a uniform design is unethical or impractical. The authors show how the hybrid design can be planned and how inferences can be made. Through two examples, the authors illustrate scenarios where hybrid designs are useful while the conventional designs are not preferable.
Conclusion: The hybrid design is a useful alternative to current superiority and non-inferiority designs.
Affiliation(s)
- Guoxing G Soon
- Division of Biometrics IV, Office of Biostatistics/OTS/CDER/FDA, Silver Spring, Maryland, USA
- Lei Nie
- Division of Biometrics IV, Office of Biostatistics/OTS/CDER/FDA, Silver Spring, Maryland, USA
- Thomas Hammerstrom
- Division of Biometrics IV, Office of Biostatistics/OTS/CDER/FDA, Silver Spring, Maryland, USA
- Wen Zeng
- Division of Biometrics IV, Office of Biostatistics/OTS/CDER/FDA, Silver Spring, Maryland, USA
- Haitao Chu
- Division of Biostatistics, University of Minnesota at Twin Cities, Minneapolis, Minnesota, USA
|
30
|
Gutjahr G, Brannath W, Bauer P. An approach to the conditional error rate principle with nuisance parameters. Biometrics 2010; 67:1039-46. [PMID: 21114662] [DOI: 10.1111/j.1541-0420.2010.01507.x]
Abstract
In the presence of nuisance parameters, the conditional error rate principle is difficult to apply because of the dependency of the conditional error function of the preplanned test on nuisance parameters. To use the conditional error rate principle with nuisance parameters, we propose to search among tests that guarantee overall error control for the test that maximizes a weighted combination of the conditional error rates over possible values of the nuisance parameters. We show that the optimization problem that defines such a test can be solved efficiently by existing algorithms.
Affiliation(s)
- Georg Gutjahr
- Center of Medical Statistics, Medical University of Vienna, Austria Department of Mathematics, University of Bremen, Germany.
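The conditional error rate principle discussed in this entry (and applied in the head article) can be illustrated in its simplest form. The following is a hypothetical Python sketch for a one-sided fixed-sample z-test with known variance and no nuisance parameters; the function name and this toy setting are illustrative assumptions, not the optimization method of the paper above:

```python
# Hypothetical sketch of a conditional error function (Mueller-Schaefer style)
# for a one-sided fixed-sample z-test with known variance.
from math import sqrt
from statistics import NormalDist

def conditional_error(z1, n1, n, alpha=0.025):
    """Conditional type 1 error of the preplanned level-alpha z-test with
    total sample size n, given interim z-score z1 from the first n1 subjects.
    Under H0 the final statistic is Z = (sqrt(n1)*z1 + sqrt(n-n1)*Z2)/sqrt(n)
    with Z2 ~ N(0, 1), so we compute P(Z > z_alpha | z1)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    threshold = (z_alpha * sqrt(n) - z1 * sqrt(n1)) / sqrt(n - n1)
    return 1 - NormalDist().cdf(threshold)
```

Any redesigned second stage tested at the level this function returns preserves the overall type 1 error rate, even when the redesign was not pre-planned.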
|
31
|
Brannath W, Burger HU, Glimm E, Stallard N, Vandemeulebroecke M, Wassmer G. Comments on the Draft Guidance on “Adaptive Design Clinical Trials for Drugs and Biologics” of the U.S. Food and Drug Administration. J Biopharm Stat 2010; 20:1125-31. [DOI: 10.1080/10543406.2010.514453]
Affiliation(s)
- Nigel Stallard
- Warwick Medical School, University of Warwick, United Kingdom
|
32
|
Cheng B, Chow SC. On Flexibility of Adaptive Designs and Criteria for Choosing A Good One—A Discussion of FDA Draft Guidance. J Biopharm Stat 2010; 20:1171-7. [DOI: 10.1080/10543406.2010.514460]
Affiliation(s)
- Bin Cheng
- Department of Biostatistics, Columbia University, New York, New York, USA
- Shein-Chung Chow
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
|
33
|
Wang Y, Li G, Shih WJ. Estimation and Confidence Intervals for Two-Stage Sample-Size-Flexible Design with LSW Likelihood Approach. Stat Biosci 2010. [DOI: 10.1007/s12561-010-9023-0]
|
34
|
Abstract
In recent years, there has been a drive to save development costs and shorten time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy with control, in the same study. Methods have gained much attention because of their potential advantages compared to conventional drug development programmes with separate trials for individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective of this article is to describe the approaches in a unified framework and highlight their similarities and differences to allow choice of an appropriate methodology by a trialist considering conducting such a trial.
Affiliation(s)
- Nigel Stallard
- Warwick Medical School, The University of Warwick, Coventry, UK.
|
35
|
Burman CF, Lisovskaja V. The dual test: safeguarding p-value combination tests for adaptive designs. Stat Med 2010; 29:797-807. [PMID: 20213723] [DOI: 10.1002/sim.3704]
Abstract
Many modern adaptive designs apply an analysis where p-values from different stages are weighted together to an overall hypothesis test. One merit of this combination approach is that the design can be made very flexible. However, combination tests violate the sufficiency and conditionality principles. As a consequence, combination tests may lead to absurd conclusions, such as 'proving' a positive effect while the average effect is negative. We explore the possibility of modifying the test so that such illogical conclusions are no longer possible. The dual test requires both the weighted combination test and a naïve test, ignoring the adaptations, to be statistically significant. The result is that the flexibility and type I error level control of the combination test are preserved, while the naïve test adds a safeguard against unconvincing results. The dual test is, by construction, at least as conservative as the combination test. However, many design changes will not lead to any power loss. A typical situation where the combination approach can be used is two-stage sample size reestimation (SSR). For this case, we give a complete specification of all sample size modifications for which the two tests are equally powerful. We also study the overall power loss for some suggested SSR rules. Rules based on conditional power generally lead to ignorable power loss while a decision analytic approach exhibits clear discrepancies between the two tests.
Affiliation(s)
- Carl-Fredrik Burman
- Department of Biostatistics, AstraZeneca R&D, SE-431 83 Mölndal, Göteborg, Sweden.
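As a rough illustration of the dual test described in this abstract: a minimal Python sketch, assuming a two-stage design with a pre-specified inverse-normal combination rule. The function names, the equal weights in the example, and the one-sided level are illustrative assumptions, not Burman and Lisovskaja's implementation:

```python
# Hypothetical sketch of the dual test for a two-stage adaptive design.
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1, w2):
    """Weighted inverse-normal combination of the two stage p-values
    (weights pre-specified, with w1**2 + w2**2 = 1)."""
    z = w1 * NormalDist().inv_cdf(1 - p1) + w2 * NormalDist().inv_cdf(1 - p2)
    return 1 - NormalDist().cdf(z)

def dual_test(p1, p2, p_naive, w1, w2, alpha=0.025):
    """Reject only if BOTH the weighted combination test and the naive test
    (computed on the pooled data, ignoring the adaptation) reject."""
    return inverse_normal_combination(p1, p2, w1, w2) <= alpha and p_naive <= alpha

# Example with equal weights: both tests are significant, so the dual test rejects.
rejects = dual_test(0.01, 0.04, 0.012, sqrt(0.5), sqrt(0.5))
```

The naive component is what rules out the "absurd conclusion" the abstract mentions: a significant weighted combination cannot carry the rejection when the unweighted pooled evidence is unconvincing.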
|
36
|
Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics [excerpts]. Biotechnol Law Rep 2010. [DOI: 10.1089/blr.2010.9977]
|
37
|
Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Stat Med 2009; 28:1181-217. [PMID: 19206095] [DOI: 10.1002/sim.3538]
Abstract
Adaptive designs play an increasingly important role in clinical drug development. Such designs use accumulating data of an ongoing trial to decide how to modify design aspects without undermining the validity and integrity of the trial. Adaptive designs thus allow for a number of possible adaptations at midterm: Early stopping either for futility or success, sample size reassessment, change of population, etc. A particularly appealing application is the use of adaptive designs in combined phase II/III studies with treatment selection at interim. The expectation has arisen that carefully planned and conducted studies based on adaptive designs increase the efficiency of the drug development process by making better use of the observed data, thus leading to a higher information value per patient. In this paper we focus on adaptive designs for confirmatory clinical trials. We review the adaptive design methodology for a single null hypothesis and how to perform adaptive designs with multiple hypotheses using closed test procedures. We report the results of an extensive simulation study to evaluate the operational characteristics of the various methods. A case study and related numerical examples are used to illustrate the key results. In addition we provide a detailed discussion of current methods to calculate point estimates and confidence intervals for relevant parameters.
Affiliation(s)
- Frank Bretz
- Novartis Pharma AG, Lichtstrasse 35, 4002 Basel, Switzerland.
|
39
|
Abstract
Adaptive designs promise the flexibility to redesign clinical trials at interim stages. This flexibility would provide greater efficiency in drug development. However, despite this promise, many hesitate to implement such designs. Here we explore three possible reasons for the hesitation: (i) confusion with respect to the definition of an 'adaptive design'; (ii) controversy surrounding the use of sample size re-estimation methods; and (iii) logistical barriers that must be overcome in order to use adaptive designs within existing trial frameworks. The large volume of recent work has created confusion with respect to the definition of an 'adaptive design'. Unfortunately, this has resulted in reduced usage of many acceptable methods because of guilt by association with the more controversial methods. This review attempts to clarify the differences among many common types of proposed adaptive designs. Once the differences are noted, it becomes apparent that some adaptive designs are well accepted while others remain very controversial. In fact, much of the controversy and criticism surrounding adaptive designs has focused on their use for sample size re-estimation. Hence, this review also examines the different types of adaptive designs for sample size re-estimation in order to clarify the controversy surrounding the use of these methods. Specifically, separating the controversial from good practice requires clarifying differences between adaptive designs with sample size re-estimation based on a revised treatment effect and re-estimation based only on nuisance parameters (internal pilot designs). Finally, many logistical barriers must be overcome in order to use adaptive designs within existing trial frameworks. If the promise of adaptive designs is to be achieved, it will be important to bring together large groups of individuals from funding sources and regulatory agencies to address these limitations. Very few discussions of these issues have appeared in journals that are targeted to clinical audiences. In fact, current use of adaptive designs is not really hindered by the lack of statistical methods to accommodate the adaptations. Rather, there is a need for education as to which adaptive designs are acceptable and which are not acceptable. These discussions will require the involvement of many individuals outside the statistical community. In this review, we summarize the existing methods and current controversies with the intent of providing a clarification that will enable these individuals to participate in these much-needed discussions.
Affiliation(s)
- Christopher S Coffey
- Department of Biostatistics, School of Public Health, University of Alabama Birmingham, Birmingham, Alabama, USA
|
40
|
Kairalla JA, Coffey CS, Muller KE. GLUMIP 2.0: SAS/IML Software for Planning Internal Pilots. J Stat Softw 2008; 28. [PMID: 27774042] [DOI: 10.18637/jss.v028.i07]
Abstract
Internal pilot designs involve conducting interim power analysis (without interim data analysis) to modify the final sample size. Recently developed techniques have been described to avoid the type I error rate inflation inherent to unadjusted hypothesis tests, while still providing the advantages of an internal pilot design. We present GLUMIP 2.0, the latest version of our free SAS/IML software for planning internal pilot studies in the general linear univariate model (GLUM) framework. The new analytic forms incorporated into the updated software solve many problems inherent to current internal pilot techniques for linear models with Gaussian errors. Hence, the GLUMIP 2.0 software makes it easy to perform exact power analysis for internal pilots under the GLUM framework with independent Gaussian errors and fixed predictors.
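The internal pilot idea summarized above (an interim power analysis without any interim treatment-effect analysis) can be sketched in a few lines. This is a hypothetical Python sketch, not the GLUMIP SAS/IML software: the function names, the normal-approximation sample-size formula, and the capping rules are illustrative assumptions, and it ignores the exact type 1 error adjustments that GLUMIP handles:

```python
# Hypothetical sketch of internal pilot sample-size re-estimation for a
# two-arm comparison of normal means.
from math import ceil
from statistics import NormalDist

def per_arm_n(sigma2, delta, alpha=0.025, power=0.9):
    """Normal-approximation per-arm sample size for a one-sided
    two-sample z-test: n = 2*sigma^2*(z_alpha + z_beta)^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2)

def reestimate_n(interim_residuals, delta, n_planned, n_max):
    """Update the final per-arm n from the interim variance estimate only
    (residuals assumed mean-centered, so no treatment effect is examined);
    never drop below the originally planned size, never exceed the cap."""
    k = len(interim_residuals)
    s2 = sum(r * r for r in interim_residuals) / (k - 1)  # variance estimate
    return min(max(per_arm_n(s2, delta), n_planned), n_max)
```

As a usage example, with planning variance 1 and detectable difference 0.5, `per_arm_n(1.0, 0.5)` gives 85 per arm; a larger interim variance estimate raises the re-estimated size, while the planned size acts as a floor.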
|
41
|
Vandemeulebroecke M. Group sequential and adaptive designs - a review of basic concepts and points of discussion. Biom J 2008; 50:541-57. [PMID: 18663761] [DOI: 10.1002/bimj.200710436]
Abstract
In recent times, group sequential and adaptive designs for clinical trials have attracted great attention from industry, academia and regulatory authorities. These designs allow analyses on accumulating data - as opposed to classical, "fixed-sample" statistics. The rapid development of a great variety of statistical procedures is accompanied by a lively debate on their potential merits and shortcomings. The purpose of this review article is to ease orientation in both respects. First, we provide a concise overview of the essential technical concepts, with special emphasis on their interrelationships. Second, we give a structured review of the current controversial discussion on practical issues, opportunities and challenges of these new designs.
|
42
|
Bartroff J, Lai TL. Efficient adaptive designs with mid-course sample size adjustment in clinical trials. Stat Med 2008; 27:1593-611. [PMID: 18275090] [DOI: 10.1002/sim.3201]
Abstract
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Although most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.
Affiliation(s)
- Jay Bartroff
- Department of Mathematics, University of Southern California, Los Angeles, CA 90089, U.S.A.
|
43
|
Bauer P. Adaptive designs: looking for a needle in the haystack-a new challenge in medical research. Stat Med 2008; 27:1565-80. [PMID: 17935261] [DOI: 10.1002/sim.3090]
Abstract
The statistical principles of fully adaptive designs are outlined. The options of flexibility and the price to be paid in terms of statistical properties of the test procedures are discussed. It is stressed that controlled inference after major design modifications (changing hypotheses) will include a penalty: Intersections among all the hypotheses considered throughout the trial have to be rejected before testing individual hypotheses. Moreover, feasibility in terms of integrity and persuasiveness of the results achieved after adaptations based on unblinded data is considered as the crucial issue in practice. In the second part, sample size adaptive procedures are considered testing a large number of hypotheses under constraints on total sample size as in genetic studies. The advantage of sequential procedures is sketched for the example of two-stage designs with a pilot phase for screening promising hypotheses (markers) and controlling the false discovery rate. Finally, we turn to the clinical problem how to select markers and estimate a score from limited samples, e.g. for predicting the response to therapy of a future patient. The predictive ability of such scores will be rather poor when investigating a large number of hypotheses and truly large marker effects are lacking. An obvious dilemma will show up: More optimistic selection rules may be superior if in fact effective markers exist, but will produce more nuisance prediction if no effective markers exist compared with more cautious strategies, e.g. aiming at some control of type I error probabilities.
Affiliation(s)
- Peter Bauer
- Section of Medical Statistics, Medical University of Vienna, Vienna, Austria.
|
44
|
Bartroff J, Lai TL. Generalized Likelihood Ratio Statistics and Uncertainty Adjustments in Efficient Adaptive Design of Clinical Trials. Seq Anal 2008. [DOI: 10.1080/07474940802241009]
|
45
|
Jennison C, Turnbull BW. Adaptive Seamless Designs: Selection and Prospective Testing of Hypotheses. J Biopharm Stat 2007; 17:1135-61. [DOI: 10.1080/10543400701645215]
Affiliation(s)
- Bruce W. Turnbull
- Department of Statistical Science, Cornell University, Ithaca, New York, USA
|