1. On the distribution of the power function for the scale parameter of exponential families. Stat Med 2024; 43:1973-1992. PMID: 38634314. DOI: 10.1002/sim.10043.
Abstract
The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value might be reductive. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.
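The idea of examining the whole distribution of the random power, rather than only its expected value, can be sketched with a simple Monte Carlo simulation. The sketch below uses a one-sided test for a normal mean with a normal design prior; these are our illustrative choices, not the exponential-family scale setting derived in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def power(theta, n, sigma=1.0, alpha=0.025):
    """Standard one-sided power for H1: theta > 0 with known sigma."""
    return norm.sf(norm.ppf(1 - alpha) - np.sqrt(n) * theta / sigma)

# Design prior for the effect size (assumed values, for illustration only)
theta = rng.normal(0.3, 0.1, size=100_000)
random_power = power(theta, n=100)

expected_power = random_power.mean()                         # the usual probability of success
q10, q50, q90 = np.quantile(random_power, [0.1, 0.5, 0.9])   # summaries of the whole distribution
```

The quantiles show what the expected value hides: two design priors with the same mean can induce very different spreads of the random power, which is exactly why sample size criteria based on alternative summaries are of interest.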
2. Epistemic uncertainty in Bayesian predictive probabilities. J Biopharm Stat 2024; 34:394-412. PMID: 37157818. DOI: 10.1080/10543406.2023.2204943.
Abstract
Bayesian predictive probabilities have become a ubiquitous tool for design and monitoring of clinical trials. The typical procedure is to average predictive probabilities over the prior or posterior distributions. In this paper, we highlight the limitations of relying solely on averaging, and propose the reporting of intervals or quantiles for the predictive probabilities. These intervals formalize the intuition that uncertainty decreases with more information. We present four different applications (Phase 1 dose escalation, early stopping for futility, sample size re-estimation, and assurance/probability of success) to demonstrate the practicality and generality of the proposed approach.
3.
Abstract
BACKGROUND: High-quality decision-making in the pharmaceutical industry requires accurate assessments of the Probability of Technical Success of clinical trials. Failure to do so will lead to lost opportunities for both patients and investors. Pharmaceutical companies employ different methodologies to determine Probability of Technical Success values. Some companies use power and assurance calculations; others prefer industry benchmarks, with or without the overlay of subjective modulations. At AstraZeneca, both assurance calculations and industry benchmarks are used, and both methods are combined with modulations.
METHODS: AstraZeneca has recently implemented a simple algorithm that allows for modulation of a Probability of Technical Success value. The algorithm is based on a set of multiple-choice questions. These questions cover a comprehensive set of issues that have historically been considered by AstraZeneca when subjective modulations to Probability of Technical Success values were made, but do so in a much more structured way.
RESULTS: A set of 57 phase 3 Probability of Technical Success assessments suggests that AstraZeneca's historical estimation of Probability of Technical Success has been reasonably accurate. A good correlation between the subjective modulation and the modulation algorithm was found. This latter observation, combined with the finding that AstraZeneca has historically been reasonably accurate in its estimation of Probability of Technical Success, gives confidence in the validity of the novel method.
DISCUSSION: Although it is too early to demonstrate whether the method has improved the accuracy of the company's Probability of Technical Success assessments, we present our data and analysis here in the hope that it may assist the pharmaceutical industry in addressing this key challenge. This new methodology, developed for pivotal studies, enables AstraZeneca to produce more consistent Probability of Technical Success assessments with less effort and can be used to adjust benchmarks as well as assurance calculations.
CONCLUSION: The Probability of Technical Success modulation algorithm addresses several concerns generally associated with assurance calculations or benchmarks without modulation: selection biases, situations where little relevant prior data are available, and the difficulty of modeling the many factors affecting study outcomes. As opposed to industry benchmarks, the modulation algorithm accommodates project-specific considerations.
4. Probability of success and group sequential designs. Pharm Stat 2024; 23:185-203. PMID: 37916276. DOI: 10.1002/pst.2346.
Abstract
In this article, I extend probability of success calculations, previously developed for fixed sample size studies, to group sequential designs (GSDs), for studies planned to be analyzed by either standard frequentist techniques or Bayesian approaches. The structure of GSDs lends itself to sequential learning, which in turn allows us to consider how knowledge about the result of an interim analysis can influence our assessment of the study's probability of success. Here, I build on work by Temple and Robertson, who introduced the idea of conditional probability of success, an idea which I also treated in a recent monograph.
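How an interim result updates the probability of success can be sketched with the standard Brownian-motion approximation for group sequential trials. The flat prior on the drift and the function name below are our illustrative assumptions, not necessarily the article's formulation:

```python
import math

def predictive_pos(z1, t, z_crit=1.959963984540054):
    """
    Predictive probability that the final analysis is significant, given the
    interim z-statistic z1 observed at information fraction t, under a flat
    prior on the drift of the underlying Brownian motion.
    """
    mean = z1 / math.sqrt(t)        # posterior-predictive mean of the final z-statistic
    sd = math.sqrt((1.0 - t) / t)   # posterior-predictive standard deviation
    # Normal survival function expressed via the complementary error function
    return 0.5 * math.erfc((z_crit - mean) / (sd * math.sqrt(2.0)))
```

For example, a promising interim (z1 = 2 at half the information) yields a predictive PoS of about 0.8, while a weak interim (z1 = 0.5 at the same point) drops it to roughly 0.1, illustrating how sequential learning reshapes the assessment.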
5. A Bayesian approach to study design and analysis with type I error rate control for response variables of mixed types. Stat Med 2023; 42:1722-1740. PMID: 36929939. DOI: 10.1002/sim.9696.
Abstract
There has been increased interest in the design and analysis of studies consisting of multiple response variables of mixed types. For example, in clinical trials, it is desirable to establish efficacy for a treatment effect in primary and secondary outcomes. In this article, we develop Bayesian approaches for hypothesis testing and study planning for data consisting of multiple response variables of mixed types with covariates. We assume that the responses are correlated via a Gaussian copula, and that the model for each response is, marginally, a generalized linear model (GLM). Taking a fully Bayesian approach, the proposed method enables inference based on the joint posterior distribution of the parameters. Under some mild conditions, we show that the joint distribution of the posterior probabilities under any Bayesian analysis converges to a Gaussian copula distribution as the sample size tends to infinity. Using this result, we develop an approach to control the type I error rate under multiple testing. Simulation results indicate that the method is more powerful than conducting marginal regression models and correcting for multiplicity using the Bonferroni-Holm Method. We also develop a Bayesian approach to sample size determination in the presence of response variables of mixed types, extending the concept of probability of success (POS) to multiple response variables of mixed types.
6. Utilization of treatment effect on a surrogate endpoint for planning a study to evaluate treatment effect on a final endpoint. Pharm Stat 2023. PMID: 36866697. DOI: 10.1002/pst.2298.
Abstract
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on the endpoint. It is prudent to fully utilize all available information including the historical and phase II information of the treatment as well as external data of the other treatments. It is not uncommon that a phase II study may use a surrogate endpoint as the primary endpoint and has no or limited data for the final endpoint. On the other hand, external information from the other studies for the other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to comprehensively deal with the problem. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowing based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performances of different approaches. An example is used to illustrate the applications of the methods.
7. Updating the probability of study success for combination therapies using related combination study data. Stat Methods Med Res 2023; 32:712-731. PMID: 36776025. PMCID: PMC10363930. DOI: 10.1177/09622802231151218.
Abstract
Combination therapies are becoming increasingly used in a range of therapeutic areas such as oncology and infectious diseases, providing potential benefits such as minimising drug resistance and toxicity. Sets of combination studies may be related, for example, if they have at least one treatment in common and are used in the same indication. In this setting, value can be gained by sharing information between related combination studies. We present a framework that allows the study success probabilities of a set of related combination therapies to be updated based on the outcome of a single combination study. This allows us to incorporate both direct and indirect data on a combination therapy in the decision-making process for future studies. We also provide a robustification that accounts for the fact that the prior assumptions on the correlation structure of the set of combination therapies may be incorrect. We show how this framework can be used in practice and highlight the use of the study success probabilities in the planning of clinical studies.
8. Predicting drug approvals: The Novartis data science and artificial intelligence challenge. Patterns (N Y) 2021; 2:100312. PMID: 34430930. PMCID: PMC8369231. DOI: 10.1016/j.patter.2021.100312.
Abstract
We describe a novel collaboration between academia and industry: an in-house data science and artificial intelligence challenge held by Novartis to develop machine-learning models for predicting drug-development outcomes, building upon research at MIT using data from Informa as the starting point. With over 50 cross-functional teams from 25 Novartis offices around the world participating in the challenge, the domain expertise of these Novartis researchers was leveraged to create predictive models with greater sophistication. Ultimately, two winning teams developed models that outperformed the baseline MIT model (areas under the curve of 0.88 and 0.84 versus 0.78, respectively) through state-of-the-art machine-learning algorithms and the use of newly incorporated features and data. In addition to validating the variables shown to be associated with drug approval in the earlier MIT study, the challenge also provided new insights into the drivers of drug-development success and failure.
9. Assurance in vaccine efficacy clinical trial design based on immunological responses. Biom J 2021; 63:1434-1443. PMID: 34254347. PMCID: PMC9292007. DOI: 10.1002/bimj.202100015.
Abstract
The assurance of a future clinical trial is a key quantitative tool for decision-making in drug development. It is derived from prior knowledge (Bayesian approach) about the clinical endpoint of interest, typically from previous clinical trials. In this paper, we examine assurance in the specific context of vaccine development, where early development (Phase 2) is often based on immunological endpoints (e.g., antibody levels), while the confirmatory trial (Phase 3) is based on the clinical endpoint (very large sample sizes and long follow-up). Our proposal is to use the Phase 2 vaccine efficacy predicted by the immunological endpoint (using a model estimated from epidemiological studies) as prior information for the calculation of the assurance.
10. Survey of Japanese Orphan Drug Program: Factors Related to Successful Marketing Approval. J Clin Pharmacol 2019; 60:117-124. PMID: 31364772. PMCID: PMC6972571. DOI: 10.1002/jcph.1501.
Abstract
The basic components of regulatory and supporting policies for orphan drug development appear similar between the United States and Japan, but drugs designated as orphan drugs have been different between the 2 countries. The probabilities of development success (ie, marketing approval) in designated orphan drugs have also been significantly different. In this study, we analyzed recent outcomes of development for orphan drugs designated from 1993 to 2017 in Japan, considering their development and approval status in the United States. Our analysis showed that success for orphan drug development in Japan was apparently associated with prior approval status in the United States. Company size, orphan development experience, and patient enrichment were also positively associated with successful marketing approval. Although similar designations and priority review systems for orphan drugs have been enacted, economic incentives and regulatory conditions provided by the systems seem to be different between the 2 countries, which may lead to varied performance in orphan designation and approval. We need to pay close attention to the impact of industrial global development strategies when comparing the outcomes and performance of different orphan drug promotion systems.
11. Predictive probability of success using surrogate endpoints. Stat Med 2018; 38:1753-1774. PMID: 30548627. DOI: 10.1002/sim.8060.
Abstract
The predictive probability of success of a future clinical trial is a key quantitative tool for decision-making in drug development. It is derived from prior knowledge and available evidence, and the latter typically comes from the accumulated data on the clinical endpoint of interest in previous clinical trials. However, a surrogate endpoint could be used as primary endpoint in early development and, usually, no or limited data are collected on the clinical endpoint of interest. We propose a general, reliable, and broadly applicable methodology to predict the success of a future trial from surrogate endpoints, in a way that makes the best use of all the available evidence. The predictions are based on an informative prior, called surrogate prior, derived from the results of past trials on one or several surrogate endpoints. If available, in a Bayesian framework, this prior could be combined with data from past trials on the clinical endpoint of interest. Two methods are proposed to address a potential discordance between the surrogate prior and the data on the clinical endpoint. We investigate the patterns of behavior of the predictions in a comprehensive simulation study, and we present an application to the development of a drug in Multiple Sclerosis. The proposed methodology is expected to support decision-making in many different situations, since the use of predictive markers is important to accelerate drug developments and to select promising drug candidates, better and earlier.
12. Optimal sample size allocation and go/no-go decision rules for phase II/III programs where several phase III trials are performed. Biom J 2018; 61:357-378. PMID: 30182372. DOI: 10.1002/bimj.201700241.
Abstract
The conduct of phase II and III programs is costly, time-consuming and, due to high failure rates in late development stages, risky. There is a strong connection between phase II and III trials as the go/no-go decision and the sample size chosen for phase III are based on the results observed in phase II. An integrated planning of phase II and III is therefore reasonable. The success of phase II/III programs crucially depends on the allocation of the resources to phase II and III in terms of sample size and the rule applied to decide whether to stop or to proceed with phase III. Recently, a utility-based approach was proposed, where optimal planning of phase II/III programs is achieved by taking fixed and variable costs of the drug development program and potential gains after a successful launch into account. However, this method is restricted to programs with a single phase III trial, while regulatory authorities usually require statistical significance in two or more phase III trials. We present a generalization of this procedure to programs where two or more phase III trials are performed. Optimal phase II sample sizes and go/no-go decision rules are provided for time-to-event outcomes and cases, where at least one, two, or three phase III trials need to be successful. Different drug development program strategies (e.g. one large vs. two phase III trials) are compared within these different cases. Application to practical examples typically met in oncology trials illustrates the proposed method.
13.
Abstract
One of the most critical decision points in clinical development is Go/No-Go decision-making after a proof-of-concept study. Traditional decision-making relies on a formal hypothesis testing with control of type I and type II error rates, which is limited by assessing the strength of efficacy evidence in a small isolated trial. In this article, we propose a quantitative Bayesian/frequentist decision framework for Go/No-Go criteria and sample size evaluation in Phase II randomized studies with a time-to-event endpoint. By taking the uncertainty of treatment effect into consideration, we propose an integrated quantitative approach for a program when both the Phase II and Phase III trials share a common endpoint while allowing a discount of the observed Phase II data. Our results confirm the argument that an increase in the sample size of a Phase II trial will result in greater increase in the probability of success of a Phase III trial than increasing the Phase III trial sample size by equal amount. We illustrate the steps in quantitative decision-making with a real example of a randomized Phase II study in metastatic pancreatic cancer.
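The core quantity in such a framework, the probability of phase III success given discounted phase II data, can be sketched with a normal approximation. The function name, the discount factor, and the variance-inflation form below are our illustrative assumptions rather than the article's exact formulation:

```python
import math

def phase3_pos(delta2, se2, se3, discount=0.9, z_crit=1.959963984540054):
    """
    Predictive probability that phase III reaches one-sided significance,
    given a phase II effect estimate delta2 (standard error se2), discounted
    to reflect phase II optimism, and a phase III standard error se3.
    """
    mean = discount * delta2 / se3
    # Phase II estimation uncertainty inflates the predictive variance of the
    # phase III z-statistic beyond 1.
    sd = math.sqrt(1.0 + (discount * se2 / se3) ** 2)
    return 0.5 * math.erfc((z_crit - mean) / (sd * math.sqrt(2.0)))
```

In this sketch, shrinking the phase II standard error (more phase II events) raises the predictive PoS even when the phase III design is unchanged, which is consistent with the article's argument about where additional sample size pays off most.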
14. Decision-making in drug development using a composite definition of success. Pharm Stat 2018; 17:555-569. PMID: 29956453. DOI: 10.1002/pst.1870.
Abstract
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics to make go/no-go decisions or predictions of success, identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probability of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development could be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictive, but realistic, example in major depressive disorder inspired by a real decision-making case.
15. Assessment of Bayesian expected power via Bayesian bootstrap. Stat Med 2018; 37:3471-3485. PMID: 29938832. DOI: 10.1002/sim.7826.
Abstract
The Bayesian expected power (BEP) has become increasingly popular in assessing the probability of success for a future trial. While the traditional power assumes a single value for the unknown effect size Δ and is thus highly dependent on the assumed value, the BEP embraces the uncertainty around Δ given the prior information and is therefore a less subjective measure of the probability of success than the traditional power, especially when the prior information is not rich. Current methods for assessing BEP are often based on a parametric framework that imposes a model on the pilot data to derive and sample from the posterior distribution of Δ. The model-based approach can be analytically challenging and computationally costly, especially for multivariate data sets, and it also runs the risk of generating misleading BEP if the model is misspecified. We propose an approach based on the Bayesian bootstrap (BBS) technique to simulate future trials in the presence of individual-level pilot data, from which the empirical BEP can be calculated. The BBS approach is model-free, with no assumptions about the distribution of the prior data, and also circumvents the analytical and computational complexity associated with obtaining the posterior distribution of Δ. Information from multiple pilot studies is also straightforward to combine. We also propose the double bootstrap technique, a frequentist counterpart to the BBS that shares similar properties and achieves the same goal as the BBS for BEP assessment. Simulation and case studies are presented to demonstrate the implementation of the BBS and double bootstrap techniques and to compare the BEP results with the model-based approach.
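A minimal version of the Bayesian bootstrap idea can be sketched as follows: draw Dirichlet(1, ..., 1) weights over the pilot observations, resample a future trial from the reweighted data, and record how often the future trial's test is significant. The one-sample z-test and all numeric settings below are our illustrative assumptions:

```python
import numpy as np

def bep_bayesian_bootstrap(pilot, n_future=100, n_sim=2000,
                           z_crit=1.959963984540054, seed=42):
    """Empirical Bayesian expected power via the Bayesian bootstrap."""
    rng = np.random.default_rng(seed)
    pilot = np.asarray(pilot, dtype=float)
    successes = 0
    for _ in range(n_sim):
        w = rng.dirichlet(np.ones(len(pilot)))           # posterior weights on the data
        future = rng.choice(pilot, size=n_future, p=w)   # one simulated future trial
        z = future.mean() / (future.std(ddof=1) / np.sqrt(n_future))
        successes += z > z_crit
    return successes / n_sim
```

No parametric model is imposed on the pilot data at any step, which is the point of the approach; pooling several pilot studies amounts to concatenating their observations before the weighting.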
16. Optimal planning of phase II/III programs for clinical trials with multiple endpoints. Pharm Stat 2018; 17:437-457. PMID: 29700949. DOI: 10.1002/pst.1861.
Abstract
Owing to increased costs and competition pressure, drug development is becoming more and more challenging. Therefore, there is a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework was proposed for optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
17. Practical experiences of adopting assurance as a quantitative framework to support decision making in drug development. Pharm Stat 2018; 17:317-328. PMID: 29635777. DOI: 10.1002/pst.1856.
Abstract
All clinical trials are designed for success of their primary objectives. Hence, evaluating the probability of success (PoS) should be a key focus at the design stage, both to support funding approval from sponsor governance boards and to inform trial design itself. Use of assurance (that is, expected success probability averaged over a prior probability distribution for the treatment effect) to quantify the PoS of a planned study has grown across the industry in recent years, and has now become routine within the authors' company. In this paper, we illustrate some of the benefits of systematically adopting assurance as a quantitative framework to support decision making in drug development through several case studies where evaluation of assurance has proved impactful in terms of trial design and in supporting governance-board reviews of project proposals. In addition, we describe specific features of how the assurance framework has been implemented within our company, highlighting the critical role that prior elicitation plays in this process, and illustrating how the overall assurance calculation may be decomposed into a sequence of conditional PoS estimates which can provide greater insight into how and when different development options are able to discharge risk.
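The decomposition of an overall assurance into a sequence of conditional PoS estimates can be sketched by simulating a two-study pivotal program. The prior parameters and trial sizes below are invented for illustration, standing in for a formal prior elicitation:

```python
import numpy as np

rng = np.random.default_rng(7)
Z_CRIT = 1.959963984540054  # one-sided alpha = 0.025

def simulate_success(theta, n, sigma=1.0):
    """Simulate one-sided significance of a two-arm study with n patients per arm."""
    se = sigma * np.sqrt(2.0 / n)
    z = rng.normal(theta, se) / se
    return z > Z_CRIT

# Elicited prior on the treatment effect (assumed values)
theta = rng.normal(0.25, 0.15, size=50_000)

s1 = simulate_success(theta, n=150)   # first pivotal study
s2 = simulate_success(theta, n=150)   # second pivotal study, same true effect

assurance = np.mean(s1 & s2)          # overall program PoS
pos1 = np.mean(s1)                    # PoS of the first study
pos2_given_1 = np.mean(s2[s1])        # conditional PoS of the second study
# assurance == pos1 * pos2_given_1 (up to floating point)
```

Because both studies share the same unknown effect, success in the first study raises the conditional PoS of the second above its unconditional value: this is the risk-discharge pattern the decomposition is meant to expose.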
18. Simulation-based adjustment after exploratory biomarker subgroup selection in phase II. Stat Med 2017; 36:2378-2390. PMID: 28436046. DOI: 10.1002/sim.7294.
Abstract
As part of the evaluation of phase II trials, it is common practice to perform exploratory subgroup analyses with the aim of identifying patient populations with a beneficial treatment effect. When investigating targeted therapies, these subgroups are typically defined by biomarkers. Promising results may lead to the decision to select the respective subgroup as the target population for a subsequent phase III trial. However, a selection based on a large observed treatment effect may potentially induce an upwards-bias leading to over-optimistic expectations on the success probability of the phase III trial. We describe how Approximate Bayesian Computation techniques can be used to derive a simulation-based bias adjustment method in this situation. Recommendations for the implementation of the approach are given. Simulation studies show that the proposed method reduces bias substantially compared with the maximum likelihood estimator. The procedure is illustrated with data from an oncology trial. Copyright © 2017 John Wiley & Sons, Ltd.
19. Comparing oncology clinical programs by use of innovative designs and expected net present value optimization: Which adaptive approach leads to the best result? J Biopharm Stat 2017; 27:457-476. PMID: 28281911. DOI: 10.1080/10543406.2017.1289949.
Abstract
Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of the drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs, and a standard two parallel arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and eNPV to compare the simulated programs. Across the development strategies considered, the eNPV showed robust improvement for each successive strategy, the highest being for a three-arm response adaptive randomization design in Phase 2 combined with a group sequential design with 5 analyses in Phase 3.
20. Defensive efficacy interim design: Dynamic benefit/risk ratio view using probability of success. J Biopharm Stat 2016; 27:683-690. PMID: 27295497. DOI: 10.1080/10543406.2016.1198370.
Abstract
Traditional efficacy interim designs are based on alpha spending, which lacks an intuitive interpretation and is therefore difficult to communicate to non-statistician colleagues. The alpha-spending approach is based on efficacy alone and hence has no flexibility to incorporate newly emerging safety signals, which may nullify the originally set efficacy boundary. In contrast, the probability of success (POS) concept has an intuitive interpretation, which facilitates communication with non-statistician colleagues and helps obtain health authority (HA) buy-in. The success criteria underlying POS are not restricted to statistical significance, so POS can incorporate both efficacy and safety information. We propose using POS and its credible interval to design the efficacy interim analysis. In the proposed method, the efficacy boundary is adjustable to offset newly emerging safety signals.
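One way to obtain a credible interval for POS itself, rather than a single averaged value, is to compute the conditional success probability for each posterior draw of the underlying parameter and take quantiles of that distribution. A beta-binomial sketch of this idea; the interim data, prior, and success criterion below are hypothetical, not the paper's actual interim design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical interim data: 12 responders among 30 patients, with a
# planned total of 100 and a target response rate above 0.30
X, N_INTERIM, N_FINAL = 12, 30, 100
P0 = 0.30                    # null response rate
A, B = 1, 1                  # uniform Beta prior
remaining = N_FINAL - N_INTERIM

# Final-analysis success criterion: posterior P(p > P0) > 0.975
def trial_succeeds(total_responders: int) -> bool:
    post = stats.beta(A + total_responders, B + N_FINAL - total_responders)
    return post.sf(P0) > 0.975

# Which future responder counts k lead to success at the final analysis
success = np.array([trial_succeeds(X + k) for k in range(remaining + 1)])

# Conditional POS for each posterior draw of the true response rate p
draws = rng.beta(A + X, B + N_INTERIM - X, size=2_000)
cond_pos = np.array([
    stats.binom.pmf(np.arange(remaining + 1), remaining, p)[success].sum()
    for p in draws
])

pos_mean = cond_pos.mean()                       # the usual averaged POS
lo, hi = np.quantile(cond_pos, [0.025, 0.975])   # its credible interval
print(f"POS = {pos_mean:.2f}, 95% credible interval = ({lo:.2f}, {hi:.2f})")
```

The width of the interval conveys how much the averaged POS could still move as data accumulate, which a point estimate alone hides.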
|
21
|
Sample size planning for phase II trials based on success probabilities for phase III. Pharm Stat 2015; 14:515-24. [PMID: 26412484 DOI: 10.1002/pst.1717] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Revised: 07/14/2015] [Indexed: 11/08/2022]
Abstract
In recent years, high failure rates have been observed in phase III trials. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, subject to side conditions on the conditional probabilities of a correct go/no-go decision after phase II as well as on the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of the phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision, as well as the unconditional success probabilities for phase III, are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II does not seem necessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size: the lower the number of compounds and the lower the resources available for phase III, the higher the investment in phase II should be.
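A rough illustration of how such operating characteristics depend on the number of phase II events, using the standard normal approximation for the estimated log hazard ratio with variance 4/d under 1:1 allocation; the go threshold, effect size, and phase III event count below are hypothetical, and the paper's optimised stopping boundaries are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# With d events and 1:1 allocation, the estimated log hazard ratio is
# roughly N(true log HR, 4/d); all numbers below are hypothetical
TRUE_LOG_HR = np.log(0.75)   # assumed true effect (HR = 0.75)
D3 = 400                     # fixed number of phase III events
GO_HR = 0.85                 # go to phase III if the estimated HR is below this

results = {}
for d2 in (50, 100, 150, 300):   # candidate phase II event counts
    lhr2 = rng.normal(TRUE_LOG_HR, np.sqrt(4 / d2), 200_000)
    lhr3 = rng.normal(TRUE_LOG_HR, np.sqrt(4 / D3), 200_000)
    go = lhr2 < np.log(GO_HR)                      # go/no-go after phase II
    win = go & (-lhr3 * np.sqrt(D3) / 2 > 1.96)    # phase III significance
    results[d2] = (go.mean(), win.mean())
    print(f"d2={d2:3d}: P(go)={go.mean():.3f}, "
          f"P(programme success)={win.mean():.3f}")
```

Tabulating these probabilities over candidate event counts is the kind of exercise that underlies recommendations about diminishing returns beyond a given phase II size.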
|
22
|
Utility-based optimization of phase II/III programs. Stat Med 2015; 35:305-16. [PMID: 26256550 DOI: 10.1002/sim.6624] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2014] [Revised: 06/02/2015] [Accepted: 07/26/2015] [Indexed: 11/10/2022]
Abstract
Phase II and phase III trials play a crucial role in drug development programs. They are costly and time-consuming and, because of the high failure rates in late development stages, risky investments at the same time. Commonly, the sample size calculation for phase III is based on the treatment effect observed in phase II; the planning of phases II and III can therefore be linked. The performance of the phase II/III program crucially depends on the allocation of resources to phases II and III, through an appropriate choice of the sample size and of the rule applied to decide whether to stop the program after phase II or to proceed. We present methods for program-wise phase II/III planning that aim at determining optimal phase II sample sizes and go/no-go decisions in a time-to-event setting. Optimization is based on a utility function that takes into account the (fixed and variable) costs of the drug development program and the potential gains after a successful launch. The proposed methods are illustrated by application to a variety of scenarios typically met in oncology drug development.
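In outline, such a utility-based optimization evaluates expected costs and gains over the go/no-go probabilities and picks the phase II sample size that maximizes expected utility. A deliberately simplified normal-approximation sketch with hypothetical costs and gain; the paper works in a time-to-event setting with a richer decision rule:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Hypothetical program parameters (illustrative only)
TRUE_EFFECT = 0.3            # assumed true standardized treatment effect
C2_FIX, C2_VAR = 1.0, 0.01   # phase II costs: fixed + per patient ($M)
C3_FIX, C3_VAR = 5.0, 0.02   # phase III costs: fixed + per patient ($M)
GAIN = 300.0                 # gain after a successful launch ($M)
N3 = 500                     # phase III per-arm sample size (fixed here)

def expected_utility(n2: int, go_z: float = 1.0) -> float:
    """Expected utility of the program for a phase II per-arm size n2."""
    p_go = 1 - norm_cdf(go_z - TRUE_EFFECT / sqrt(2 / n2))
    power3 = 1 - norm_cdf(1.96 - TRUE_EFFECT / sqrt(2 / N3))
    p_win = p_go * power3    # success = go after phase II and win phase III
    return (-C2_FIX - C2_VAR * 2 * n2
            - p_go * (C3_FIX + C3_VAR * 2 * N3)
            + p_win * GAIN)

best_n2 = max(range(20, 401, 10), key=expected_utility)
print(f"optimal n2 = {best_n2}, expected utility = "
      f"{expected_utility(best_n2):.1f} $M")
```

The interior optimum arises from the trade-off the abstract describes: a larger phase II raises the probability of a correct go decision but adds cost before the gain can be realized.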
|