1
Proschan MA, Nason M, Ortega-Villa AM, Wang J. Changing interim monitoring in response to internal clinical trial data. Biometrics 2024; 80:ujae006. [PMID: 38477484] [DOI: 10.1093/biomtc/ujae006]
Abstract
Designing clinical trials for emerging infectious diseases such as COVID-19 is challenging because information needed for proper planning may be lacking. Pre-specified adaptive designs can be attractive options, but what happens if a trial with no such design needs to be modified? For example, unexpectedly high efficacy (approximately 95%) in two COVID-19 vaccine trials might cause investigators in other COVID-19 vaccine trials to increase the number of interim analyses to allow earlier stopping for efficacy. If such a decision is based solely on external data, there are no issues, but what if internal trial data by arm are also examined? Fortunately, the conditional error principle of Müller and Schäfer (2004) can be used to ensure no inflation of the type 1 error rate, even if no interim analyses were planned. We study the properties, including limitations, of this method, and provide a Shiny app to evaluate changes in the timing of interim analyses in response to outcome data by arm in clinical trials.
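The conditional error computation that underlies this approach can be sketched numerically. The following is a minimal illustration (not the authors' code or their Shiny app), assuming a one-sided fixed-sample design with asymptotically normal test statistics: `cond_error` returns the probability, given the interim z-statistic, that the originally planned analysis would have rejected, which is the type 1 error budget available for any newly inserted interim looks.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF via simple bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def cond_error(z1, t, alpha=0.025):
    """Mueller-Schaefer conditional error of a one-sided fixed-sample design.

    z1    : z-statistic observed at information fraction t
    t     : information fraction (0 < t < 1) at the unplanned look
    alpha : one-sided significance level of the original design

    Under H0 the final z-statistic is sqrt(t)*z1 + sqrt(1-t)*Z with
    Z ~ N(0, 1) independent of stage-1 data, so the conditional probability
    of crossing the final boundary c is 1 - Phi((c - sqrt(t)*z1)/sqrt(1-t)).
    """
    c = norm_ppf(1 - alpha)  # final critical value of the original design
    return 1 - norm_cdf((c - math.sqrt(t) * z1) / math.sqrt(1 - t))
```

Any newly planned interim analyses then only need to spend error within `cond_error(z1, t)`, which is what guarantees no inflation of the overall level even though the looks were not pre-specified.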
Affiliation(s)
- Michael A Proschan
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Martha Nason
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Ana M Ortega-Villa
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Jing Wang
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
2
Zehetmayer S, Posch M, Graf A. Impact of adaptive filtering on power and false discovery rate in RNA-seq experiments. BMC Bioinformatics 2022; 23:388. [PMID: 36153479] [PMCID: PMC9509565] [DOI: 10.1186/s12859-022-04928-z]
Abstract
Background: In RNA-sequencing studies, a large number of hypothesis tests are performed to compare the differential expression of genes between several conditions. Filtering has been proposed to remove candidate genes with a low expression level, which may not be relevant and have little or no chance of showing a difference between conditions. This step may reduce the multiple-testing burden and increase power. Results: We show in a simulation study that filtering can lead to some increase in power for RNA-sequencing data; too aggressive filtering, however, can lead to a decline. No uniformly optimal filter in terms of power exists: depending on the scenario, different filters may be optimal. We propose an adaptive filtering strategy that selects one of several filters to maximise the number of rejections. No additional adjustment for multiplicity has to be included, but a rule has to be applied if the number of rejections is too small. Conclusions: For a large range of simulation scenarios, the adaptive filter maximises the power while the simulated false discovery rate is bounded by the pre-defined significance level. Using the adaptive filter, it is not necessary to pre-specify a single filtering method optimised for a specific scenario. Supplementary information: The online version contains supplementary material available at 10.1186/s12859-022-04928-z.
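The adaptive strategy described above can be sketched as follows. This is an illustrative reimplementation, not the authors' software: several candidate filters (here, mean-expression cutoffs, a common filtering choice) are each combined with Benjamini-Hochberg, and the filter yielding the most rejections is selected.

```python
import numpy as np

def bh_rejections(pvals, alpha=0.05):
    """Number of Benjamini-Hochberg rejections at FDR level alpha."""
    m = len(pvals)
    if m == 0:
        return 0
    srt = np.sort(np.asarray(pvals))
    below = srt <= alpha * np.arange(1, m + 1) / m
    # reject all hypotheses up to the largest index meeting the BH criterion
    return int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0

def adaptive_filter(mean_expr, pvals, cutoffs, alpha=0.05):
    """Pick the expression cutoff that maximises the number of BH rejections.

    mean_expr : per-gene mean expression, used only for filtering
    pvals     : per-gene p-values from the differential-expression tests
    cutoffs   : candidate filtering thresholds (0 means no filtering)
    Returns (best_cutoff, rejections_at_best_cutoff).
    """
    counts = {c: bh_rejections(pvals[mean_expr >= c], alpha) for c in cutoffs}
    best = max(counts, key=counts.get)
    return best, counts[best]
```

As the abstract notes, the full procedure also requires a fallback rule when the number of rejections is too small; that refinement is omitted from this sketch.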
3
Zhan T, Zhou Y, Geng Z, Gu Y, Kang J, Wang L, Huang X, Slate EH. Deep historical borrowing framework to prospectively and simultaneously synthesize control information in confirmatory clinical trials with multiple endpoints. J Biopharm Stat 2021; 32:90-106. [PMID: 34632951] [DOI: 10.1080/10543406.2021.1975128]
Abstract
In current clinical trial development, historical information is receiving more attention, as it provides utility beyond sample size calculation. Meta-analytic-predictive (MAP) priors and robust MAP priors have been proposed for prospectively borrowing historical data on a single endpoint. To simultaneously synthesize control information from multiple endpoints in confirmatory clinical trials, we propose to approximate posterior probabilities from a Bayesian hierarchical model and estimate critical values by deep learning to construct pre-specified strategies for hypothesis testing. This feature is important to ensure study integrity by establishing prospective decision functions before the trial is conducted. Simulations show that our method properly controls the family-wise error rate and preserves power compared with the typical practice of choosing constant critical values given a subset of the null space. Satisfactory performance under prior-data conflict is also demonstrated. We further illustrate our method using a case study in immunology.
Affiliation(s)
- Tianyu Zhan
- Data and Statistical Sciences, AbbVie Inc, North Chicago, Illinois, USA
- Yiwang Zhou
- Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, Tennessee, USA
- Ziqian Geng
- Data and Statistical Sciences, AbbVie Inc, North Chicago, Illinois, USA
- Yihua Gu
- Data and Statistical Sciences, AbbVie Inc, North Chicago, Illinois, USA
- Jian Kang
- Department of Biostatistics, University of Michigan, Ann Arbor, Michigan, USA
- Li Wang
- Data and Statistical Sciences, AbbVie Inc, North Chicago, Illinois, USA
- Xiaohong Huang
- Data and Statistical Sciences, AbbVie Inc, North Chicago, Illinois, USA
- Elizabeth H Slate
- Department of Statistics, Florida State University, Tallahassee, Florida, USA
4
Evaluation of a Meta-Analysis of Ambient Air Quality as a Risk Factor for Asthma Exacerbation. Journal of Respiration 2021. [DOI: 10.3390/jor1030017]
Abstract
Background: An irreproducibility crisis currently afflicts a wide range of scientific disciplines, including public health and biomedical science. A study was undertaken to assess the reliability of a meta-analysis examining whether air quality components (carbon monoxide, particulate matter 10 µm and 2.5 µm (PM10 and PM2.5), sulfur dioxide, nitrogen dioxide and ozone) are risk factors for asthma exacerbation. Methods: The numbers of statistical tests and models were counted in 17 randomly selected base papers from the 87 used in the meta-analysis. Confidence intervals from all 87 base papers were converted to p-values, and p-value plots for each air component were constructed to evaluate the heterogeneity of the p-values. Results: The number of statistical tests possible in the 17 selected base papers was large (median = 15,360, interquartile range = 1,536-40,960) in comparison to the results presented. Each p-value plot showed a two-component mixture: some small p-values (< 0.001), while the other p-values appeared random (> 0.05). Given the potentially large numbers of statistical tests conducted in the 17 selected base papers, p-hacking cannot be ruled out as an explanation for the small p-values. Conclusions: Our interpretation is that the random p-values indicating null associations are the more plausible, and that the meta-analysis is unlikely to replicate in the absence of bias.
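The confidence-interval-to-p-value conversion used to build such p-value plots can be sketched as below. This is a generic reconstruction under the usual assumption that a risk-ratio estimate is log-normally distributed, not the authors' exact code:

```python
import math

def p_from_ci(estimate, lower, upper, z=1.959964):
    """Two-sided p-value recovered from a ratio estimate and its 95% CI.

    Assumes log(estimate) is approximately normal, so the standard error
    is the CI width on the log scale divided by 2*z (z for the CI level).
    """
    se = (math.log(upper) - math.log(lower)) / (2 * z)
    zstat = abs(math.log(estimate)) / se
    # two-sided normal tail probability
    return 2 * (1 - 0.5 * (1 + math.erf(zstat / math.sqrt(2))))
```

A p-value plot then sorts these recovered p-values by rank and looks for a two-component mixture: a near-uniform (null) component and a cluster of very small values.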
5
Gou J. Sample size optimization and initial allocation of the significance levels in group sequential trials with multiple endpoints. Biom J 2021; 64:301-311. [PMID: 33751645] [DOI: 10.1002/bimj.202000081]
Abstract
We consider multistage tests of multiple hypotheses under a flexible setting of calendar time and information fraction, focusing on the case of two hypotheses under testing. Explicit expressions for the statistical powers are derived. With a proof of existence and uniqueness of the solution, we develop a numerical method to search for the optimal sample size. The proposed method allows us to find a suitable allocation of the initial significance level, along with the minimum sample size, for group sequential designs with and without hierarchical structures among the different endpoints.
Affiliation(s)
- Jiangtao Gou
- Department of Mathematics and Statistics, Villanova University, Villanova, PA, USA
6
Burnett T, Mozgunov P, Pallmann P, Villar SS, Wheeler GM, Jaki T. Adding flexibility to clinical trial designs: an example-based guide to the practical use of adaptive designs. BMC Med 2020; 18:352. [PMID: 33208155] [PMCID: PMC7677786] [DOI: 10.1186/s12916-020-01808-2]
Abstract
Adaptive designs for clinical trials permit alterations to a study in response to accumulating data in order to make trials more flexible, ethical, and efficient. These benefits are achieved while preserving the integrity and validity of the trial, through the pre-specification and proper adjustment for the possible alterations during the course of the trial. Despite much research in the statistical literature highlighting the potential advantages of adaptive designs over traditional fixed designs, the uptake of such methods in clinical research has been slow. One major reason for this is that different adaptations to trial designs, as well as their advantages and limitations, remain unfamiliar to large parts of the clinical community. The aim of this paper is to clarify where adaptive designs can be used to address specific questions of scientific interest; we introduce the main features of adaptive designs and commonly used terminology, highlighting their utility and pitfalls, and illustrate their use through case studies of adaptive trials ranging from early-phase dose escalation to confirmatory phase III studies.
Affiliation(s)
- Thomas Burnett
- Department of Mathematics and Statistics, Lancaster University, Fylde College, Lancaster LA1 4YF, UK
- Pavel Mozgunov
- Department of Mathematics and Statistics, Lancaster University, Fylde College, Lancaster LA1 4YF, UK
- Philip Pallmann
- Centre for Trials Research, College of Biomedical & Life Sciences, Cardiff University, Cardiff, UK
- Sofia S. Villar
- MRC Biostatistics Unit, University of Cambridge School of Clinical Medicine, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge CB2 0SR, UK
- Graham M. Wheeler
- Cancer Research UK & UCL Cancer Trials Centre, University College London, 90 Tottenham Court Road, London W1T 4TJ, UK
- Thomas Jaki
- Department of Mathematics and Statistics, Lancaster University, Fylde College, Lancaster LA1 4YF, UK
- MRC Biostatistics Unit, University of Cambridge School of Clinical Medicine, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge CB2 0SR, UK
7
The Evolution of Master Protocol Clinical Trial Designs: A Systematic Literature Review. Clin Ther 2020; 42:1330-1360. [DOI: 10.1016/j.clinthera.2020.05.010]
8
Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, Mander AP, Weir CJ, Koenig F, Walton MK, Nicholl JP, Coates E, Biggs K, Hamasaki T, Proschan MA, Scott JA, Ando Y, Hind D, Altman DG. The adaptive designs CONSORT extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. Trials 2020; 21:528. [PMID: 32546273] [PMCID: PMC7298968] [DOI: 10.1186/s13063-020-04334-x]
Abstract
Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of its conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research, and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed the Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials, thereby improving the interpretability of their results and the reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits. To encourage wide dissemination, this article is freely accessible on the BMJ and Trials journal websites. "To maximise the benefit to society, you need to not just do research but do it well" (Douglas G Altman).
Affiliation(s)
- Munyaradzi Dimairo
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- James Wason
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- Institute of Health and Society, Newcastle University, Newcastle, UK
- Susan Todd
- Department of Mathematics and Statistics, University of Reading, Reading, UK
- Thomas Jaki
- Department of Mathematics and Statistics, Lancaster University, Lancaster, UK
- Steven A Julious
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Adrian P Mander
- Centre for Trials Research, Cardiff University, Cardiff, UK
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- Christopher J Weir
- Edinburgh Clinical Trials Unit, Usher Institute, University of Edinburgh, Edinburgh, UK
- Franz Koenig
- Centre for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Marc K Walton
- Janssen Pharmaceuticals, Titusville, New Jersey, USA
- Jon P Nicholl
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Elizabeth Coates
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Katie Biggs
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Michael A Proschan
- National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, USA
- John A Scott
- Division of Biostatistics in the Center for Biologics Evaluation and Research, Food and Drug Administration, Rockville, USA
- Yuki Ando
- Pharmaceuticals and Medical Devices Agency, Tokyo, Japan
- Daniel Hind
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Douglas G Altman
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
9
Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, Mander AP, Weir CJ, Koenig F, Walton MK, Nicholl JP, Coates E, Biggs K, Hamasaki T, Proschan MA, Scott JA, Ando Y, Hind D, Altman DG. The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ 2020; 369:m115. [PMID: 32554564] [PMCID: PMC7298567] [DOI: 10.1136/bmj.m115]
Abstract
Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of its conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research, and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed the Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials, thereby improving the interpretability of their results and the reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.
Affiliation(s)
- Munyaradzi Dimairo
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- James Wason
- MRC Biostatistics Unit, University of Cambridge, UK
- Institute of Health and Society, Newcastle University, UK
- Susan Todd
- Department of Mathematics and Statistics, University of Reading, UK
- Thomas Jaki
- Department of Mathematics and Statistics, Lancaster University, UK
- Steven A Julious
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Adrian P Mander
- Centre for Trials Research, Cardiff University, UK
- MRC Biostatistics Unit, University of Cambridge, UK
- Christopher J Weir
- Edinburgh Clinical Trials Unit, Usher Institute, University of Edinburgh, UK
- Franz Koenig
- Centre for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Austria
- Jon P Nicholl
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Elizabeth Coates
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Katie Biggs
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
- Michael A Proschan
- National Institute of Allergy and Infectious Diseases, National Institutes of Health, USA
- John A Scott
- Division of Biostatistics in the Center for Biologics Evaluation and Research, Food and Drug Administration, USA
- Yuki Ando
- Pharmaceuticals and Medical Devices Agency, Japan
- Daniel Hind
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, UK
10
Li Y, Wang G, Szychowski JM. Sample size re-estimation for confirmatory two-stage flexible multi-arm trial with normal outcomes. J Stat Comput Sim 2019. [DOI: 10.1080/00949655.2019.1675070]
Affiliation(s)
- Yan Li
- Department of Biostatistics, University of Alabama at Birmingham, Birmingham, AL, USA
- Guoqiao Wang
- Division of Biostatistics, Washington University School of Medicine, Saint Louis, MO, USA
- Jeff M. Szychowski
- Department of Biostatistics, University of Alabama at Birmingham, Birmingham, AL, USA
11
|
Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, Holmes J, Mander AP, Odondi L, Sydes MR, Villar SS, Wason JMS, Weir CJ, Wheeler GM, Yap C, Jaki T. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med 2018; 16:29. [PMID: 29490655 PMCID: PMC5830330 DOI: 10.1186/s12916-018-1017-7] [Citation(s) in RCA: 361] [Impact Index Per Article: 60.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Accepted: 01/30/2018] [Indexed: 12/11/2022] Open
Abstract
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial's course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design, since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one contributing factor is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label for certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial are undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
Affiliation(s)
- Philip Pallmann
- Department of Mathematics & Statistics, Lancaster University, Lancaster LA1 4YF, UK
- Babak Choodari-Oskooei
- MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, University College London, London, UK
- Laura Flight
- Medical Statistics Group, University of Sheffield, Sheffield, UK
- Lisa V. Hampson
- Department of Mathematics & Statistics, Lancaster University, Lancaster LA1 4YF, UK
- Statistical Innovation Group, Advanced Analytics Centre, AstraZeneca, Cambridge, UK
- Jane Holmes
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Lang’o Odondi
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Matthew R. Sydes
- MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, University College London, London, UK
- Sofía S. Villar
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- James M. S. Wason
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- Institute of Health and Society, Newcastle University, Newcastle, UK
- Christopher J. Weir
- Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, UK
- Graham M. Wheeler
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- Cancer Research UK & UCL Cancer Trials Centre, University College London, London, UK
- Christina Yap
- Cancer Research UK Clinical Trials Unit, University of Birmingham, Birmingham, UK
- Thomas Jaki
- Department of Mathematics & Statistics, Lancaster University, Lancaster LA1 4YF, UK
12
Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead. Obstet Gynecol 2017; 129:249-264. [PMID: 28079771] [DOI: 10.1097/aog.0000000000001865]
Abstract
Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis, while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data with dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of whether it is called advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementing big data research methodologies, however, potential researchers and reviewers should be aware of the strengths, strategies, study design methods, and potential pitfalls. Examination of the big data research examples contained in this article provides insight into both the potential and the limitations of this data science revolution, and practical pathways for its useful implementation.
13
Graf AC, Gutjahr G, Brannath W. Precision of maximum likelihood estimation in adaptive designs. Stat Med 2016; 35:922-41. [PMID: 26459506] [PMCID: PMC4755201] [DOI: 10.1002/sim.6761]
Abstract
There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type 1 error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where adaptation rules are not fixed in advance, so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules, for example, a maximum sample size. We consider designs that start with k treatment groups and a common control, where selection of a single treatment and control is performed at the interim analysis, with the possibility to reassess each of the sample sizes. We consider the case of unlimited sample size reassessments as well as several realistically restricted sample size reassessment rules.
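The selection bias of the MLE discussed above is easy to demonstrate by simulation. The sketch below is an illustration of the phenomenon, not the paper's search for worst-case adaptation rules: the best of k experimental arms is selected at an interim analysis, both stages are pooled into the naive MLE, and even when all arms share the same true mean, the estimate for the selected arm is biased upward.

```python
import numpy as np

def selection_bias(k=4, n1=50, n2=50, mu=0.0, sims=20000, seed=1):
    """Monte Carlo estimate of the bias of the naive MLE after interim selection.

    Stage 1: k arms with common true mean mu and unit-variance observations;
    the arm with the largest stage-1 sample mean is carried forward.
    Stage 2: n2 further observations on the selected arm.
    The MLE pools both stages with sample-size weights.
    """
    rng = np.random.default_rng(seed)
    stage1 = rng.normal(mu, 1.0 / np.sqrt(n1), size=(sims, k))  # stage-1 sample means
    selected = stage1.max(axis=1)                               # mean of the selected arm
    stage2 = rng.normal(mu, 1.0 / np.sqrt(n2), size=sims)       # its stage-2 sample mean
    mle = (n1 * selected + n2 * stage2) / (n1 + n2)
    return mle.mean() - mu
```

With k = 1 there is no selection and the bias vanishes; with k > 1 the bias is roughly the stage-1 weight times the expected maximum of k independent stage-1 means, so enlarging stage 2 dilutes it, which is one reason realistic sample size constraints reduce the worst-case bias.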
Affiliation(s)
- Alexandra Christine Graf
- Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Georg Gutjahr
- Competence Center for Clinical Trials, University of Bremen, Linzer Strasse 4, 28359 Bremen, Germany
- Werner Brannath
- Competence Center for Clinical Trials, University of Bremen, Linzer Strasse 4, 28359 Bremen, Germany
14
Bauer P, Bretz F, Dragalin V, König F, Wassmer G. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med 2016; 35:325-47. [PMID: 25778935] [PMCID: PMC6680191] [DOI: 10.1002/sim.6472]
Abstract
'Multistage testing with adaptive designs' was the title of an article by Peter Bauer that appeared in 1989 in the German journal Biometrie und Informatik in Medizin und Biologie. The journal no longer exists, but the methodology has found widespread interest in the scientific community over the past 25 years. The use of such multistage adaptive designs sparked controversy from the beginning, especially after the publication by Bauer and Köhne in 1994 in Biometrics: broad enthusiasm about potential applications of such designs faced critical positions regarding their statistical efficiency. Despite, or possibly because of, this controversy, the methodology and its areas of application grew steadily over the years, with significant contributions from statisticians working in academia, industry, and agencies around the world. In the meantime, such adaptive designs have become the subject of two major regulatory guidance documents in the US and Europe, and the field is still evolving. Developments are particularly noteworthy in the most important applications of adaptive designs, including sample size reassessment, treatment selection procedures, and population enrichment designs. In this article, we summarize the developments of the past 25 years from different perspectives. We provide a historical overview of the early days, review the key methodological concepts, and summarize regulatory and industry perspectives on such designs. We then illustrate the application of adaptive designs with three case studies, including unblinded sample size reassessment, adaptive treatment selection, and adaptive endpoint selection. We also discuss the availability of software for evaluating and performing such designs. We conclude with a critical review of how expectations from the beginning were fulfilled and, if not, discuss potential reasons why.
Affiliation(s)
- Peter Bauer
- Section of Medical Statistics, Medical University of Vienna, Spitalgasse 23, 1090 Wien, Austria
- Frank Bretz
- Novartis Pharma AG, Lichtstrasse 35, 4002 Basel, Switzerland
- Shanghai University of Finance and Economics, China
- Franz König
- Section of Medical Statistics, Medical University of Vienna, Spitalgasse 23, 1090 Wien, Austria
- Gernot Wassmer
- Aptiv Solutions, an ICON plc company, Robert-Perthel-Str. 77a, 50739 Köln, Germany
- Institute for Medical Statistics, Informatics and Epidemiology, University of Cologne, 50924 Köln, Germany
15
Zehetmayer S, Graf AC, Posch M. Sample size reassessment for a two-stage design controlling the false discovery rate. Stat Appl Genet Mol Biol 2015; 14:429-42. [PMID: 26461844 PMCID: PMC4789494 DOI: 10.1515/sagmb-2014-0025] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Sample size calculations for gene expression microarray and NGS RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis in which these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specified overall power. The proposed procedure attains the target power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies when, in the planning phase, there is high uncertainty regarding the expected effect sizes and variability.
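A self-contained sketch of a two-stage design in this spirit, under simplified assumptions: one z-statistic per hypothesis, a Storey-type interim estimate of the proportion of true nulls, a crude interim effect-size estimate, a second-stage sample size targeting a power goal, and Benjamini-Hochberg on inverse-normal-combined p-values. The plug-in estimators and the power target are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

m, m1, n1 = 2000, 200, 15            # hypotheses, true effects, stage-1 n
theta = np.r_[np.full(m1, 0.25), np.zeros(m - m1)]  # per-sample mean shifts

# ---- Stage 1: one z-statistic per hypothesis ----
z1 = theta * np.sqrt(n1) + rng.standard_normal(m)
p1 = stats.norm.sf(z1)

# Interim plug-ins (illustrative): Storey-type pi0 estimate, and a crude
# effect size read off the largest interim z-statistics.
lam = 0.5
pi0_hat = min(1.0, np.mean(p1 > lam) / (1 - lam))
m1_hat = max(int(round(m * (1 - pi0_hat))), 1)
eff_hat = max(np.sort(z1)[-m1_hat:].mean() / np.sqrt(n1), 0.05)

# Choose n2 to target ~80% marginal power per test at a strict alpha
za, zb = stats.norm.isf(0.001), stats.norm.isf(0.2)
n2 = max(int(np.ceil(((za + zb) / eff_hat) ** 2)) - n1, n1)

# ---- Stage 2, inverse-normal combination with preplanned equal weights ----
z2 = theta * np.sqrt(n2) + rng.standard_normal(m)
p = stats.norm.sf((z1 + z2) / np.sqrt(2))

# Benjamini-Hochberg step-up at level q on the combined p-values
q = 0.05
order = np.argsort(p)
ranks = np.arange(1, m + 1)
k = int(np.max(np.where(p[order] <= q * ranks / m, ranks, 0)))
rejected = order[:k]
fdp = float(np.mean(rejected >= m1)) if k else 0.0
print(f"pi0_hat={pi0_hat:.2f}, n2={n2}, rejections={k}, FDP={fdp:.3f}")
```

Using fixed, preplanned combination weights (rather than weights depending on the realized n2) is what keeps the per-hypothesis combination test valid despite the data-dependent sample size.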
Affiliation(s)
- Sonja Zehetmayer
- Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, Vienna, Austria
- Alexandra C. Graf
- Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, Vienna, Austria
- Martin Posch
- Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, Vienna, Austria
16
Stallard N, Kunz CU, Todd S, Parsons N, Friede T. Flexible selection of a single treatment incorporating short-term endpoint information in a phase II/III clinical trial. Stat Med 2015; 34:3104-15. [PMID: 26112909 PMCID: PMC4745001 DOI: 10.1002/sim.6567] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2014] [Revised: 05/11/2015] [Accepted: 06/01/2015] [Indexed: 11/07/2022]
Abstract
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease.
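The core quantity in this construction is the conditional error of the preplanned test. A minimal sketch, assuming a simple one-sided z-test with interim information fraction t: the conditional error function gives the null probability of final rejection given the interim statistic, and taking its maximum over the candidate arms yields a conditional level that is preserved whichever treatment is actually selected (the spirit of the proposal; the interim z-values and t below are made-up inputs).

```python
import numpy as np
from scipy import stats

def cond_error(z1, t, alpha=0.025):
    """Conditional probability, under H0, that the preplanned final
    one-sided z-test (critical value z_{1-alpha}) rejects, given the
    interim statistic z1 observed at information fraction t."""
    c = stats.norm.isf(alpha)
    return stats.norm.sf((c - np.sqrt(t) * z1) / np.sqrt(1.0 - t))

# Interim z-statistics of three experimental arms vs control at t = 0.5
z1 = np.array([1.1, 0.4, 1.8])
t = 0.5

# Worst-case selection: budget the second stage at the conditional error
# of the arm that maximizes it, so the familywise level holds no matter
# which arm the (possibly unspecified) selection rule picks.
A_max = cond_error(z1, t).max()
print(A_max)  # the selected arm's second-stage test runs at level A_max
```

A larger interim statistic yields a larger conditional error, so the maximum is attained by the currently best-looking arm; a selection rule that instead picks on short-term endpoint data can never exceed this budget.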
Affiliation(s)
- Nigel Stallard
- Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, U.K.
- Cornelia Ursula Kunz
- Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, U.K.
- Susan Todd
- Department of Mathematics and Statistics, University of Reading, Reading, U.K.
- Nicholas Parsons
- Statistics and Epidemiology, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, U.K.
- Tim Friede
- Department of Medical Statistics, University Medical Center, Göttingen, Germany