1. Damone EM, Zhu J, Pang H, Li X, Zhao Y, Kwiatkowski E, Carey LA, Ibrahim JG. Incorporating external controls in the design of randomized clinical trials: a case study in solid tumors. BMC Med Res Methodol 2024; 24:264. PMID: 39487399; PMCID: PMC11529009; DOI: 10.1186/s12874-024-02383-3.
Abstract
BACKGROUND Interest in, and the need for, historical external control data has grown when considering the design of future clinical trials. Hybrid control designs can be more efficient, achieving the same power with fewer patients and limited resources. The literature is sparse on appropriate statistical methods that can account for the differences between historical external controls and the control patients in a study. In this article, we illustrate the analysis framework for a clinical trial in which a hybrid control design is used after determining that a conventional RCT may not be feasible. METHODS We utilize two previously completed RCTs in nonsquamous NSCLC and a nationwide electronic health record-derived de-identified database as examples, compare 5 analysis methods on each trial, and conduct a set of simulations to determine the operating characteristics of such designs. RESULTS In single-trial estimation, the Case Weighted Adaptive Power Prior provided estimated treatment hazard ratios consistent with the original trial's conclusions, with narrower confidence intervals. The simulation studies showed that the Case Weighted Adaptive Power Prior achieved the highest power (with well-controlled type-1 error) across all 5 methods at a consistent study sample size. CONCLUSIONS By following the proposed hybrid control framework, one can design a hybrid control trial transparently, accounting for differences between control groups, while controlling type-1 error and still achieving efficiency gains from the additional contribution of external controls.
Affiliation(s)
- Emily M Damone
- Department of Biostatistics, University of North Carolina, 3109 McGavran-Greenberg Hall, CB #7420, Chapel Hill, NC, 27599, USA
- Jiawen Zhu
- Department of Data Science, Product Development, Genentech, Inc, South San Francisco, CA, USA
- Herbert Pang
- Department of Data Science, Product Development, Genentech, Inc, South San Francisco, CA, USA
- Xiao Li
- Department of Personalized Healthcare, Product Development, Genentech, Inc, South San Francisco, CA, USA
- Yinqi Zhao
- Department of Data Science, Product Development, Genentech, Inc, South San Francisco, CA, USA
- Evan Kwiatkowski
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lisa A Carey
- Division of Oncology, University of North Carolina, Chapel Hill, NC, USA
- Joseph G Ibrahim
- Department of Biostatistics, University of North Carolina, 3109 McGavran-Greenberg Hall, CB #7420, Chapel Hill, NC, 27599, USA.
2. Zhou Y, Yao M, Mei F, Ma Y, Huan J, Zou K, Li L, Sun X. Integrating randomized controlled trials and non-randomized studies of interventions to assess the effect of rare events: a Bayesian re-analysis of two meta-analyses. BMC Med Res Methodol 2024; 24:219. PMID: 39333867; PMCID: PMC11430109; DOI: 10.1186/s12874-024-02347-7.
Abstract
BACKGROUND There is a growing trend to include non-randomised studies of interventions (NRSIs) in rare events meta-analyses of randomised controlled trials (RCTs) to complement the evidence from the latter. An important consideration when combining RCTs and NRSIs is how to address potential bias and down-weighting of NRSIs in the pooled estimates. The aim of this study is to explore the use of a power prior approach in a Bayesian framework for integrating RCTs and NRSIs to assess the effect of rare events. METHODS We proposed a method of specifying the down-weighting factor based on judgments of the relative magnitude (no information, and low, moderate, serious and critical risk of bias) of the overall risk of bias for each NRSI using the ROBINS-I tool. The methods were illustrated using two meta-analyses, with particular interest in the risk of diabetic ketoacidosis (DKA) in patients using sodium/glucose cotransporter-2 (SGLT-2) inhibitors compared with active comparators, and the association between low-dose methotrexate exposure and melanoma. RESULTS No significant results were observed for these two analyses when the data from RCTs only were pooled (risk of DKA: OR = 0.82, 95% confidence interval (CI): 0.25-2.69; risk of melanoma: OR = 1.94, 95%CI: 0.72-5.27). When RCTs and NRSIs were directly combined without distinction in the same meta-analysis, both meta-analyses showed significant results (risk of DKA: OR = 1.50, 95%CI: 1.11-2.03; risk of melanoma: OR = 1.16, 95%CI: 1.08-1.24). Using Bayesian analysis to account for NRSI bias, there was a 90% probability of an increased risk of DKA in users receiving SGLT-2 inhibitors and a 91% probability of an increased risk of melanoma in patients using low-dose methotrexate. CONCLUSIONS Our study showed that including NRSIs in a meta-analysis of RCTs for rare events could increase the certainty and comprehensiveness of the evidence. The estimates obtained from NRSIs are generally considered to be biased, and the possible influence of NRSIs on the certainty of the combined evidence needs to be carefully investigated.
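The core device in this approach is a power prior that raises the NRSI likelihood to a down-weighting factor chosen from the ROBINS-I risk-of-bias judgment. The sketch below is a deliberately simplified, hypothetical illustration rather than the authors' implementation: a single two-arm comparison with binomial event counts, a conjugate Beta power prior built from pooled NRSI data, and a Monte Carlo posterior for the odds ratio. All study counts and the ROBINS-I-to-weight mapping are invented for illustration.
```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical pooled rare-event counts (events, patients) -- illustrative only.
rct_trt, rct_ctl = (4, 1200), (2, 1180)        # randomised trials
nrsi_trt, nrsi_ctl = (60, 25000), (25, 24000)  # non-randomised studies

# Illustrative mapping from the ROBINS-I overall judgment to the power-prior weight.
robins_weight = {"low": 0.8, "moderate": 0.5, "serious": 0.2, "critical": 0.0}
alpha0 = robins_weight["moderate"]

def posterior_draws(rct, nrsi, w, n_draws=100_000, a=0.5, b=0.5):
    """Beta posterior for an event probability: RCT data enter with weight 1,
    NRSI data are discounted by the power-prior weight w."""
    events = rct[0] + w * nrsi[0]
    nonevents = (rct[1] - rct[0]) + w * (nrsi[1] - nrsi[0])
    return rng.beta(a + events, b + nonevents, size=n_draws)

p_trt = posterior_draws(rct_trt, nrsi_trt, alpha0)
p_ctl = posterior_draws(rct_ctl, nrsi_ctl, alpha0)

odds_ratio = (p_trt / (1 - p_trt)) / (p_ctl / (1 - p_ctl))
lo, hi = np.percentile(odds_ratio, [2.5, 97.5])
print(f"weight={alpha0:.1f}  median OR={np.median(odds_ratio):.2f}  "
      f"95% CrI=({lo:.2f}, {hi:.2f})  P(OR>1)={np.mean(odds_ratio > 1):.2f}")
```
Setting the weight to 0 recovers the RCT-only analysis, while a weight of 1 treats the NRSI patients as fully exchangeable with the trial data; the risk-of-bias judgment sits between these extremes.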
Affiliation(s)
- Yun Zhou
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- President & Dean's Office, West China Hospital, Sichuan University, Chengdu, China
- Minghong Yao
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China
- Fan Mei
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China
- Yu Ma
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China
- Jiayidaer Huan
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China
- Kang Zou
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China
- Ling Li
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China.
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China.
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China.
- Xin Sun
- Department of Neurosurgery and Chinese Evidence-Based Medicine Center and Cochrane China, Center and MAGIC China Center, West China Hospital, Sichuan University, 37 Guo Xue Xiang, Chengdu, Sichuan, China.
- NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, West China Hospital, Sichuan University, Chengdu, China.
- Sichuan Center of Technology Innovation for Real World Data, West China Hospital, Sichuan University, Chengdu, China.
- Department of Epidemiology and Biostatistics, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, China.
3. Bullement A, Edmondson-Jones M, Guyot P, Welton NJ, Baio G, Stevenson M, Latimer NR. MPES-R: Multi-Parameter Evidence Synthesis in R for Survival Extrapolation-A Tutorial. Pharmacoeconomics 2024. PMID: 39207595; DOI: 10.1007/s40273-024-01425-4.
Abstract
Survival extrapolation often plays an important role in health technology assessment (HTA), and there are a range of different approaches available. Approaches that can leverage external evidence (i.e. data or information collected outside the main data source of interest) may be helpful, given the extent of uncertainty often present when determining a suitable survival extrapolation. One of these methods is the multi-parameter evidence synthesis (MPES) approach, first proposed for use in HTA by Guyot et al., and more recently by Jackson. While MPES has potential benefits over conventional extrapolation approaches (such as simple or flexible parametric models), it is more computationally complex and requires use of specialist software. This tutorial presents an introduction to MPES for HTA, alongside a user-friendly, publicly available operationalisation of Guyot's original MPES that can be executed using the statistical software package R. Through two case studies, both Guyot's and Jackson's MPES approaches are explored, along with sensitivity analyses relevant to HTA. Finally, the discussion section of the tutorial details important considerations for analysts considering use of an MPES approach, along with potential further developments. MPES has not been used often in HTA, and so there are limited examples of how it has been used and perceived. However, this tutorial may aid future research efforts exploring the use of MPES further.
Affiliation(s)
- Ash Bullement
- School of Medicine and Population Health, Sheffield Centre for Health and Related Research, University of Sheffield, Sheffield, UK.
- Delta Hat, Nottingham, UK.
- Mark Edmondson-Jones
- Delta Hat, Nottingham, UK
- Population Health Sciences, University of Leicester, Leicester, UK
- Nicky J Welton
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Gianluca Baio
- Department of Statistical Science, University College London, London, UK
- Matthew Stevenson
- School of Medicine and Population Health, Sheffield Centre for Health and Related Research, University of Sheffield, Sheffield, UK
- Nicholas R Latimer
- School of Medicine and Population Health, Sheffield Centre for Health and Related Research, University of Sheffield, Sheffield, UK
- Delta Hat, Nottingham, UK
4. Liao JJ, Asatiani E, Liu Q, Hou K. Three steps toward dose optimization for oncology dose finding. Contemp Clin Trials Commun 2024; 40:101329. PMID: 39036557; PMCID: PMC11259781; DOI: 10.1016/j.conctc.2024.101329.
Abstract
Background Traditional dose selection for oncology registration trials typically employs a one- or two-step single maximum tolerated dose (MTD) approach. However, this approach may not be appropriate for molecularly targeted therapy, which tends to have toxicity profiles that are markedly different from those of cytotoxic agents. The US Food and Drug Administration launched Project Optimus to reform dose optimization in oncology drug development and has recently released a related guidance for industry. Methods In response to these initiatives, we propose a "three steps toward dose optimization" procedure and discuss the details of dose-optimization designs and analyses. The first step is dose escalation to identify the MTD or maximum administered dose with an efficient hybrid design, which offers good overdose control and increases the likelihood that the recommended MTD is close to the true MTD. The second step is the selection of appropriate recommended doses for expansion (RDEs), based on all available data, including emerging safety, pharmacokinetics, pharmacodynamics, and other biomarker information. The third step is dose optimization, which uses data from a randomized fractional factorial design with multiple RDEs explored in multiple tumor cohorts during the expansion phase to ensure that a feasible dose is selected for registration trials and that the tumor type most sensitive to the investigative treatment is identified. Conclusion We believe using this three-step approach can increase the likelihood of selecting an optimal dose for a registration trial that demonstrates a balanced safety profile while retaining much of the efficacy observed at the MTD.
Affiliation(s)
- Jason J.Z. Liao
- Incyte Corporation, 1801 Augustine Cut-off, Wilmington, DE, 19803, United States
- Ekaterine Asatiani
- Incyte Biosciences International Sàrl, Rue Docteur-Yersin 12, 1110, Morges, Switzerland
- Qingyang Liu
- Incyte Corporation, 1801 Augustine Cut-off, Wilmington, DE, 19803, United States
- Kevin Hou
- Incyte Corporation, 1801 Augustine Cut-off, Wilmington, DE, 19803, United States
5. Fougeray R, Vidot L, Ratta M, Teng Z, Skanji D, Saint-Hilary G. Futility Interim Analysis Based on Probability of Success Using a Surrogate Endpoint. Pharm Stat 2024. PMID: 38956450; DOI: 10.1002/pst.2410.
Abstract
In clinical trials with time-to-event data, the evaluation of treatment efficacy can be a long and complex process, especially when considering long-term primary endpoints. Using surrogate endpoints that correlate with the primary endpoint has become common practice to accelerate decision-making. Moreover, the ethical need to minimize sample size and the practical need to optimize available resources have encouraged the scientific community to develop methodologies that leverage historical data. Relying on the general theory of group sequential design and using a Bayesian framework, the methodology described in this paper exploits a documented historical relationship between a clinical "final" endpoint and a surrogate endpoint to build an informative prior for the primary endpoint, using surrogate data from an early interim analysis of the clinical trial. The predictive probability of success of the trial is then used to define a futility-stopping rule. The methodology demonstrates substantial enhancements in trial operating characteristics when there is good agreement between current and historical data. Furthermore, incorporating a robust approach that combines the surrogate prior with a vague component mitigates the impact of minor prior-data conflicts while maintaining acceptable performance even in the presence of significant prior-data conflicts. The proposed methodology was applied to design a Phase III clinical trial in metastatic colorectal cancer, with overall survival as the primary endpoint and progression-free survival as the surrogate endpoint.
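The quantity driving the futility rule is a predictive probability of success (PPoS) at the final analysis, where the prior on the long-term endpoint is informed by the surrogate observed at the interim. The sketch below is a minimal, hypothetical version using a normal approximation for the log hazard ratio and an assumed linear surrogate relationship; all numbers, the relationship parameters, and the futility threshold are illustrative assumptions, not the authors' model.
```python
import numpy as np
from scipy.stats import norm

# --- Hypothetical inputs (illustrative only) ---------------------------------
loghr_pfs_interim = np.log(0.80)   # surrogate (PFS) log HR observed at interim
se_pfs_interim    = 0.15           # its standard error
# Assumed historical surrogate relationship: logHR_OS ~ a + b * logHR_PFS,
# with extrapolation uncertainty tau around that line.
a, b, tau = 0.02, 0.85, 0.10

# Surrogate-informed prior for the OS log hazard ratio
m = a + b * loghr_pfs_interim                    # prior mean for logHR_OS
v = (b * se_pfs_interim) ** 2 + tau ** 2         # prior variance

# Final OS analysis: planned number of events and a one-sided 2.5% test
d_final = 350
s2_final = 4.0 / d_final                         # approx. variance of the final logHR estimate
crit = -norm.ppf(0.975) * np.sqrt(s2_final)      # success if the estimate falls below crit

# Predictive distribution of the final estimate is N(m, v + s2_final)
ppos = norm.cdf((crit - m) / np.sqrt(v + s2_final))
print(f"Predictive probability of success at the final OS analysis: {ppos:.2f}")
if ppos < 0.10:                                  # example futility threshold
    print("Futility rule triggered: stop the trial early.")
```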
Affiliation(s)
- Ronan Fougeray
- Institut de Recherches Internationales Servier (IRIS), Gif-sur-Yvette, France
- Loïck Vidot
- Institut de Recherches Internationales Servier (IRIS), Gif-sur-Yvette, France
- Donia Skanji
- Institut de Recherches Internationales Servier (IRIS), Gif-sur-Yvette, France
6. Chen C, Han P, Chen S, Shardell M, Qin J. Integrating external summary information in the presence of prior probability shift: an application to assessing essential hypertension. Biometrics 2024; 80:ujae090. PMID: 39248121; PMCID: PMC11381951; DOI: 10.1093/biomtc/ujae090.
Abstract
Recent years have witnessed a rise in the popularity of information integration without sharing of raw data. By leveraging and incorporating summary information from external sources, internal studies can achieve enhanced estimation efficiency and prediction accuracy. However, a noteworthy challenge in utilizing summary-level information is accommodating the inherent heterogeneity across diverse data sources. In this study, we delve into the issue of prior probability shift between two cohorts, wherein the difference between the two data distributions depends on the outcome. We introduce a novel semi-parametric constrained optimization-based approach to integrate information within this framework, which has not been extensively explored in existing literature. Our proposed method tackles the prior probability shift by introducing an outcome-dependent selection function and effectively addresses the estimation uncertainty associated with summary information from the external source. Our approach facilitates valid inference even in the absence of a known variance-covariance estimate from the external source. Through extensive simulation studies, we observe the superiority of our method over existing ones, showcasing minimal estimation bias and reduced variance for both binary and continuous outcomes. We further demonstrate the utility of our method through its application in investigating risk factors related to essential hypertension, where reduced estimation variability is observed after integrating summary information from an external data source.
Affiliation(s)
- Chixiang Chen
- Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, 21201, United States
- Department of Neurosurgery, University of Maryland School of Medicine, Baltimore, 21201, United States
- University of Maryland Institute for Health Computing, Bethesda, MD 20852, United States
- Peisong Han
- Biostatistics Innovation Group, Gilead Sciences, Foster City, CA 94404, United States
- Shuo Chen
- Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, 21201, United States
- Michelle Shardell
- Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, 21201, United States
- Institute of Genome Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, United States
- Jing Qin
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, National Institute of Health, Bethesda, MD 20892, United States
7. Alt EM, Chang X, Jiang X, Liu Q, Mo M, Xia HA, Ibrahim JG. LEAP: the latent exchangeability prior for borrowing information from historical data. Biometrics 2024; 80:ujae083. PMID: 39329230; DOI: 10.1093/biomtc/ujae083.
Abstract
It is becoming increasingly popular to elicit informative priors on the basis of historical data. Popular existing priors, including the power prior, commensurate prior, and robust meta-analytic predictive prior, provide blanket discounting. Thus, if only a subset of participants in the historical data are exchangeable with the current data, these priors may not be appropriate. In order to combat this issue, propensity score approaches have been proposed. However, these approaches are only concerned with the covariate distribution, whereas exchangeability is typically assessed with parameters pertaining to the outcome. In this paper, we introduce the latent exchangeability prior (LEAP), where observations in the historical data are classified into exchangeable and non-exchangeable groups. The LEAP discounts the historical data by identifying the most relevant subjects from the historical data. We compare our proposed approach against alternative approaches in simulations and present a case study using our proposed prior to augment a control arm in a phase 3 clinical trial in plaque psoriasis with an unbalanced randomization scheme.
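The central idea of the LEAP is that each historical subject carries a latent indicator of whether it is exchangeable with the current data, so borrowing concentrates on the relevant subset rather than discounting everyone equally. The Gibbs sampler below is a minimal toy rendering of that idea for a normal outcome with known variance; it is not the authors' implementation (which covers general models and the plaque psoriasis case study), and all data and hyperparameters are invented.
```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: a current arm plus a historical arm in which only some subjects
# come from the same population as the current data (illustrative only).
y_cur = rng.normal(0.0, 1.0, size=40)
y_hist = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
sigma2 = 1.0                      # known outcome variance for simplicity

mu, mu_ne, p = 0.0, 0.0, 0.5      # current mean, non-exchangeable mean, mixing weight
prior_var = 100.0                 # vague normal priors on both means

def draw_mean(data_sum, n):
    """Conjugate normal draw for a mean given a N(0, prior_var) prior."""
    post_var = 1.0 / (n / sigma2 + 1.0 / prior_var)
    return rng.normal(post_var * data_sum / sigma2, np.sqrt(post_var))

keep_mu, keep_nex = [], []
for it in range(4000):
    # 1) latent exchangeability indicator for each historical subject
    like_ex = p * np.exp(-0.5 * (y_hist - mu) ** 2 / sigma2)
    like_ne = (1 - p) * np.exp(-0.5 * (y_hist - mu_ne) ** 2 / sigma2)
    z = rng.random(y_hist.size) < like_ex / (like_ex + like_ne)
    # 2) means, using current data plus the historical subjects flagged exchangeable
    mu = draw_mean(y_cur.sum() + y_hist[z].sum(), y_cur.size + z.sum())
    mu_ne = draw_mean(y_hist[~z].sum(), (~z).sum())
    # 3) mixing weight
    p = rng.beta(1 + z.sum(), 1 + (~z).sum())
    if it >= 1000:
        keep_mu.append(mu)
        keep_nex.append(z.sum())

print(f"posterior mean of current-arm mean: {np.mean(keep_mu):.2f}")
print(f"average number of historical subjects borrowed: {np.mean(keep_nex):.1f} / {y_hist.size}")
```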
Affiliation(s)
- Ethan M Alt
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27516, United States
- Xiuya Chang
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27516, United States
- Xun Jiang
- Amgen Inc., One Amgen Center Drive, Thousand Oaks, CA 91320, USA
- Qing Liu
- Amgen Inc., One Amgen Center Drive, Thousand Oaks, CA 91320, USA
- May Mo
- Amgen Inc., One Amgen Center Drive, Thousand Oaks, CA 91320, USA
- Hong Amy Xia
- Amgen Inc., One Amgen Center Drive, Thousand Oaks, CA 91320, USA
- Joseph G Ibrahim
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27516, United States
8. Scott D, Lewin A. Discussion on "LEAP: the latent exchangeability prior for borrowing information from historical data" by Ethan M. Alt, Xiuya Chang, Xun Jiang, Qing Liu, May Mo, H. Amy Xia, and Joseph G. Ibrahim. Biometrics 2024; 80:ujae085. PMID: 39329231; DOI: 10.1093/biomtc/ujae085.
Abstract
In the following discussion, we describe the various assumptions of exchangeability that have been made in the context of Bayesian borrowing and related models. In this context, we highlight the difficulty of dynamic Bayesian borrowing under the assumption that individuals in the historical data are exchangeable with the current data, and thus the strengths and novel features of the latent exchangeability prior. As borrowing methods are popular within clinical trials to augment the control arm, some potential challenges with applying the approach in this setting are identified.
Affiliation(s)
- Darren Scott
- AstraZeneca, Biomedical Campus, 1 Francis Crick Avenue, Cambridge CB2 0AA, United Kingdom
- Alex Lewin
- London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, United Kingdom
9. Zhang J, Lin R, Chen X, Yan F. Adaptive Bayesian information borrowing methods for finding and optimizing subgroup-specific doses. Clin Trials 2024; 21:308-321. PMID: 38243401; PMCID: PMC11132956; DOI: 10.1177/17407745231212193.
Abstract
In precision oncology, integrating multiple cancer patient subgroups into a single master protocol allows for the simultaneous assessment of treatment effects in these subgroups and promotes the sharing of information between them, ultimately reducing sample sizes and costs and enhancing scientific validity. However, the safety and efficacy of these therapies may vary across different subgroups, resulting in heterogeneous outcomes. Therefore, identifying subgroup-specific optimal doses in early-phase clinical trials is crucial for the development of future trials. In this article, we review various innovative Bayesian information-borrowing strategies that aim to determine and optimize subgroup-specific doses. Specifically, we discuss Bayesian hierarchical modeling, Bayesian clustering, Bayesian model averaging or selection, pairwise borrowing, and other relevant approaches. By employing these Bayesian information-borrowing methods, investigators can gain a better understanding of the intricate relationships between dose, toxicity, and efficacy in each subgroup. This increased understanding significantly improves the chances of identifying an optimal dose tailored to each specific subgroup. Furthermore, we present several practical recommendations to guide the design of future early-phase oncology trials involving multiple subgroups when using the Bayesian information-borrowing methods.
Affiliation(s)
- Jingyi Zhang
- Research Center of Biostatistics and Computational Pharmacy, China Pharmaceutical University, Nanjing, China
- Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Xin Chen
- Research Center of Biostatistics and Computational Pharmacy, China Pharmaceutical University, Nanjing, China
- Fangrong Yan
- Research Center of Biostatistics and Computational Pharmacy, China Pharmaceutical University, Nanjing, China
10. Wei W, Blaha O, Esserman D, Zelterman D, Kane M, Liu R, Lin J. A Bayesian platform trial design with hybrid control based on multisource exchangeability modelling. Stat Med 2024; 43:2439-2451. PMID: 38594809; PMCID: PMC11325877; DOI: 10.1002/sim.10077.
Abstract
Enrolling patients to the standard of care (SOC) arm in randomized clinical trials, especially for rare diseases, can be very challenging due to the lack of resources, restricted patient population availability, and ethical considerations. As the therapeutic effect for the SOC is often well documented in historical trials, we propose a Bayesian platform trial design with hybrid control based on the multisource exchangeability modelling (MEM) framework to harness historical control data. The MEM approach provides a computationally efficient method to formally evaluate the exchangeability of study outcomes between different data sources and allows us to make better informed data borrowing decisions based on the exchangeability between historical and concurrent data. We conduct extensive simulation studies to evaluate the proposed hybrid design. We demonstrate the proposed design leads to significant sample size reduction for the internal control arm and borrows more information compared to competing Bayesian approaches when historical and internal data are compatible.
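At the heart of the MEM framework is a posterior weight on each configuration of external sources that are, or are not, exchangeable with the internal control arm; borrowing is then a model average over those configurations. The sketch below is a hypothetical two-source illustration with normal summary statistics and known variances, not the platform trial design described above; all numbers and priors are invented.
```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical summary data (illustrative only): internal control arm plus two
# historical control sources, each summarised by a mean response and its variance.
ybar_int, var_int = 0.10, 0.04           # internal control arm
hist = [(0.12, 0.02), (0.35, 0.03)]       # (mean, variance) of each historical source

m0, v0 = 0.0, 1.0                         # vague normal prior on every mean
prior_model_prob = 0.25                   # equal prior weight on each of the 4 configurations

def log_marginal(config):
    """Marginal likelihood when sources flagged in `config` share the internal mean
    and the remaining sources have independent means."""
    idx = [i for i, c in enumerate(config) if c]
    means = [ybar_int] + [hist[i][0] for i in idx]
    varis = [var_int] + [hist[i][1] for i in idx]
    cov = np.full((len(means), len(means)), v0) + np.diag(varis)
    lp = multivariate_normal.logpdf(means, mean=[m0] * len(means), cov=cov)
    for i, c in enumerate(config):
        if not c:
            lp += norm.logpdf(hist[i][0], loc=m0, scale=np.sqrt(v0 + hist[i][1]))
    return lp

def posterior_mean(config):
    """Conjugate posterior mean of the internal control response under one configuration."""
    prec = 1.0 / v0 + 1.0 / var_int + sum(1.0 / hist[i][1] for i, c in enumerate(config) if c)
    num = m0 / v0 + ybar_int / var_int + sum(hist[i][0] / hist[i][1] for i, c in enumerate(config) if c)
    return num / prec

configs = list(itertools.product([0, 1], repeat=len(hist)))
logw = np.array([np.log(prior_model_prob) + log_marginal(c) for c in configs])
w = np.exp(logw - logw.max()); w /= w.sum()

for c, wt in zip(configs, w):
    print(f"exchangeable with internal arm: {c}  posterior weight = {wt:.2f}")
print(f"model-averaged posterior mean of the control response: "
      f"{sum(wt * posterior_mean(c) for c, wt in zip(configs, w)):.3f}")
```
In this toy setup the configuration that borrows only from the source whose mean agrees with the internal arm should receive most of the posterior weight, which is the behaviour the abstract describes as informed data borrowing.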
Affiliation(s)
- Wei Wei
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut 06520
- Ondrej Blaha
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut 06520
- Denise Esserman
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut 06520
- Daniel Zelterman
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut 06520
- Michael Kane
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut 06520
- Rachael Liu
- Takeda Pharmaceuticals, Cambridge, Massachusetts 02139, United States
- Jianchang Lin
- Takeda Pharmaceuticals, Cambridge, Massachusetts 02139, United States
11. Lee SY. Using Bayesian statistics in confirmatory clinical trials in the regulatory setting: a tutorial review. BMC Med Res Methodol 2024; 24:110. PMID: 38714936; PMCID: PMC11077897; DOI: 10.1186/s12874-024-02235-0.
Abstract
Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial with quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist trials are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of type I error rate, multiplicity adjustments, external data borrowing, etc., in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
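A recurring point in this tutorial is that regulators expect the frequentist operating characteristics of a Bayesian analysis strategy, type I error rate and power, to be demonstrated by simulation. The sketch below shows the basic simulation loop for a hypothetical single-arm binary-endpoint design whose informative Beta prior stands in for borrowed historical control information; the trial parameters, the prior, and the success threshold are all invented for illustration and are not taken from the review.
```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(11)

# Hypothetical design (illustrative only)
n = 60                      # current-trial sample size
p0 = 0.20                   # response rate under the null / standard of care
p1 = 0.35                   # targeted alternative
a0, b0 = 8.0, 32.0          # informative Beta prior, e.g. historical data discounted to ~40 "patients"
threshold = 0.975           # declare success if P(p > p0 | data) > threshold

def simulate(p_true, n_sims=20_000):
    """Frequentist probability that the Bayesian success criterion is met."""
    y = rng.binomial(n, p_true, size=n_sims)
    post_prob = 1.0 - beta.cdf(p0, a0 + y, b0 + n - y)   # P(p > p0 | y) for each simulated trial
    return np.mean(post_prob > threshold)

print(f"type I error (p = {p0}): {simulate(p0):.3f}")
print(f"power        (p = {p1}): {simulate(p1):.3f}")
```
Sample size determination then amounts to repeating this loop over candidate values of n (and, if needed, over thresholds or prior weights) until the type I error and power targets are both met.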
Affiliation(s)
- Se Yoon Lee
- Department of Statistics, Texas A &M University, 3143 TAMU, College Station, TX, 77843, USA.
12. Struebing A, McKibbon C, Ruan H, Mackay E, Dennis N, Velummailum R, He P, Tanaka Y, Xiong Y, Springford A, Rosenlund M. Augmenting external control arms using Bayesian borrowing: a case study in first-line non-small cell lung cancer. J Comp Eff Res 2024; 13:e230175. PMID: 38573331; PMCID: PMC11036906; DOI: 10.57264/cer-2023-0175.
Abstract
Aim: This study aimed to improve comparative effectiveness estimates and discuss challenges encountered through the application of Bayesian borrowing (BB) methods to augment an external control arm (ECA) constructed from real-world data (RWD) using historical clinical trial data in first-line non-small-cell lung cancer (NSCLC). Materials & methods: An ECA for a randomized controlled trial (RCT) in first-line NSCLC was constructed using ConcertAI Patient360™ to assess chemotherapy with or without cetuximab in the bevacizumab-inappropriate subpopulation. Cardinality matching was used to match patient characteristics between the treatment arm (cetuximab + chemotherapy) and ECA. Overall survival (OS) was assessed as the primary outcome using Cox proportional hazards (PH). BB was conducted using a static power prior under a Weibull PH parameterization with borrowing weights from 0.0 to 1.0 and augmentation of the ECA from a historical control trial. Results: The constructed ECA yielded a higher OS hazard ratio (HR) of 1.53 (95% CI: 1.21-1.93) than observed in the matched population of the RCT (HR = 0.91; 95% CI: 0.73-1.13). The OS HR decreased through the incorporation of BB (HR = 1.30; 95% CI: 1.08-1.54, borrowing weight = 1.0). BB was applied to augment the RCT control arm via a historical control, which improved the precision of the observed HR estimate (1.03; 95% CI: 0.86-1.22, borrowing weight = 1.0) in comparison to the matched population of the RCT alone. Conclusion: In this study, the RWD ECA was unable to successfully replicate the OS estimates from the matched population of the selected RCT. The inability to replicate could be due to unmeasured confounding and variations in time-periods, follow-up and subsequent therapy. Despite these findings, we demonstrate how BB can improve the precision of comparative effectiveness estimates, potentially serve as a bias assessment tool and mitigate challenges of traditional methods when appropriate external data sources are available.
Affiliation(s)
- Haoyao Ruan
- Cytel Inc., Toronto, Ontario, M5J, 2P1, Canada
- Emma Mackay
- Cytel Inc., Toronto, Ontario, M5J, 2P1, Canada
- Philip He
- Daiichi Sankyo, Inc., Basking Ridge, NJ 07920, USA
- Yoko Tanaka
- Daiichi Sankyo, Inc., Basking Ridge, NJ 07920, USA
- Yan Xiong
- Daiichi Sankyo, Inc., Basking Ridge, NJ 07920, USA
- Mats Rosenlund
- Daiichi Sankyo Europe, Munich, 81379, Germany
- Department of Learning, Informatics, Management & Ethics (LIME), Karolinska Institutet, Stockholm, 171 77, Sweden
13. Warren JL, Wang Q, Ciarleglio MM. A scaled kernel density estimation prior for dynamic borrowing of historical information with application to clinical trial design. Stat Med 2024; 43:1615-1626. PMID: 38345148; PMCID: PMC11483151; DOI: 10.1002/sim.10032.
Abstract
Incorporating historical data into a current data analysis can improve estimation of parameters shared across both datasets and increase the power to detect associations of interest while reducing the time and cost of new data collection. Several methods for prior distribution elicitation have been introduced to allow for the data-driven borrowing of historical information within a Bayesian analysis of the current data. We propose scaled Gaussian kernel density estimation (SGKDE) prior distributions as potentially more flexible alternatives. SGKDE priors directly use posterior samples collected from a historical data analysis to approximate probability density functions, whose variances depend on the degree of similarity between the historical and current datasets, which are used as prior distributions in the current data analysis. We compare the performances of the SGKDE priors with some existing approaches using a simulation study. Data from a recently completed phase III clinical trial of a maternal vaccine for respiratory syncytial virus are used to further explore the properties of SGKDE priors when designing a new clinical trial while incorporating historical data. Overall, both studies suggest that the new approach results in improved parameter estimation and power in the current data analysis compared to the considered existing methods.
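The SGKDE idea is to turn posterior samples from the historical analysis directly into a prior density via kernel density estimation, with the spread of that density inflated when the historical and current data disagree. The sketch below is a simplified, hypothetical version for a binomial response rate using scipy's Gaussian KDE and a grid-based posterior; the particular similarity weight and the way it rescales the samples are illustrative assumptions, not the authors' scaling rule.
```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(13)

# Posterior samples of a response rate from a historical control analysis
# (here simply drawn from a Beta posterior for illustration: 30/100 responders).
hist_samples = rng.beta(30 + 1, 70 + 1, size=5_000)

# Current control data (hypothetical)
y_cur, n_cur = 21, 50

# Illustrative similarity weight in (0, 1]: closer agreement -> less inflation.
w = float(np.clip(1.0 - abs(hist_samples.mean() - y_cur / n_cur) / 0.2, 0.05, 1.0))

# Rescale the historical samples about their mean so the KDE prior variance grows as w shrinks.
scaled = hist_samples.mean() + (hist_samples - hist_samples.mean()) / np.sqrt(w)
prior_kde = gaussian_kde(scaled)

# Grid-based posterior: scaled-KDE prior density times the binomial likelihood.
grid = np.linspace(0.001, 0.999, 2_000)
log_post = (np.log(np.maximum(prior_kde(grid), 1e-300))
            + y_cur * np.log(grid) + (n_cur - y_cur) * np.log(1 - grid))
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"similarity weight w = {w:.2f}")
print(f"posterior mean of the control response rate = {np.sum(grid * post):.3f}")
```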
Affiliation(s)
- Joshua L Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Qi Wang
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Maria M Ciarleglio
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
14. Aceituno D, Razzouk D, Jin H, Pennington M, Gadelha A, Bressan R, Noto C, Crossley N, Prina M. Cost-effectiveness of early intervention in psychosis in low- and middle-income countries: economic evaluation from São Paulo, Brazil. Epidemiol Psychiatr Sci 2024; 33:e21. PMID: 38576239; PMCID: PMC11022262; DOI: 10.1017/s2045796024000222.
Abstract
AIMS The effectiveness and cost-effectiveness of early intervention for psychosis (EIP) services are well established in high-income countries but not in low- and middle-income countries (LMICs). Despite the scarcity of local evidence, several EIP services have been implemented in LMICs. Local evaluations are warranted before adopting speciality models of care in LMICs. We aimed to estimate the cost-effectiveness of implementing EIP services in Brazil. METHODS A model-based economic evaluation of EIP services was conducted from the Brazilian healthcare system perspective. A Markov model was developed using a cohort study conducted in São Paulo. Cost data were retrieved from local sources. The outcome of interest was the incremental cost-effectiveness ratio (ICER) measured as the incremental costs over the incremental quality-adjusted life-years (QALYs). Sensitivity analyses were performed to test the robustness of the results. RESULTS The study included 357 participants (38% female), with a mean (SD) age of 26 (7.38) years. According to the model, implementing EIP services in Brazil would result in a mean incremental cost of 4,478 Brazilian reals (R$) and a mean incremental benefit of 0.29 QALYs. The resulting ICER of R$ 15,495 (US dollar [USD] 7,640 adjusted for purchase power parity [PPP]) per QALY can be considered cost-effective at a willingness-to-pay threshold of 1 Gross domestic product (GDP) per capita (R$ 18,254; USD 9,000 PPP adjusted). The model results were robust to sensitivity analyses. CONCLUSIONS This study supports the economic advantages of implementing EIP services in Brazil. Although cultural adaptations are required, these data suggest EIP services might be cost-effective even in less-resourced countries.
Affiliation(s)
- D. Aceituno
- Department of Psychiatry, Pontificia Universidad Católica de Chile, Santiago, Chile
- King’s Health Economics, Health Service and Population Research Department, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, David Goldberg Centre, London, UK
- Mental Health Service, Complejo Asistencial Dr. Sotero del Rio, Puente Alto, Chile
- D. Razzouk
- Centre of Mental Health Economics, Department of Psychiatry, Universidade Federal de São Paulo, São Paulo, Brazil
- H. Jin
- King’s Health Economics, Health Service and Population Research Department, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, David Goldberg Centre, London, UK
- M. Pennington
- King’s Health Economics, Health Service and Population Research Department, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, David Goldberg Centre, London, UK
- A. Gadelha
- Schizophrenia Program (PROESQ), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- Interdisciplinary Laboratory in Clinical Neuroscience (LiNC), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- R. Bressan
- Schizophrenia Program (PROESQ), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- Interdisciplinary Laboratory in Clinical Neuroscience (LiNC), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- C. Noto
- Schizophrenia Program (PROESQ), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- Interdisciplinary Laboratory in Clinical Neuroscience (LiNC), Department of Psychiatry, Federal University of São Paulo, São Paulo, Brazil
- N. Crossley
- Department of Psychiatry, Pontificia Universidad Católica de Chile, Santiago, Chile
- M. Prina
- Health Service and Population Research Department, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, UK
- Population Health Sciences Institute, Newcastle University, Newcastle, UK
15. Evrenoglou T, Metelli S, Thomas JS, Siafis S, Turner RM, Leucht S, Chaimani A. Sharing information across patient subgroups to draw conclusions from sparse treatment networks. Biom J 2024; 66:e2200316. PMID: 38637311; DOI: 10.1002/bimj.202200316.
Abstract
Network meta-analysis (NMA) usually provides estimates of the relative effects with the highest possible precision. However, sparse networks with few available studies and limited direct evidence can arise, threatening the robustness and reliability of NMA estimates. In these cases, the limited amount of available information can hamper the formal evaluation of the underlying NMA assumptions of transitivity and consistency. In addition, NMA estimates from sparse networks are expected to be imprecise and possibly biased as they rely on large-sample approximations that are invalid in the absence of sufficient data. We propose a Bayesian framework that allows sharing of information between two networks that pertain to different population subgroups. Specifically, we use the results from a subgroup with a lot of direct evidence (a dense network) to construct informative priors for the relative effects in the target subgroup (a sparse network). This is a two-stage approach: at the first stage, we extrapolate the results of the dense network to those expected from the sparse network. This takes place by using a modified hierarchical NMA model where we add a location parameter that shifts the distribution of the relative effects to make them applicable to the target population. At the second stage, these extrapolated results are used as prior information for the sparse network. We illustrate our approach through a motivating example of psychiatric patients. Our approach results in more precise and robust estimates of the relative effects and can adequately inform clinical practice in the presence of sparse networks.
Affiliation(s)
- Theodoros Evrenoglou
- Center of Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité, INSERM, Paris, France
- Silvia Metelli
- Center of Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité, INSERM, Paris, France
- Johannes-Schneider Thomas
- Department of Psychiatry and Psychotherapy, School of Medicine, Technical University of Munich, Munchen, Germany
- Spyridon Siafis
- Department of Psychiatry and Psychotherapy, School of Medicine, Technical University of Munich, Munchen, Germany
- Stefan Leucht
- Department of Psychiatry and Psychotherapy, School of Medicine, Technical University of Munich, Munchen, Germany
- Anna Chaimani
- Center of Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité, INSERM, Paris, France
16. McIlroy G, Lax S, Gaskell C, Jackson A, Rhodes M, Seale T, Fox S, Hopkins L, Okosun J, Barrington SF, Ringshausen I, Ramsay AG, Calaminici M, Linton K, Bishton M. Investigator choice of standard therapy versus sequential novel therapy arms in the treatment of relapsed follicular lymphoma (REFRACT): study protocol for a multi-centre, open-label, randomised, phase II platform trial. BMC Cancer 2024; 24:370. PMID: 38528445; PMCID: PMC10962099; DOI: 10.1186/s12885-024-12112-0.
Abstract
BACKGROUND Relapsed or refractory follicular lymphoma (rrFL) is an incurable disease associated with shorter remissions and survival after each line of standard therapy. Many promising novel, chemotherapy-free therapies are in development, but few are licensed as their role in current treatment pathways is poorly defined. METHODS The REFRACT trial is an investigator-initiated, UK National Cancer Research Institute, open-label, multi-centre, randomised phase II platform trial aimed at accelerating clinical development of novel therapies by addressing evidence gaps. The first of the three sequential novel therapy arms is epcoritamab plus lenalidomide, to be compared with investigator choice standard therapy (ICT). Patients aged 18 years or older with biopsy proven relapsed or refractory CD20 positive, grade 1-3a follicular lymphoma and assessable disease by PET-CT are eligible. The primary outcome is complete metabolic response by PET-CT at 24 weeks using the Deauville 5-point scale and Lugano 2014 criteria. Secondary outcomes include overall metabolic response, progression-free survival, overall survival, duration of response, and quality of life assessed by EQ-5D-5 L and FACT-Lym. The trial employs an innovative Bayesian design with a target sample size of 284 patients: 95 in the ICT arm and 189 in the novel therapy arms. DISCUSSION Whilst there are many promising novel drugs in early clinical development for rrFL, understanding the relative efficacy and safety of these agents, and their place in modern treatment pathways, is limited by a lack of randomised trials and dearth of published outcomes for standard regimens to act as historic controls. Therefore, the aim of REFRACT is to provide an efficient platform to evaluate novel agents against standard therapies for rrFL. The adaptive Bayesian power prior methodology design will minimise patient numbers and accelerate trial delivery. TRIAL REGISTRATION ClinicalTrials.gov: NCT05848765; 08-May-2023. EUDRACT 2022-000677-75; 10-Feb-2022.
Affiliation(s)
- Graham McIlroy
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK.
- Siân Lax
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK
- Charlotte Gaskell
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK
- Aimee Jackson
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK
- Tania Seale
- Division of Cancer Sciences, University of Manchester, Manchester, UK
- Sonia Fox
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK
- Lousie Hopkins
- Cancer Research UK Clinical Trials Unit (CRCTU), University of Birmingham, Birmingham, UK
- Jessica Okosun
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Sally F Barrington
- King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, King's Health Partners, London, UK
- Alan G Ramsay
- School of Cancer and Pharmaceutical Sciences, Faculty of Life Sciences & Medicine, King's College London, London, UK
- Maria Calaminici
- Department of Cellular Pathology Barts Health and Centre for Haemato-Oncology, Barts Cancer Institute, Queen Mary University of London, London, UK
- Kim Linton
- Division of Cancer Sciences, University of Manchester, Manchester, UK
- Department of Medical Oncology, The Christie NHS Foundation Trust, Manchester, UK
- Mark Bishton
- Translational Medical Sciences, University of Nottingham, Nottingham, UK
- Department of Haematology, Nottingham University Hospitals NHS Trust, Nottingham, UK
17. Wang P, Chow SC. The use of real-world data for clinical investigation of effectiveness in drug development. J Biopharm Stat 2024:1-24. PMID: 38519266; DOI: 10.1080/10543406.2024.2330215.
Abstract
With the growing interest in leveraging real-world data (RWD) to support effectiveness evaluations for new indications, new target populations, and post-market performance, the United States Food and Drug Administration has published several guidance documents on RWD sources and real-world studies (RWS) to assist sponsors in generating credible real-world evidence (RWE). Meanwhile, the randomized controlled trial (RCT) remains the gold standard in drug evaluation. Along this line, we propose a hybrid two-stage adaptive design to evaluate effectiveness based on evidence from both RCT and RWS. At the first stage, a typical non-inferiority test is conducted using RCT data to test for not-ineffectiveness. Once not-ineffectiveness is established, the study proceeds to the second stage to conduct an RWS and test for effectiveness using integrated information from RCT and RWD. The composite likelihood approach is implemented as a down-weighing strategy to account for the impact of high variability in RWS population. An optimal sample size determination procedure for RCT and RWS is introduced, aiming to achieve the minimal expected sample size. Through extensive numerical study, the proposed design demonstrates the ability to control type I error inflation in most cases and consistently maintain statistical power above the desired level. In general, this RCT/RWS hybrid two-stage adaptive design is beneficial for effectiveness evaluations in drug development, especially for oncology and rare diseases.
Affiliation(s)
- Peijin Wang
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Shein-Chung Chow
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
18. Tong L, Li C, Xia J, Wang L. A Bayesian approach based on discounting factor for consistency assessment in multi-regional clinical trial. J Biopharm Stat 2024:1-17. PMID: 38506674; DOI: 10.1080/10543406.2024.2328591.
Abstract
Multi-regional clinical trials (MRCTs) have become an increasing trend because they support simultaneous global drug development. After an MRCT, a consistency assessment needs to be conducted to evaluate regional efficacy. The weighted Z-test approach is a common consistency assessment approach in which the weighting parameter W does not have a clear practical interpretation; the discounting factor approach improves on the weighted Z-test approach by converting the estimation of W in the original weighted Z-test approach into the estimation of a discounting factor D. However, the discounting factor approach is a frequentist approach in which D is fixed at a certain value; the variation of D is not considered, which may lead to unreasonable results. In this paper, we propose a Bayesian approach based on D to evaluate the treatment effect for the target region in an MRCT, in which the variation of D is considered. Specifically, we first treat D as random rather than fixed at a certain value and specify a beta distribution for it. Based on the simulation results, we further adjust the Bayesian approach. The application of the proposed approach is illustrated by Markov chain Monte Carlo simulation.
Affiliation(s)
- Liang Tong
- Department of Health Statistics, Faculty of Preventive Medicine, Air Force Medical University, Xi'an, Shaanxi, China
- Center for Disease Control and Prevention of Central Theater Command, Beijing, China
- Chen Li
- Department of Health Statistics, Faculty of Preventive Medicine, Air Force Medical University, Xi'an, Shaanxi, China
- Ministry of Education Key Lab of Hazard Assessment and Control in Special Operational Environment, Xi'an, Shaanxi, China
- Jielai Xia
- Department of Health Statistics, Faculty of Preventive Medicine, Air Force Medical University, Xi'an, Shaanxi, China
- Ministry of Education Key Lab of Hazard Assessment and Control in Special Operational Environment, Xi'an, Shaanxi, China
- Ling Wang
- Department of Health Statistics, Faculty of Preventive Medicine, Air Force Medical University, Xi'an, Shaanxi, China
- Ministry of Education Key Lab of Hazard Assessment and Control in Special Operational Environment, Xi'an, Shaanxi, China
19. Burman CF, Hermansson E, Bock D, Franzén S, Svensson D. Digital twins and Bayesian dynamic borrowing: Two recent approaches for incorporating historical control data. Pharm Stat 2024. PMID: 38439136; DOI: 10.1002/pst.2376.
Abstract
Recent years have seen an increasing interest in incorporating external control data for designing and evaluating randomized clinical trials (RCT). This may decrease costs and shorten inclusion times by reducing sample sizes. For small populations, with limited recruitment, this can be especially important. Bayesian dynamic borrowing (BDB) has been a popular choice as it claims to protect against potential prior data conflict. The digital twins (DT) approach has recently been proposed as another method to utilize historical data. DT, also known as PROCOVA™, is based on constructing a prognostic score from historical control data, typically using machine learning. This score is included in a pre-specified ANCOVA as the primary analysis of the RCT. The promise of this idea is a power increase while guaranteeing strong type 1 error control. In this paper, we apply analytic derivations and simulations to analyze and discuss examples of these two approaches. We conclude that BDB and DT, although similar in scope, have fundamental differences which need to be considered in the specific application. The inflation of the type 1 error is a serious issue for BDB, while more evidence is needed of the tangible value of DT for real RCTs.
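The contrast drawn here is between dynamically down-weighting historical controls (BDB) and using them only to train a prognostic model whose prediction enters the trial analysis as a covariate (the DT/PROCOVA-style ANCOVA). The sketch below illustrates the second route with a hypothetical linear prognostic model and an ordinary-least-squares ANCOVA; it is a schematic of the general idea under invented data, not the authors' analysis or the PROCOVA™ procedure itself.
```python
import numpy as np

rng = np.random.default_rng(19)

# --- Hypothetical historical control data used only to train a prognostic model ---
n_hist, n_trial = 500, 200
X_hist = rng.normal(size=(n_hist, 3))                     # baseline covariates
y_hist = 1.0 + X_hist @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 1, n_hist)

# Fit the prognostic model (ordinary least squares with an intercept).
A_hist = np.column_stack([np.ones(n_hist), X_hist])
coef, *_ = np.linalg.lstsq(A_hist, y_hist, rcond=None)

# --- Hypothetical randomized trial -------------------------------------------------
X = rng.normal(size=(n_trial, 3))
trt = rng.integers(0, 2, n_trial)                         # 1:1 randomization
y = 1.0 + X @ np.array([0.8, -0.5, 0.3]) + 0.4 * trt + rng.normal(0, 1, n_trial)

# Prognostic score: the historical model's prediction for each trial participant.
prog = np.column_stack([np.ones(n_trial), X]) @ coef

def ols_effect(design, outcome):
    """Return the treatment-effect estimate and its standard error from OLS."""
    est, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    resid = outcome - design @ est
    dof = design.shape[0] - design.shape[1]
    cov = resid @ resid / dof * np.linalg.inv(design.T @ design)
    return est[1], np.sqrt(cov[1, 1])          # column 1 is the treatment indicator

# Unadjusted analysis versus the prognostic-score ANCOVA.
for label, design in [("unadjusted", np.column_stack([np.ones(n_trial), trt])),
                      ("ANCOVA with prognostic score", np.column_stack([np.ones(n_trial), trt, prog]))]:
    est, se = ols_effect(design, y)
    print(f"{label:30s} effect = {est:.3f}  SE = {se:.3f}")
```
Because the randomized comparison is preserved and the historical data only enter through the covariate, the usual OLS type 1 error guarantee carries over, which is the "strong control" property contrasted with BDB above.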
Affiliation(s)
- Carl-Fredrik Burman
- Early Biometrics & Statistical Innovation, Data Science & Artificial Intelligence, R&D, AstraZeneca, Gothenburg, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Erik Hermansson
- Early Biometrics & Statistical Innovation, Data Science & Artificial Intelligence, R&D, AstraZeneca, Gothenburg, Sweden
- David Bock
- Early Biometrics & Statistical Innovation, Data Science & Artificial Intelligence, R&D, AstraZeneca, Gothenburg, Sweden
- Stefan Franzén
- BMP Evidence Statistics, BioPharmaceuticals Medical, AstraZeneca, Gothenburg, Sweden
- David Svensson
- Early Biometrics & Statistical Innovation, Data Science & Artificial Intelligence, R&D, AstraZeneca, Gothenburg, Sweden
20. Warren JL, Sundaram M, Pitzer VE, Omer SB, Weinberger DM. Incorporating Efficacy Data from Initial Trials Into Subsequent Evaluations: Application to Vaccines Against Respiratory Syncytial Virus. Epidemiology 2024; 35:130-136. PMID: 37963353; PMCID: PMC10842163; DOI: 10.1097/ede.0000000000001690.
Abstract
BACKGROUND When a randomized controlled trial fails to demonstrate statistically significant efficacy against the primary endpoint, a potentially costly new trial would need to be conducted to receive licensure. Incorporating data from previous trials might allow for more efficient follow-up trials to demonstrate efficacy, speeding the availability of effective vaccines. METHODS Based on the outcomes from a failed trial of a maternal vaccine against respiratory syncytial virus (RSV), we simulated data for a new Bayesian group-sequential trial. We analyzed the data either ignoring data from the previous trial (i.e., weakly informative prior distributions) or using prior distributions incorporating the historical data into the analysis. We evaluated scenarios where efficacy in the new trial was the same, greater than, or less than that in the original trial. For each scenario, we evaluated the statistical power and type I error rate for estimating the vaccine effect following interim analyses. RESULTS When we used a stringent threshold to control the type I error rate, analyses incorporating historical data had a small advantage over trials that did not. If control of type I error is less important (e.g., in a postlicensure evaluation), the incorporation of historical data can provide a substantial boost in efficiency. CONCLUSIONS Due to the need to control the type I error rate in trials used to license a vaccine, incorporating historical data provides little additional benefit in terms of stopping the trial early. However, these statistical approaches could be promising in evaluations that use real-world evidence following licensure.
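As a rough illustration of the borrowing contrast described above, the following sketch (Python; a simple conjugate beta-binomial model with power-prior-style downweighting and hypothetical counts, not the authors' group-sequential model or the RSV trial data) compares a weakly informative analysis with one that borrows from a previous trial.

```python
# Minimal sketch: analysing a new two-arm trial with either a weakly informative
# prior or a prior informed by historical (previous-trial) event counts.
# All counts below are hypothetical illustrations, not data from the RSV trials.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

hist = {"vaccine": (30, 2000), "control": (50, 2000)}  # hypothetical historical trial
new = {"vaccine": (12, 800), "control": (25, 800)}     # hypothetical new trial (interim)

def posterior_samples(arm, borrow, weight=0.5, n=100_000):
    """Beta posterior for the event probability in one arm.
    borrow=False -> Beta(1, 1) prior; borrow=True -> historical counts
    downweighted by `weight` (a simple power-prior-style discount)."""
    e, m = new[arm]
    a0, b0 = 1.0, 1.0
    if borrow:
        eh, mh = hist[arm]
        a0 += weight * eh
        b0 += weight * (mh - eh)
    return beta.rvs(a0 + e, b0 + (m - e), size=n, random_state=rng)

for borrow in (False, True):
    pv = posterior_samples("vaccine", borrow)
    pc = posterior_samples("control", borrow)
    ve = 1.0 - pv / pc  # vaccine efficacy as 1 - risk ratio
    print(f"borrow={borrow}: P(VE > 0) = {np.mean(ve > 0):.3f}")
```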
Collapse
Affiliation(s)
- Joshua L. Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
| | - Maria Sundaram
- Marshfield Clinic Research Institute, Center for Clinical Epidemiology & Population Health, Marshfield, WI, USA
| | - Virginia E. Pitzer
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
| | - Saad B. Omer
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
- Yale Institute of Global Health, New Haven, CT, United States
- Yale School of Medicine, New Haven, CT, United States
| | - Daniel M. Weinberger
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
| |
Collapse
|
21
|
Zhu JQ, Sundh J, Spicer J, Chater N, Sanborn AN. The autocorrelated Bayesian sampler: A rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times. Psychol Rev 2024; 131:456-493. [PMID: 37289507 PMCID: PMC11115360 DOI: 10.1037/rev0000427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects of probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the "Bayesian brain" operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Collapse
Affiliation(s)
| | | | - Jake Spicer
- Department of Psychology, University of Warwick
| | - Nick Chater
- Warwick Business School, University of Warwick
| | | |
Collapse
|
22
|
Mariani F, De Santis F, Gubbiotti S. A dynamic power prior approach to non-inferiority trials for normal means. Pharm Stat 2024; 23:242-256. [PMID: 37964403 DOI: 10.1002/pst.2349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 07/31/2023] [Accepted: 10/23/2023] [Indexed: 11/16/2023]
Abstract
Non-inferiority trials compare new experimental therapies to standard ones (active control). In these experiments, historical information on the control treatment is often available. This makes Bayesian methodology appealing since it allows a natural way to exploit information from past studies. In the present paper, we suggest the use of previous data for constructing the prior distribution of the control effect parameter. Specifically, we consider a dynamic power prior that allows the level of borrowing to be discounted in the presence of heterogeneity between past and current control data. The discount parameter of the prior is based on the Hellinger distance between the posterior distributions of the control parameter based, respectively, on historical and current data. We develop the methodology for comparing normal means and we handle the case of unknown variance using MCMC. We also provide a simulation study to analyze the proposed test in terms of frequentist size and power, as usually requested by regulatory agencies. Finally, we investigate comparisons with some existing methods and we illustrate an application to a real case study.
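The Hellinger-based discounting can be sketched in closed form for the normal case. In the snippet below (Python), the mapping from distance to discount (delta = 1 - H) and all numbers are illustrative assumptions rather than the paper's exact calibration.

```python
# Minimal sketch: discounting historical borrowing by the Hellinger distance
# between normal posteriors of the control mean from historical vs current data.
import numpy as np

def hellinger_normal(m1, s1, m2, s2):
    """Closed-form Hellinger distance between N(m1, s1^2) and N(m2, s2^2)."""
    bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(
        -((m1 - m2) ** 2) / (4 * (s1**2 + s2**2))
    )
    return np.sqrt(1 - bc)

# Hypothetical posterior summaries for the control mean (known-variance case):
hist_mean, hist_se = 10.0, 0.5   # from historical data
curr_mean, curr_se = 10.8, 0.6   # from current control data

H = hellinger_normal(hist_mean, hist_se, curr_mean, curr_se)
delta = 1 - H                    # illustrative discount: less borrowing under conflict
print(f"Hellinger distance = {H:.3f}, discount delta = {delta:.3f}")
```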
Collapse
Affiliation(s)
- Francesco Mariani
- Dipartimento di Scienze Statistiche, Sapienza University of Rome, Rome, Italy
| | - Fulvio De Santis
- Dipartimento di Scienze Statistiche, Sapienza University of Rome, Rome, Italy
| | - Stefania Gubbiotti
- Dipartimento di Scienze Statistiche, Sapienza University of Rome, Rome, Italy
| |
Collapse
|
23
|
Pepić A, Stark M, Friede T, Kopp-Schneider A, Calderazzo S, Reichert M, Wolf M, Wirth U, Schopf S, Zapf A. A diagnostic phase III/IV seamless design to investigate the diagnostic accuracy and clinical effectiveness using the example of HEDOS and HEDOS II. Stat Methods Med Res 2024; 33:433-448. [PMID: 38327081 PMCID: PMC10981198 DOI: 10.1177/09622802241227951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/09/2024]
Abstract
The development process of medical devices can be streamlined by combining different study phases. Here, for a diagnostic medical device, we present the combination of confirmation of diagnostic accuracy (phase III) and evaluation of clinical effectiveness regarding patient-relevant endpoints (phase IV) using a seamless design. This approach is used in the Thyroid HEmorrhage DetectOr Study (HEDOS & HEDOS II) investigating a post-operative hemorrhage detector named ISAR-M THYRO® in patients after thyroid surgery. Data from the phase III trial are reused as external controls in the control group of the phase IV trial. An unblinded interim analysis is planned between the two study stages which includes a recalculation of the sample size for the phase IV part after completion of the first stage of the seamless design. The study concept presented here is the first seamless design proposed in the field of diagnostic studies. Hence, the aim of this work is to emphasize the statistical methodology as well as feasibility of the proposed design in relation to the planning and implementation of the seamless design. Seamless designs can accelerate the overall trial duration and increase its efficiency in terms of sample size and recruitment. However, careful planning addressing numerous methodological and procedural challenges is necessary for successful implementation as well as agreement with regulatory bodies.
Collapse
Affiliation(s)
- Amra Pepić
- Institute of Medical Biometry and Epidemiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
| | - Maria Stark
- Institute of Medical Biometry and Epidemiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
| | - Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
| | | | - Silvia Calderazzo
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Michael Wolf
- CRI—The Clinical Research Institute, Munich, Germany
| | - Ulrich Wirth
- Clinic for General, Visceral and Transplant Surgery, Hospital of the Ludwig-Maximilians-University, Munich, Germany
| | - Stefan Schopf
- RoMed Klinik Bad Aibling, Academic University Hospital of the Technical University of Munich, Bad Aibling, Germany
| | - Antonia Zapf
- Institute of Medical Biometry and Epidemiology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
| |
Collapse
|
24
|
Gwon Y, Ji Y, Abadi AM, Rau A, Berman JD, Leeper RD, Rennie J, Nagaya R, Bell JE. The effect of heterogeneous severe drought on all-cause and cardiovascular mortality in the Northern Rockies and Plains of the United States. THE SCIENCE OF THE TOTAL ENVIRONMENT 2024; 912:169033. [PMID: 38065492 DOI: 10.1016/j.scitotenv.2023.169033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Revised: 11/14/2023] [Accepted: 11/29/2023] [Indexed: 01/18/2024]
Abstract
Drought is a distinct and complicated climate hazard that regularly leads to severe economic impacts. Changes in the frequency and occurrence of drought due to anthropogenic climate change can lead to new and unanticipated outcomes. To better prepare for health outcomes, more research is needed to develop methodologies to understand potential consequences. This study suggests a new methodology for assessing the impact of monthly severe drought exposure on mortality in the Northern Rockies and Plains of the United States from 2000 to 2018. A two-stage model with the power prior approach was applied to integrate heterogeneous severe drought patterns and estimate overall risk ratios of all-cause and cardiovascular mortality related to multiple drought indices (the US Drought Monitor, 6- and 12-month Standardized Precipitation-Evapotranspiration Index, and 6- and 12-month Evaporative Demand Drought Index). Under severe drought, the risk ratios of all-cause mortality are 1.050 (95% Cr: 1.031 to 1.071, USDM), 1.041 (95% Cr: 1.022 to 1.060, 6-SPEI), 1.009 (95% Cr: 0.989 to 1.031, 12-SPEI), 1.045 (95% Cr: 1.022 to 1.067, 6-EDDI), and 1.035 (95% Cr: 1.009 to 1.062, 12-EDDI); those of cardiovascular mortality are 1.057 (95% Cr: 1.023 to 1.091, USDM), 1.028 (95% Cr: 0.998 to 1.059, 6-SPEI), 1.005 (95% Cr: 0.973 to 1.040, 12-SPEI), 1.042 (95% Cr: 1.005 to 1.080, 6-EDDI), and 1.004 (95% Cr: 0.959 to 1.049, 12-EDDI). Our results showed that (i) a model that properly accounted for heterogeneous exposure patterns yielded greater risk ratios when these were statistically significant; (ii) mid-term (6-month) severe drought had higher risk ratios than longer-term (12-month) drought; and (iii) different severe droughts affect populations in different ways. These results expand existing knowledge of the relationship between drought and increased mortality in the United States. The findings from this study highlight the need for communities and policymakers to establish effective drought-prevention initiatives in this region.
Collapse
Affiliation(s)
- Yeongjin Gwon
- Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha 68198, NE, USA; Daugherty Water for Food Global Institute, University of Nebraska, Lincoln 68588, NE, USA.
| | - Yuanyuan Ji
- Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha 68198, NE, USA
| | - Azar M Abadi
- Environmental Health Sciences, School of Public Health, University of Alabama, Birmingham 35233, AL, USA
| | - Austin Rau
- Division of Environmental Health Sciences, School of Public Health, University of Minnesota, Minneapolis 55455, MN, USA
| | - Jesse D Berman
- Division of Environmental Health Sciences, School of Public Health, University of Minnesota, Minneapolis 55455, MN, USA
| | - Ronald D Leeper
- North Carolina Institute for Climate Studies, North Carolina State University, Raleigh 27695, NC, USA
| | - Jared Rennie
- National Centers for Environmental Information, National Oceanic and Atmospheric Administration, Asheville, 28801, NC, USA
| | - Richard Nagaya
- Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha 68198, NE, USA
| | - Jesse E Bell
- Department of Environmental, Agriculture, Occupational and Health, College of Public Health, University of Nebraska Medical Center, Omaha 68198, NE, USA; Daugherty Water for Food Global Institute, University of Nebraska, Lincoln 68588, NE, USA; School of Natural Resources, University of Nebraska, Lincoln 68588, NE, USA
| |
Collapse
|
25
|
Guo B, Wang L, Yuan Y. Treatment Comparisons in Adaptive Platform Trials Adjusting for Temporal Drift. Stat Biopharm Res 2024; 16:361-370. [PMID: 39184873 PMCID: PMC11343491 DOI: 10.1080/19466315.2023.2292238] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 09/05/2023] [Accepted: 12/01/2023] [Indexed: 08/27/2024]
Abstract
An adaptive platform trial (APT) is a multi-arm trial in the context of a single disease where treatment arms are allowed to enter or leave the trial based on some decision rule. If a treatment enters the trial later than the control arm, there exist non-concurrent controls who were not randomized between the two arms under comparison. As APTs typically take long periods of time to conduct, temporal drift may occur, which requires the treatment comparisons to be adjusted for this temporal change. Under the causal inference framework, we propose two approaches for treatment comparisons in APTs that account for temporal drift, both based on propensity score weighting. In particular, to address unmeasured confounders, one approach is doubly robust in the sense that it remains valid so long as either the propensity score model is correctly specified or the time effect model is correctly specified. A simulation study shows that our proposed approaches have desirable operating characteristics, with well-controlled type I error rates and high power with or without unmeasured confounders.
Collapse
Affiliation(s)
- Beibei Guo
- Department of Experimental Statistics, Louisiana State University, Baton Rouge, LA 70803, USA
| | - Li Wang
- Department of Statistics, AbbVie Inc., North Chicago, Illinois, U.S.A
| | - Ying Yuan
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| |
Collapse
|
26
|
Chau J, Altan S, Burggraeve A, Coppenolle H, Kifle YW, Prokopcova H, Van Daele T, Sterckx H. A Bayesian Approach to Kinetic Modeling of Accelerated Stability Studies and Shelf Life Determination. AAPS PharmSciTech 2023; 24:250. [PMID: 38036798 DOI: 10.1208/s12249-023-02695-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Accepted: 11/02/2023] [Indexed: 12/02/2023] Open
Abstract
Kinetic modeling of accelerated stability data serves an important purpose in the development of pharmaceutical products, providing support for shelf life claims and expediting the path to clinical implementation. In this context, a Bayesian kinetic modeling framework is considered, accommodating different types of nonlinear kinetics with temperature and humidity dependent rates of degradation and accounting for the humidity conditions within the packaging to predict the shelf life. In comparison to kinetic modeling based on nonlinear least-squares regression, the Bayesian approach allows for interpretable posterior inference, flexible error modeling and the opportunity to include prior information based on historical data or expert knowledge. While both frameworks perform comparably for high-quality data from well-designed studies, the Bayesian approach provides additional robustness when the data are sparse or of limited quality. This is illustrated by modeling accelerated stability data from two solid dosage forms and is further examined by means of artificial data subsets and simulated data.
Collapse
Affiliation(s)
| | - Stan Altan
- Statistics and Decision Sciences, Janssen Research, Raritan, New Jersey, USA
| | - Anneleen Burggraeve
- Chemical and Pharmaceutical Development & Supply, Janssen Research, Beerse, Belgium
| | - Hans Coppenolle
- Statistics and Decision Sciences, Janssen Research, Beerse, Belgium
| | | | - Hana Prokopcova
- Chemical and Pharmaceutical Development & Supply, Janssen Research, Beerse, Belgium
| | - Timothy Van Daele
- Chemical and Pharmaceutical Development & Supply, Janssen Research, Beerse, Belgium
| | - Hans Sterckx
- Chemical and Pharmaceutical Development & Supply, Janssen Research, Turnhoutseweg 30, 2340, Beerse, Belgium.
| |
Collapse
|
27
|
Travis J, Rothmann M, Thomson A. Perspectives on informative Bayesian methods in pediatrics. J Biopharm Stat 2023; 33:830-843. [PMID: 36710384 DOI: 10.1080/10543406.2023.2170405] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 01/15/2023] [Indexed: 01/31/2023]
Abstract
Bayesian methods have been proposed as a natural fit for pediatric extrapolation, as they allow the incorporation of relevant external data to reduce the required sample size and hence trial burden for the pediatric patient population. In this paper we will discuss our experience and perspectives with these methods in pediatric trials. We will present some of the background and thinking underlying pediatric extrapolation and discuss the use of Bayesian methods within this context. We will present two recent case examples illustrating the value of a Bayesian approach in this setting and present perspectives on some of the issues that we have encountered in these and other cases.
Collapse
Affiliation(s)
- James Travis
- Office of Biostatistics, Office of Translational Science, Center for the Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Mark Rothmann
- Office of Biostatistics, Office of Translational Science, Center for the Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Andrew Thomson
- Data Analytics and Methods Taskforce, European Medicines Agency, Amsterdam, NL
| |
Collapse
|
28
|
Cooner F, Ye J, Reaman G. Clinical trial considerations for pediatric cancer drug development. J Biopharm Stat 2023; 33:859-874. [PMID: 36749066 DOI: 10.1080/10543406.2023.2172424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 01/20/2023] [Indexed: 02/08/2023]
Abstract
Oncology has been one of the most active therapeutic areas in medicinal products development. Despite this fact, few drugs have been approved for use in pediatric cancer patients when compared to the number approved for adults with cancer. This disparity could be attributed to the fact that many oncology drugs have had orphan drug designation and were exempt from Pediatric Research Equity Act (PREA) requirements. On August 18, 2017, the RACE for Children Act, i.e. Research to Accelerate Cures and Equity Act, was signed into law as Title V of the 2017 FDA Reauthorization Act (FDARA) to amend the PREA. Pediatric investigation is now required if the drug or biological product is intended for the treatment of an adult cancer and directed at a molecular target that FDA determines to be "substantially relevant to the growth or progression of a pediatric cancer." This paper discusses the specific considerations in clinical trial designs and statistical methodologies to be implemented in oncology pediatric clinical programs.
Collapse
Affiliation(s)
- Freda Cooner
- Global Biostatistics, Amgen Inc, Thousand Oaks, CA, USA
| | - Jingjing Ye
- Global Statistics and Data Sciences (GSDS), BeiGene USA, Fulton, MD, USA
| | - Gregory Reaman
- Oncology Center of Excellence, Office of the Commissioner, U.S. FDA, Silver Spring, MD, USA
| |
Collapse
|
29
|
Holzhauer B, Adewuyi ET. "Super-covariates": Using predicted control group outcome as a covariate in randomized clinical trials. Pharm Stat 2023; 22:1062-1075. [PMID: 37553959 DOI: 10.1002/pst.2329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 07/01/2023] [Accepted: 07/14/2023] [Indexed: 08/10/2023]
Abstract
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates during trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on regression models that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also include a baseline (pre-treatment) assessment of the primary outcome. We suggest including a "super-covariate", that is, a patient-specific prediction of the control group outcome, as a further covariate (but not as an offset). We train a prognostic model or ensembles of such models on the individual patient (or aggregate) data of other studies in similar patients, but not the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation associated with Bayesian approaches, but in contrast to them has a greater benefit for larger sample sizes. It is important for prognostic models behind "super-covariates" to generalize well across different patient populations in order to reduce unexplained variability similarly whether or not the trial(s) used to develop the model are identical to the new trial. In an example in neovascular age-related macular degeneration we saw efficiency gains from the use of a "super-covariate".
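The variance-reduction mechanism behind the "super-covariate" can be shown in a few lines. The sketch below (Python; simulated data, with rho the assumed correlation between the prediction and the outcome) compares the standard error of the treatment effect with and without the predicted-outcome covariate; it illustrates the general idea, not the paper's example.

```python
# Minimal sketch: variance reduction from adjusting for a "super-covariate"
# (a prediction of the control-group outcome) in a randomized comparison.
# Purely illustrative assumptions: continuous endpoint, rho = correlation
# between the prediction and the outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, rho, effect = 400, 0.6, 0.3

pred = rng.normal(size=n)                       # super-covariate
noise = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
y = effect * treat + rho * pred + np.sqrt(1 - rho**2) * noise

unadj = sm.OLS(y, sm.add_constant(treat)).fit()
adj = sm.OLS(y, sm.add_constant(np.column_stack([treat, pred]))).fit()

print("SE without super-covariate:", round(unadj.bse[1], 4))
print("SE with super-covariate:   ", round(adj.bse[1], 4))
print("theoretical ratio sqrt(1 - rho^2):", round(np.sqrt(1 - rho**2), 4))
```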
Collapse
|
30
|
Cooney P, White A. Extending Beyond Bagust and Beale: Fully Parametric Piecewise Exponential Models for Extrapolation of Survival Outcomes in Health Technology Assessment. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2023; 26:1510-1517. [PMID: 37353057 DOI: 10.1016/j.jval.2023.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 05/31/2023] [Accepted: 06/06/2023] [Indexed: 06/25/2023]
Abstract
OBJECTIVES When extrapolating time-to-event data, the Bagust and Beale (B&B) approach uses the Kaplan-Meier survival function until a manually chosen time point, after which a constant hazard is assumed. This study demonstrates an objective statistical approach to estimate this time point. METHODS We estimate piecewise exponential models (PEMs), whereby the hazard function is partitioned into segments each with constant hazards. The boundaries of these segments are known as change points. Our approach determines the location and number of change points in PEMs, from which the hazard in the final segment is used to model long-term survival. We reviewed previous applications of the B&B approach in National Institute for Health and Care Excellence Technology Appraisals (TAs) completed between July 2011 and June 2017. The time points after which constant hazards were assumed were compared between the PEM and B&B approaches. When further survival data were published following the original TA, we compared these updated estimates to predicted survival from the PEM and other parametric models adjusted for general population mortality. RESULTS Six of the 59 TAs in this review considered the B&B approach. There was general agreement between the locations of the time points identified through the PEM and B&B approaches. In 2 of the identified TAs, the best-fitting model to the data was a no-change-point model. Of the 3 TAs for which further survival data became available, the PEM provided the closest prediction of survival outcomes in 2 TAs. CONCLUSIONS PEMs are useful for survival extrapolation when a long-term constant hazard trend for the disease is clinically plausible.
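For intuition, a one-change-point piecewise exponential fit by profile likelihood can be written compactly. The sketch below (Python; simulated data and a single change point, whereas the paper also selects the number of change points) estimates the change point and the long-term hazard that would be used for extrapolation.

```python
# Minimal sketch: a one-change-point piecewise exponential model, with the
# change point chosen by profile likelihood; the hazard in the final segment
# is the long-term hazard. Simulated data for illustration.
import numpy as np

rng = np.random.default_rng(3)

# Simulate times with hazard 0.25 before t=2 and 0.08 afterwards, censor at t=6.
n = 500
u = rng.uniform(size=n)
t = np.where(u > np.exp(-0.25 * 2),
             -np.log(u) / 0.25,                   # event before the change point
             2 + (-np.log(u) - 0.25 * 2) / 0.08)  # event after the change point
event = (t <= 6).astype(int)
time = np.minimum(t, 6.0)

def profile_loglik(tau):
    """Profile log-likelihood of a 2-piece exponential model with change point tau."""
    expo1 = np.minimum(time, tau).sum()           # person-time before tau
    expo2 = np.maximum(time - tau, 0).sum()       # person-time after tau
    d1 = ((time <= tau) & (event == 1)).sum()
    d2 = ((time > tau) & (event == 1)).sum()
    ll = 0.0
    for d, e in ((d1, expo1), (d2, expo2)):
        if d > 0:
            ll += d * np.log(d / e) - d           # segment MLE plugged back in
    return ll

grid = np.linspace(0.5, 5.5, 101)
tau_hat = grid[np.argmax([profile_loglik(tau) for tau in grid])]
d2 = ((time > tau_hat) & (event == 1)).sum()
e2 = np.maximum(time - tau_hat, 0).sum()
print(f"estimated change point ~ {tau_hat:.2f}, long-term hazard ~ {d2 / e2:.3f}")
```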
Collapse
Affiliation(s)
- Philip Cooney
- School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland.
| | - Arthur White
- School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
| |
Collapse
|
31
|
Egbon OA, Nascimento D, Louzada F. Prior elicitation for Gaussian spatial process: An application to TMS brain mapping. Stat Med 2023; 42:3956-3980. [PMID: 37665049 DOI: 10.1002/sim.9842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 06/09/2023] [Accepted: 06/20/2023] [Indexed: 09/05/2023]
Abstract
The power and commensurate prior distributions are informative prior distributions that incorporate historical data as prior knowledge in Bayesian analysis to improve inference about a phenomenon under study. Although these distributions have been developed for analyzing non-spatial data, little or no attention has been given to spatial geostatistical data. In this study, we extend these informative prior distributions to a Gaussian spatial process, which enables the elicitation of prior knowledge from historical geostatistical data for Bayesian analysis. Three informative prior distributions were developed for spatial modeling, and an efficient Markov Chain Monte Carlo algorithm was developed for performing Bayesian analysis. Simulation studies were used to assess the adequacy of the informative prior distributions. Hierarchical models combined with the developed informative prior distributions were applied to analyze transcranial magnetic stimulation (TMS) brain mapping data to gain insights into the spatial pattern of a patient's response to motor cortex stimulation. The study quantified the uncertainty in motor response and found that the primary motor cortex of the hand is responsible for most of the movement of the right first dorsal interosseous muscle. The findings provide a deeper understanding of the neural mechanisms underlying motor function and ultimately aid the improvement of treatment options for individuals with health issues.
Collapse
Affiliation(s)
- Osafu Augustine Egbon
- Institute of Mathematical and Computer Sciences, Universidade de São Paulo, São Carlos, Brazil
- Department of Statistics, Universidade Federal de São Carlos, São Carlos, Brazil
- Institute of Statistics, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Diego Nascimento
- Departamento de Matemáticas, Universidad de Atacama, Copiapó, Chile
| | - Francisco Louzada
- Institute of Mathematical and Computer Sciences, Universidade de São Paulo, São Carlos, Brazil
| |
Collapse
|
32
|
Pawel S, Aust F, Held L, Wagenmakers EJ. Power priors for replication studies. TEST-SPAIN 2023; 33:127-154. [PMID: 38585622 PMCID: PMC10991061 DOI: 10.1007/s11749-023-00888-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 08/31/2023] [Indexed: 04/09/2024]
Abstract
The ongoing replication crisis in science has increased interest in the methodology of replication studies. We propose a novel Bayesian analysis approach using power priors: the likelihood of the original study's data is raised to the power of α, and then used as the prior distribution in the analysis of the replication data. Posterior distributions and Bayes factor hypothesis tests related to the power parameter α quantify the degree of compatibility between the original and replication study. Inferences for other parameters, such as effect sizes, dynamically borrow information from the original study. The degree of borrowing depends on the conflict between the two studies. The practical value of the approach is illustrated on data from three replication studies, and the connection to hierarchical modeling approaches is explored. We generalize the known connection between normal power priors and normal hierarchical models for fixed parameters and show that normal power prior inferences with a beta prior on the power parameter α align with normal hierarchical model inferences using a generalized beta prior on the relative heterogeneity variance I². The connection illustrates that power prior modeling is unnatural from the perspective of hierarchical modeling, since it corresponds to specifying priors on a relative rather than an absolute heterogeneity scale.
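In condensed notation (ours, following the abstract), the power prior construction for a replication study is:

```latex
% Power prior for a replication study (notation ours): the original study's
% likelihood, raised to the power alpha, serves as the prior for the analysis
% of the replication data d_r; d_o denotes the original data.
\begin{align*}
  \pi(\theta \mid d_o, \alpha)
    &\propto L(\theta; d_o)^{\alpha}\, \pi_0(\theta), \qquad \alpha \in [0, 1],\\
  \pi(\theta \mid d_o, d_r, \alpha)
    &\propto L(\theta; d_r)\, L(\theta; d_o)^{\alpha}\, \pi_0(\theta).
\end{align*}
```

Setting α = 0 ignores the original study, α = 1 pools both studies fully, and placing a prior on α lets the data determine the degree of compatibility.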
Collapse
Affiliation(s)
- Samuel Pawel
- Epidemiology, Biostatistics and Prevention Institute (EBPI), Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland
| | - Frederik Aust
- Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands
| | - Leonhard Held
- Epidemiology, Biostatistics and Prevention Institute (EBPI), Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland
| | - Eric-Jan Wagenmakers
- Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
33
|
Mackay EK, Springford A. Evaluating treatments in rare indications warrants a Bayesian approach. Front Pharmacol 2023; 14:1249611. [PMID: 37799966 PMCID: PMC10547867 DOI: 10.3389/fphar.2023.1249611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 09/11/2023] [Indexed: 10/07/2023] Open
Abstract
Evaluating efficacy and real-world effectiveness for novel therapies targeting rare mutations or patient subpopulations with unmet needs is a growing challenge in health economics and outcomes research (HEOR). In these settings it may be difficult to recruit enough patients to run adequately powered randomized clinical trials, resulting in greater reliance on single-arm trials or basket trial designs. Additionally, evidence networks for performing network meta-analysis may be sparse or disconnected when comparing available treatments in narrower patient populations. These challenges create an increased need for use of appropriate methods for handling small sample sizes, structural modelling assumptions and more nuanced decision rules to arrive at "best-available evidence" on comparative and non-comparative efficacy/effectiveness. We advocate for greater use of Bayesian methods to address these challenges as they can facilitate efficient and transparent borrowing of information across varied data sources under flexible modelling assumptions, probabilistic sensitivity analysis to assess model assumptions, and more nuanced decision-making where limited power reduces the utility of classical frequentist hypothesis testing. We illustrate how Bayesian methods have been recently used to overcome several challenges of rare indications in HEOR, including approaches to borrowing information from external data sources, evaluation of efficacy in basket trials, and incorporating non-randomized studies into network meta-analysis. Lastly, we provide several recommendations for HEOR practitioners on appropriate use of Bayesian methods to address challenges in the rare disease setting.
Collapse
|
34
|
Warren JL, Sundaram M, Pitzer VE, Omer SB, Weinberger DM. Incorporating efficacy data from initial trials into subsequent evaluations: Application to vaccines against respiratory syncytial virus. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.03.27.23287639. [PMID: 37034783 PMCID: PMC10081417 DOI: 10.1101/2023.03.27.23287639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Background When a randomized controlled trial fails to demonstrate statistically significant efficacy against the primary endpoint, a potentially costly new trial would need to be conducted to receive licensure. Incorporating data from previous trials might allow for the conduct of more efficient follow-up trials to demonstrate efficacy, speeding the availability of effective vaccines. Methods Based on the outcomes from a failed trial of a maternal vaccine against respiratory syncytial virus (RSV), we simulated data for a new Bayesian group-sequential trial. The data were analyzed either ignoring data from the previous trial (i.e., weakly informative prior distributions) or using prior distributions that incorporate the historical data into the analysis. We evaluated scenarios where the efficacy in the new trial was the same, greater than, or less than the efficacy in the original trial. For each scenario, we evaluated the statistical power and type I error rate for estimating the vaccine effect following interim analyses. Results If a stringent threshold is used to control the type I error rate, the analyses that incorporated historical data had a small advantage over trials that did not. If control of type I error is less important (e.g., in a post-licensure evaluation), the incorporation of historical data can provide a substantial boost in efficiency. Conclusions Due to the need to control the type I error rate in trials used to license a vaccine, the incorporation of historical data provides little additional benefit in terms of stopping the trial early. However, these statistical approaches could be promising in evaluations that use real-world evidence following licensure.
Collapse
Affiliation(s)
- Joshua L. Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
| | - Maria Sundaram
- Marshfield Clinic Research Institute, Center for Clinical Epidemiology & Population Health, Marshfield, WI, USA
| | - Virginia E. Pitzer
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
| | - Saad B. Omer
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
- Yale Institute of Global Health, New Haven, CT, United States
- Yale School of Medicine, New Haven, CT, United States
| | - Daniel M. Weinberger
- Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
| |
Collapse
|
35
|
Zhang H, Shen Y, Li J, Ye H, Chiang AY. Adaptively leveraging external data with robust meta-analytical-predictive prior using empirical Bayes. Pharm Stat 2023; 22:846-860. [PMID: 37220997 DOI: 10.1002/pst.2315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 02/02/2023] [Accepted: 05/03/2023] [Indexed: 05/25/2023]
Abstract
The robust meta-analytical-predictive (rMAP) prior is a popular method to robustly leverage external data. However, a mixture coefficient would need to be pre-specified based on the anticipated level of prior-data conflict. This can be very challenging at the study design stage. We propose a novel empirical Bayes robust MAP (EB-rMAP) prior to address this practical need and adaptively leverage external/historical data. Built on Box's prior predictive p-value, the EB-rMAP prior framework balances between model parsimony and flexibility through a tuning parameter. The proposed framework can be applied to binomial, normal, and time-to-event endpoints. Implementation of the EB-rMAP prior is also computationally efficient. Simulation results demonstrate that the EB-rMAP prior is robust in the presence of prior-data conflict while preserving statistical power. The proposed EB-rMAP prior is then applied to a clinical dataset that comprises 10 oncology clinical trials, including the prospective study.
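A minimal sketch of the general idea follows (Python; the normal-normal setting, the rule mapping Box's prior predictive p-value to a mixture weight, and all numbers are illustrative assumptions, not the EB-rMAP paper's exact procedure).

```python
# Minimal sketch of a robust-MAP-style mixture prior whose informative weight is
# set from Box's prior predictive p-value. Thresholding rule and numbers are
# illustrative assumptions only.
import numpy as np
from scipy.stats import norm

m0, s0 = 0.0, 0.3      # informative (MAP-like) prior for the control mean
mv, sv = 0.0, 10.0     # vague mixture component
ybar, se = 0.55, 0.25  # current control data: sample mean and its standard error

# Box's prior predictive p-value of ybar under the informative component.
p_box = 2 * (1 - norm.cdf(abs(ybar - m0) / np.sqrt(s0**2 + se**2)))

# Illustrative empirical-Bayes rule: full weight when there is no sign of
# prior-data conflict, shrinking towards zero as the p-value gets small.
w_inf = min(1.0, p_box / 0.2)

# Normal-normal conjugate update for each component, then mix by the posterior
# component weights (prior weight times marginal likelihood of ybar).
def post_mean(m, s):
    v = 1 / (1 / s**2 + 1 / se**2)
    return v * (m / s**2 + ybar / se**2)

marg_inf = norm.pdf(ybar, m0, np.sqrt(s0**2 + se**2))
marg_vag = norm.pdf(ybar, mv, np.sqrt(sv**2 + se**2))
w_post = w_inf * marg_inf / (w_inf * marg_inf + (1 - w_inf) * marg_vag)
mix_mean = w_post * post_mean(m0, s0) + (1 - w_post) * post_mean(mv, sv)

print(f"Box p = {p_box:.3f}, prior weight = {w_inf:.2f}, "
      f"posterior weight = {w_post:.2f}, posterior mean = {mix_mean:.3f}")
```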
Collapse
Affiliation(s)
- Hongtao Zhang
- Biostatistics and Research Decision Sciences, Merck & Co., Inc., North Wales, Pennsylvania, USA
| | - Yueqi Shen
- Department of Biostatistics, University of North Carolina, Chapel Hill, North Carolina, USA
| | - Judy Li
- GBDS, Bristol Myers Squibb, San Diego, California, USA
| | - Han Ye
- College of Business, Lehigh University, Bethlehem, Pennsylvania, USA
| | - Alan Y Chiang
- Biometrics, Lyell Immunopharma, Seattle, Washington, USA
| |
Collapse
|
36
|
Hatswell AJ. Incorporating Prior Beliefs Into Meta-Analyses of Health-State Utility Values Using the Bayesian Power Prior. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2023; 26:1389-1397. [PMID: 37187235 DOI: 10.1016/j.jval.2023.04.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 04/17/2023] [Accepted: 04/28/2023] [Indexed: 05/17/2023]
Abstract
OBJECTIVES Health-state utility values (HSUVs) directly affect estimates of Quality-Adjusted Life-Years and thus the cost-utility estimates. In practice a single preferred value (SPV) is often selected for HSUVs, despite meta-analysis being an option when multiple (credible) HSUVs are available. Nevertheless, the SPV approach is often reasonable because meta-analysis implicitly considers all HSUVs as equally relevant. This article presents a method for incorporating weights into HSUV synthesis, allowing more relevant studies to have greater influence. METHODS Using 4 case studies in lung cancer, hemodialysis, compensated liver cirrhosis, and diabetic retinopathy blindness, a Bayesian Power Prior (BPP) approach is used to incorporate beliefs on study applicability, reflecting the authors' perceived suitability for UK decision making. Older studies, non-UK value sets, and vignette studies are thus downweighted (but not disregarded). BPP HSUV estimates were compared with an SPV, random effects meta-analysis, and fixed effects meta-analysis. Sensitivity analyses were conducted by iteratively updating the case studies, using alternative weighting methods, and using simulated data. RESULTS Across all case studies, SPVs did not accord with meta-analyzed values, and fixed effects meta-analysis produced unrealistically narrow CIs. Point estimates from random effects meta-analysis and BPP models were similar in the final models, although the BPP reflected additional uncertainty as wider credible intervals, particularly when fewer studies were available. Differences in point estimates were seen with iterative updating, alternative weighting approaches, and simulated data. CONCLUSIONS The concept of the BPP can be adapted for synthesizing HSUVs, incorporating expert opinion on relevance. Because of the downweighting of studies, the BPP reflected structural uncertainty as wider credible intervals, with all forms of synthesis showing meaningful differences compared with SPVs. These differences would have implications for both cost-utility point estimates and probabilistic analyses.
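In the simplest fixed-effect, known-variance setting, power-prior weighting of HSUVs reduces to precision weighting scaled by the applicability weights. A small sketch follows (Python; hypothetical utilities, standard errors, and weights); the paper's models additionally consider random effects.

```python
# Minimal sketch: power-prior-weighted pooling of health-state utility values,
# fixed-effect with known standard errors and a flat prior on the common mean.
# Utilities, SEs and applicability weights are hypothetical illustrations.
import numpy as np

utilities = np.array([0.72, 0.65, 0.80, 0.69])   # reported HSUVs
ses       = np.array([0.03, 0.05, 0.04, 0.06])   # their standard errors
weights   = np.array([1.0, 0.5, 0.25, 0.75])     # applicability weights in [0, 1]

precisions = weights / ses**2                    # power prior scales each likelihood
pooled_mean = np.sum(precisions * utilities) / np.sum(precisions)
pooled_se = np.sqrt(1 / np.sum(precisions))
print(f"weighted pooled HSUV = {pooled_mean:.3f} (SE {pooled_se:.3f})")

# For comparison, the unweighted (all weights = 1) fixed-effect pooling:
prec1 = 1 / ses**2
print(f"unweighted pooled HSUV = {np.sum(prec1 * utilities) / np.sum(prec1):.3f} "
      f"(SE {np.sqrt(1 / np.sum(prec1)):.3f})")
```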
Collapse
Affiliation(s)
- Anthony J Hatswell
- Delta Hat Limited, Nottingham, England, UK; Department of Statistical Science, University College London, London, England, UK.
| |
Collapse
|
37
|
Williamson SF, Williams CJ, Lendrem BC, Wilson KJ. Sample size determination for point-of-care COVID-19 diagnostic tests: a Bayesian approach. Diagn Progn Res 2023; 7:17. [PMID: 37596684 PMCID: PMC10436636 DOI: 10.1186/s41512-023-00153-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Accepted: 07/14/2023] [Indexed: 08/20/2023] Open
Abstract
BACKGROUND In a pandemic setting, it is critical to evaluate and deploy accurate diagnostic tests rapidly. This relies heavily on the sample size chosen to assess the test accuracy (e.g. sensitivity and specificity) during the diagnostic accuracy study. Too small a sample size will lead to imprecise estimates of the accuracy measures, whereas too large a sample size may delay the development process unnecessarily. This study considers use of a Bayesian method to guide sample size determination for diagnostic accuracy studies, with application to COVID-19 rapid viral detection tests. Specifically, we investigate whether utilising existing information (e.g. from preceding laboratory studies) within a Bayesian framework can reduce the required sample size, whilst maintaining test accuracy to the desired precision. METHODS The method presented is based on the Bayesian concept of assurance which, in this context, represents the unconditional probability that a diagnostic accuracy study yields sensitivity and/or specificity intervals with the desired precision. We conduct a simulation study to evaluate the performance of this approach in a variety of COVID-19 settings, and compare it to commonly used power-based methods. An accompanying interactive web application is available, which can be used by researchers to perform the sample size calculations. RESULTS Results show that the Bayesian assurance method can reduce the required sample size for COVID-19 diagnostic accuracy studies, compared to standard methods, by making better use of laboratory data, without loss of performance. Increasing the size of the laboratory study can further reduce the required sample size in the diagnostic accuracy study. CONCLUSIONS The method considered in this paper is an important advancement for increasing the efficiency of the evidence development pathway. It has highlighted that the trade-off between lab study sample size and diagnostic accuracy study sample size should be carefully considered, since establishing an adequate lab sample size can bring longer-term gains. Although emphasis is on its use in the COVID-19 pandemic setting, where we envisage it will have the most impact, it can be usefully applied in other clinical areas.
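A simulation-based computation of assurance in this setting can be sketched briefly (Python; hypothetical laboratory counts and target precision, with the same Beta prior used as both design and analysis prior for simplicity, which is a simplification of the paper's framework).

```python
# Minimal sketch: Bayesian assurance that a diagnostic accuracy study of size n
# (diseased subjects) yields a sensitivity credible interval narrower than a
# target width, with a Beta prior built from hypothetical laboratory data.
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(42)

lab_tp, lab_n = 46, 50                       # hypothetical lab study: TP / diseased
a0, b0 = 1 + lab_tp, 1 + lab_n - lab_tp      # Beta prior from the lab data
target_width, n_study, n_sim = 0.10, 200, 20_000

def assurance(a_prior, b_prior):
    sens = beta.rvs(a_prior, b_prior, size=n_sim, random_state=rng)
    tp = binom.rvs(n_study, sens, random_state=rng)
    a_post, b_post = a_prior + tp, b_prior + n_study - tp
    width = beta.ppf(0.975, a_post, b_post) - beta.ppf(0.025, a_post, b_post)
    return np.mean(width <= target_width)

print("assurance with lab-informed prior:", assurance(a0, b0))
print("assurance with vague Beta(1, 1):  ", assurance(1.0, 1.0))
```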
Collapse
Affiliation(s)
- S Faye Williamson
- Biostatistics Research Group, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK.
| | - Cameron J Williams
- NIHR Newcastle In Vitro Diagnostic Cooperative, Newcastle University, Newcastle upon Tyne, UK
| | - B Clare Lendrem
- NIHR Newcastle In Vitro Diagnostic Cooperative, Newcastle University, Newcastle upon Tyne, UK
| | - Kevin J Wilson
- School of Mathematics, Statistics and Physics, Newcastle University, Newcastle upon Tyne, UK
| |
Collapse
|
38
|
Xu T, Shi H, Lin R. Bayesian single-to-double arm transition design using both short-term and long-term endpoints. Pharm Stat 2023; 22:588-604. [PMID: 36755420 PMCID: PMC11323481 DOI: 10.1002/pst.2292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 12/14/2022] [Accepted: 01/02/2023] [Indexed: 02/10/2023]
Abstract
The choice between single-arm designs versus randomized double-arm designs has been contentiously debated in the literature of phase II oncology trials. Recently, as a compromise, the single-to-double arm transition design was proposed, combining the two designs into one trial over two stages. Successful implementation of the two-stage transition design requires a suspension period at the end of the first stage to collect the response data of the already enrolled patients. When the evaluation of the primary efficacy endpoint is overly long, the between-stage suspension period may unfavorably prolong the trial duration and cause a delay in treating future eligible patients. To accelerate the trial, we propose a Bayesian single-to-double arm design with short-term endpoints (BSDS), where an intermediate short-term endpoint is used for making early termination decisions at the end of the single-arm stage, followed by an evaluation of the long-term endpoint at the end of the subsequent double-arm stage. Bayesian posterior probabilities are used as the primary decision-making tool at the end of the trial. Design calibration steps are proposed for this Bayesian monitoring process to control the frequentist operating characteristics and minimize the expected sample size. Extensive simulation studies have demonstrated that our design has comparable power and average sample size but a much shorter trial duration than conventional single-to-double arm design. Applications of the design are illustrated using two phase II oncology trials with binary endpoints.
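The interim decision rule at the end of the single-arm stage can be illustrated with a Beta-Binomial posterior probability (Python; the response counts, null rate, and cut-off below are hypothetical, not the calibrated BSDS values).

```python
# Minimal sketch: an interim go/no-go rule on a short-term binary endpoint using
# a Beta-Binomial posterior probability. Values are hypothetical illustrations.
from scipy.stats import beta

p_null = 0.20            # uninteresting response rate for the short-term endpoint
responses, n = 9, 30     # hypothetical single-arm stage data
a_post, b_post = 1 + responses, 1 + n - responses

prob_promising = 1 - beta.cdf(p_null, a_post, b_post)  # P(rate > p_null | data)
go = prob_promising > 0.80                             # illustrative cut-off
print(f"P(response rate > {p_null}) = {prob_promising:.3f} -> {'go' if go else 'stop'}")
```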
Collapse
Affiliation(s)
- Tianlin Xu
- Department of Biostatistics and Data Science, The University of Texas Health Science Center at Houston, Houston, Texas, USA
| | - Haolun Shi
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| |
Collapse
|
39
|
Visalli A, Capizzi M, Ambrosini E, Kopp B, Vallesi A. P3-like signatures of temporal predictions: a computational EEG study. Exp Brain Res 2023:10.1007/s00221-023-06656-z. [PMID: 37354350 DOI: 10.1007/s00221-023-06656-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 06/18/2023] [Indexed: 06/26/2023]
Abstract
Many cognitive processes, ranging from perception to action, depend on the ability to predict the timing of forthcoming events. Yet, how the brain uses predictive models in the temporal domain is still an unsolved question. In previous work, we began to explore the neural correlates of temporal predictions by using a computational approach in which an ideal Bayesian observer learned the temporal probabilities of target onsets in a simple reaction time task. Because the task was specifically designed to disambiguate updating of predictive models and surprise, changes in temporal probabilities were explicitly cued. However, in the real world, we are usually incidentally exposed to changes in the statistics of the environment. Here, we thus aimed to further investigate the electroencephalographic (EEG) correlates of Bayesian belief updating and surprise associated with incidental learning of temporal probabilities. In line with our previous EEG study, results showed distinct P3-like modulations for updating and surprise. While surprise was indexed by an early fronto-central P3-like modulation, updating was associated with a later and more posterior P3 modulation. Moreover, updating was associated with a P2-like potential at centro-parietal electrodes, likely capturing integration processes between prior beliefs and likelihood of the observed event. These findings support previous evidence of trial-by-trial variability of P3 amplitudes as an index of dissociable inferential processes. Coupled with our previous findings, the present study strongly bolsters the view of the P3 as a key brain signature of temporal Bayesian inference. Data and scripts are shared on OSF: osf.io/sdy8j/.
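For readers unfamiliar with the two model-based regressors, the sketch below (Python; a Dirichlet-categorical ideal observer over discretized target onsets, with KL divergence between predictive distributions used as a simple proxy for belief updating) computes trial-wise surprise and updating; these modelling choices are illustrative assumptions, not the paper's exact observer.

```python
# Minimal sketch: trial-wise surprise and belief updating for an ideal observer
# learning a discrete distribution over target-onset bins.
# surprise = -log predictive probability of the observed onset;
# updating = KL divergence between post- and pre-trial predictive distributions.
import numpy as np

onsets = [0, 0, 1, 2, 1, 0, 2, 2, 2, 2]   # hypothetical sequence of onset bins
alpha = np.ones(3)                        # Dirichlet prior counts over 3 bins

for t, obs in enumerate(onsets):
    prior_pred = alpha / alpha.sum()      # predictive distribution before the trial
    surprise = -np.log(prior_pred[obs])
    alpha[obs] += 1                       # Bayesian count update
    post_pred = alpha / alpha.sum()
    updating = np.sum(post_pred * np.log(post_pred / prior_pred))  # KL(post || prior)
    print(f"trial {t}: surprise = {surprise:.3f}, updating = {updating:.3f}")
```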
Collapse
Affiliation(s)
- Antonino Visalli
- Department of Neuroscience, University of Padova, 35121, Padua, Italy.
- Padova Neuroscience Center, University of Padova, Padua, Italy.
- IRCCS San Camillo Hospital, 30126, Venice, Italy.
| | - M Capizzi
- Brain and Behavior Research Center (CIMCYC), Department of Experimental Psychology, University of Granada, Granada, Spain
| | - E Ambrosini
- Department of Neuroscience, University of Padova, 35121, Padua, Italy
- Padova Neuroscience Center, University of Padova, Padua, Italy
- Department of General Psychology, University of Padova, Padua, Italy
| | - B Kopp
- Department of Neurology, Hannover Medical School, 30625, Hannover, Germany
| | - Antonino Vallesi
- Department of Neuroscience, University of Padova, 35121, Padua, Italy.
- Padova Neuroscience Center, University of Padova, Padua, Italy.
| |
Collapse
|
40
|
Han Z, Zhang Q, Wang M, Ye K, Chen MH. On efficient posterior inference in normalized power prior Bayesian analysis. Biom J 2023; 65:e2200194. [PMID: 36960489 DOI: 10.1002/bimj.202200194] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 11/24/2022] [Accepted: 02/15/2023] [Indexed: 03/25/2023]
Abstract
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter δ ∈ [0, 1], which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to δ such that the posterior of δ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different δ during the posterior sampling. This makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned efforts by sampling from the power prior with δ = 0 and δ = 1 only. Such a posterior sampling procedure can facilitate the use of a random δ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
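For reference, the normalized power prior discussed above can be written as (notation ours):

```latex
% Normalized power prior (notation ours): C(delta) is the normalizing factor
% that makes the joint prior proper and keeps the analysis consistent with the
% likelihood principle; D0 is the historical data, D the current data.
\begin{align*}
  \pi(\theta, \delta \mid D_0)
    &= \frac{L(\theta; D_0)^{\delta}\, \pi_0(\theta)}{C(\delta)}\, \pi(\delta),
  \qquad
  C(\delta) = \int L(\theta; D_0)^{\delta}\, \pi_0(\theta)\, d\theta,\\
  \pi(\theta, \delta \mid D, D_0)
    &\propto L(\theta; D)\, \pi(\theta, \delta \mid D_0).
\end{align*}
```

The computational difficulty is that C(δ) must be evaluated across the values of δ visited during posterior sampling; as stated in the abstract, the proposed framework bypasses this by sampling from the power prior at δ = 0 and δ = 1 only.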
Collapse
Affiliation(s)
- Zifei Han
- School of Statistics, University of International Business and Economics, Beijing, China
| | - Qiang Zhang
- School of Statistics, University of International Business and Economics, Beijing, China
| | - Min Wang
- Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, Texas, USA
| | - Keying Ye
- Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, Texas, USA
| | - Ming-Hui Chen
- Department of Statistics, University of Connecticut, Storrs, Connecticut, USA
| |
Collapse
|
41
|
Brizzi F, Steiert B, Pang H, Diack C, Lomax M, Peck R, Morgan Z, Soubret A. A model-based approach for historical borrowing, with an application to neovascular age-related macular degeneration. Stat Methods Med Res 2023; 32:1064-1081. [PMID: 37082812 DOI: 10.1177/09622802231155597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/22/2023]
Abstract
Bayesian historical borrowing has recently attracted growing interest due to the increasing availability of historical control data, as well as improved computational methodology and software. In this article, we argue that the statistical models used for borrowing may be suboptimal when they do not adjust for differing factors across historical studies such as covariates, dosing regimen, etc. We propose an alternative approach to address these shortcomings. We start by constructing a historical model based on subject-level historical data to accurately characterize the control treatment by adjusting for known between trials differences. This model is subsequently used to predict the control arm response in the current trial, enabling the derivation of a model-informed prior for the treatment effect parameter of another (potentially simpler) model used to analyze the trial efficacy (i.e. the trial model). Our approach is applied to neovascular age-related macular degeneration trials, employing a cross-sectional regression trial model, and a longitudinal non-linear mixed-effects drug-disease-trial historical model. The latter model characterizes the relationship between clinical response, drug exposure and baseline covariates so that the derived model-informed prior seamlessly adapts to the trial population and can be extrapolated to a different dosing regimen. This approach can yield a more accurate prior for borrowing, thus optimizing gains in efficiency (e.g. increasing power or reducing the sample size) in future trials.
Collapse
Affiliation(s)
- Francesco Brizzi
- Predictive Modelling and Data Analytics, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Switzerland
| | - Bernhard Steiert
- Predictive Modelling and Data Analytics, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Switzerland
| | - Herbert Pang
- Methods Collaboration & Outreach (MCO) Enabling Platform, Genentech Inc., South San Francisco, USA
| | - Cheikh Diack
- Predictive Modelling and Data Analytics, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Switzerland
| | - Mark Lomax
- Data & Statistical Sciences, F. Hoffman-La Roche Ltd, Welwyn Garden City, UK
| | - Robbie Peck
- Data & Statistical Sciences, Hoffmann-La Roche AG, Basel, Switzerland
| | - Zoe Morgan
- Data & Statistical Sciences, Hoffmann-La Roche AG, Basel, Switzerland
| | - Antoine Soubret
- Predictive Modelling and Data Analytics, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Switzerland
| |
Collapse
|
42
|
Bon JJ, Bretherton A, Buchhorn K, Cramb S, Drovandi C, Hassan C, Jenner AL, Mayfield HJ, McGree JM, Mengersen K, Price A, Salomone R, Santos-Fernandez E, Vercelloni J, Wang X. Being Bayesian in the 2020s: opportunities and challenges in the practice of modern applied Bayesian statistics. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2023; 381:20220156. [PMID: 36970822 PMCID: PMC10041356 DOI: 10.1098/rsta.2022.0156] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 01/06/2023] [Indexed: 06/18/2023]
Abstract
Building on a strong foundation of philosophy, theory, methods and computation over the past three decades, Bayesian approaches are now an integral part of the toolkit for most statisticians and data scientists. Whether they are dedicated Bayesians or opportunistic users, applied professionals can now reap many of the benefits afforded by the Bayesian paradigm. In this paper, we touch on six modern opportunities and challenges in applied Bayesian statistics: intelligent data collection, new data sources, federated analysis, inference for implicit models, model transfer and purposeful software products. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
Collapse
Affiliation(s)
- Joshua J. Bon
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Adam Bretherton
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Katie Buchhorn
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Susanna Cramb
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Christopher Drovandi
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Conor Hassan
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Adrianne L. Jenner
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Helen J. Mayfield
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Public Health, The University of Queensland, Saint Lucia, Queensland, Australia
| | - James M. McGree
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Kerrie Mengersen
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Aiden Price
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Robert Salomone
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Edgar Santos-Fernandez
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Julie Vercelloni
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Xiaoyu Wang
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| |
Collapse
|
43
|
Duputel B, Stallard N, Montestruc F, Zohar S, Ursino M. Using dichotomized survival data to construct a prior distribution for a Bayesian seamless Phase II/III clinical trial. Stat Methods Med Res 2023; 32:963-977. [PMID: 36919403 PMCID: PMC10521165 DOI: 10.1177/09622802231160554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/16/2023]
Abstract
Master protocol designs allow for simultaneous comparison of multiple treatments or disease subgroups. Master protocols can also be designed as seamless studies, in which two or more clinical phases are considered within the same trial. They can be divided into two categories: operationally seamless, in which the two phases are separated into two independent studies, and inferentially seamless, in which the interim analysis is considered an adaptation of the study. Bayesian seamless designs have scarcely been studied. Our aim is to propose and compare Bayesian operationally seamless Phase II/III designs using a binary endpoint for the first stage and a time-to-event endpoint for the second stage. At the end of Phase II, arm selection is based on posterior (futility) and predictive (selection) probabilities. The results of the first phase are then incorporated into the prior distributions of a time-to-event model. Simulation studies showed that Bayesian operationally seamless designs can approach the performance of their inferentially seamless counterpart, achieving higher simulated power than the frequentist operationally seamless design.
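The sketch below illustrates the kind of end-of-Phase-II screening the abstract describes, using Beta-Binomial posteriors for a binary endpoint; the specific futility and selection rules, thresholds, and data are simplified stand-ins rather than the paper's actual criteria.

```python
# Minimal sketch of binary-endpoint arm screening at the end of Phase II.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p0 = 0.20                                  # uninteresting response rate (illustrative)
n_per_arm = 40
responders = {"arm A": 9, "arm B": 14, "arm C": 6}
a0, b0 = 1.0, 1.0                          # Beta(1, 1) prior

# Posterior futility: Pr(response rate <= p0 | Phase II data) for each arm
futility = {arm: stats.beta.cdf(p0, a0 + r, b0 + n_per_arm - r)
            for arm, r in responders.items()}

# Selection: Monte Carlo Pr(arm has the highest response rate)
draws = np.vstack([rng.beta(a0 + r, b0 + n_per_arm - r, 20_000)
                   for r in responders.values()])
pr_best = np.bincount(draws.argmax(axis=0), minlength=len(responders)) / draws.shape[1]

for i, arm in enumerate(responders):
    print(f"{arm}: Pr(p <= {p0}) = {futility[arm]:.3f}, Pr(best) = {pr_best[i]:.3f}")
```

In the design itself, the posterior of the selected arm's response rate would then be dichotomized and carried into the prior for the Phase III time-to-event model; that step is not reproduced here.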
Collapse
Affiliation(s)
- Benjamin Duputel
- Université Paris Cité, Sorbonne Université, Inserm, Centre de Recherche des Cordeliers, Paris, France
- Inria, HeKA, Paris, France
- eXYSTAT, Malakoff, France
| | - Nigel Stallard
- Warwick Clinical Trials Unit, Warwick Medical School, University of Warwick, Coventry, UK
| | | | - Sarah Zohar
- Université Paris Cité, Sorbonne Université, Inserm, Centre de Recherche des Cordeliers, Paris, France
- Inria, HeKA, Paris, France
| | - Moreno Ursino
- Université Paris Cité, Sorbonne Université, Inserm, Centre de Recherche des Cordeliers, Paris, France
- Inria, HeKA, Paris, France
- Unit of Clinical Epidemiology, Assistance Publique-Hôpitaux de Paris, CHU Robert Debré, Paris, France
| |
Collapse
|
44
|
West BT, Wagner J, Coffey S, Elliott MR. Deriving Priors for Bayesian Prediction of Daily Response Propensity in Responsive Survey Design: Historical Data Analysis Versus Literature Review. JOURNAL OF SURVEY STATISTICS AND METHODOLOGY 2023; 11:367-392. [PMID: 37038601 PMCID: PMC10080219 DOI: 10.1093/jssam/smab036] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Responsive survey design (RSD) aims to increase the efficiency of survey data collection via live monitoring of paradata and the introduction of protocol changes when survey errors and increased costs seem imminent. Daily predictions of response propensity for all active sampled cases are among the most important quantities for live monitoring of data collection outcomes, making sound predictions of these propensities essential for the success of RSD. Because it relies on real-time updates of prior beliefs about key design quantities, such as predicted response propensities, RSD stands to benefit from Bayesian approaches. However, empirical evidence of the merits of these approaches is lacking in the literature, and the derivation of informative prior distributions is required for these approaches to be effective. In this paper, we evaluate the ability of two approaches to deriving prior distributions for the coefficients defining daily response propensity models to improve predictions of daily response propensity in a real data collection employing RSD. The first approach involves analyses of historical data from the same survey, and the second approach involves literature review. We find that Bayesian methods based on these two approaches result in higher-quality predictions of response propensity than more standard approaches ignoring prior information. This is especially true during the early-to-middle periods of data collection, when survey managers using RSD often consider interventions.
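A minimal sketch of the first approach (priors derived from a historical analysis of the same survey) might look like the following: historical coefficient estimates become a normal prior for a current-wave logistic response-propensity model, fitted here by MAP estimation; the variable names, values, and single-snapshot simplification are illustrative assumptions, not the paper's models.

```python
# Hedged sketch: normal prior from historical response-propensity coefficients,
# applied to sparse early data from the current wave via MAP estimation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)

# Prior from the historical analysis: coefficient estimates and their SEs
prior_mean = np.array([-2.0, 0.8, -0.05])    # intercept, prior-contact flag, days in field
prior_sd = np.array([0.5, 0.3, 0.02])

# Early current-wave data (simulated): too few cases to fit stably on their own
n = 150
X = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.integers(1, 30, n)])
true_beta = np.array([-1.8, 0.6, -0.04])
y = rng.binomial(1, expit(X @ true_beta))

def neg_log_posterior(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))                 # Bernoulli log-likelihood
    logprior = -0.5 * np.sum(((beta - prior_mean) / prior_sd) ** 2)  # independent normal priors
    return -(loglik + logprior)

map_fit = minimize(neg_log_posterior, x0=prior_mean, method="BFGS")
print("MAP coefficients with historical prior:", np.round(map_fit.x, 3))
print("Predicted propensity (contacted case, day 5):",
      round(float(expit(np.array([1, 1, 5]) @ map_fit.x)), 3))
```

With little current data the estimates sit close to the historical prior, and they move toward the current-wave likelihood as the data collection period progresses, which is the behaviour the paper exploits for early-to-middle field periods.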
Collapse
Affiliation(s)
- Brady T West
- Survey Research Center, Institute for Social Research, University of Michigan-Ann Arbor, 426 Thompson Street, Ann Arbor, MI 48106, USA
| | - James Wagner
- is a Research Associate Professor at the Survey Research Center, Institute for Social Research, University of Michigan-Ann Arbor, 426 Thompson Street, Ann Arbor, MI 48106, USA
| | - Stephanie Coffey
- is a Mathematical Statistician with the U.S. Census Bureau, 4600 Silver Hill Road, Washington, DC 20233, USA
| | - Michael R Elliott
- is a Research Professor at the Survey Research Center, Institute for Social Research and Professor at the Department of Biostatistics, University of Michigan-Ann Arbor, 426 Thompson Street, Ann Arbor, MI 48106, USA
| |
Collapse
|
45
|
Alt EM, Psioda MA, Ibrahim JG. A Bayesian approach to study design and analysis with type I error rate control for response variables of mixed types. Stat Med 2023; 42:1722-1740. [PMID: 36929939 DOI: 10.1002/sim.9696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 11/29/2022] [Accepted: 02/20/2023] [Indexed: 03/18/2023]
Abstract
There has been increased interest in the design and analysis of studies consisting of multiple response variables of mixed types. For example, in clinical trials, it is often desirable to establish the efficacy of a treatment on both primary and secondary outcomes. In this article, we develop Bayesian approaches for hypothesis testing and study planning for data consisting of multiple response variables of mixed types with covariates. We assume that the responses are correlated via a Gaussian copula, and that the model for each response is, marginally, a generalized linear model (GLM). Taking a fully Bayesian approach, the proposed method enables inference based on the joint posterior distribution of the parameters. Under some mild conditions, we show that the joint distribution of the posterior probabilities under any Bayesian analysis converges to a Gaussian copula distribution as the sample size tends to infinity. Using this result, we develop an approach to control the type I error rate under multiple testing. Simulation results indicate that the method is more powerful than fitting separate marginal regression models and correcting for multiplicity with the Bonferroni-Holm method. We also develop a Bayesian approach to sample size determination in the presence of response variables of mixed types, extending the concept of probability of success (POS) to multiple response variables of mixed types.
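For intuition on the modelling assumption, the sketch below simulates one binary and one continuous response linked by a Gaussian copula, each with its own marginal GLM; the parameter values and single covariate are illustrative, and the sketch does not implement the paper's posterior inference or type I error control.

```python
# Toy simulation of mixed-type outcomes joined by a Gaussian copula.
import numpy as np
from scipy import stats
from scipy.special import expit

rng = np.random.default_rng(11)
n = 1000
x_trt = rng.integers(0, 2, n)                      # treatment indicator

# Correlated uniforms from a Gaussian copula with latent correlation rho
rho = 0.5
z = rng.multivariate_normal(np.zeros(2), [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)

# Marginal GLMs: logistic model for the binary response, linear model for the
# continuous response; the copula induces their residual association
p_resp = expit(-0.5 + 0.8 * x_trt)
y_bin = (u[:, 0] < p_resp).astype(int)
y_cont = (1.0 + 0.6 * x_trt) + 2.0 * stats.norm.ppf(u[:, 1])

print("binary response rate by arm:",
      y_bin[x_trt == 0].mean(), y_bin[x_trt == 1].mean())
print("continuous mean by arm:",
      y_cont[x_trt == 0].mean().round(2), y_cont[x_trt == 1].mean().round(2))
print("residual correlation between outcomes:",
      np.corrcoef(y_bin, y_cont)[0, 1].round(2))
```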
Collapse
Affiliation(s)
- Ethan M Alt
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Matthew A Psioda
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Joseph G Ibrahim
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| |
Collapse
|
46
|
Banbeta A, Lesaffre E, Martina R, van Rosmalen J. Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder. Stat Biopharm Res 2023. [DOI: 10.1080/19466315.2023.2190933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
Affiliation(s)
- Akalu Banbeta
- I-Biostat, UHasselt, Hasselt, Belgium
- Department of Statistics, Jimma University, Jimma, Ethiopia
| | | | - Reynaldo Martina
- Faculty of Science, Technology, Engineering and Mathematics, Open University, Milton Keynes, UK
| | - Joost van Rosmalen
- Department of Biostatistics, Erasmus University Medical Center, Rotterdam, the Netherlands
- Department of Epidemiology, Erasmus University Medical Center, Rotterdam, the Netherlands
| |
Collapse
|
47
|
Zhao Y, Li D, Liu R, Yuan Y. Bayesian optimal phase II designs with dual-criterion decision making. Pharm Stat 2023. [PMID: 36871961 DOI: 10.1002/pst.2296] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Revised: 11/29/2022] [Accepted: 02/06/2023] [Indexed: 03/07/2023]
Abstract
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance itself alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
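A minimal sketch of a dual-criterion decision rule for a single-arm binary endpoint is shown below; the prior, probability thresholds, reference values, and data are illustrative placeholders, whereas the actual BOP2-DC design optimizes its cutoffs (software at www.trialdesign.org).

```python
# Hedged sketch of a go/consider/no-go rule in the spirit of BOP2-DC.
from scipy import stats

a0, b0 = 1.0, 1.0            # Beta prior
n, responders = 40, 15       # single-arm Phase II data (illustrative)
p_lrv, p_cmv = 0.20, 0.35    # lower reference value, clinically meaningful value

post = stats.beta(a0 + responders, b0 + n - responders)
pr_stat = post.sf(p_lrv)     # Pr(p > LRV | data): statistical significance criterion
pr_clin = post.sf(p_cmv)     # Pr(p > CMV | data): clinical relevance criterion

# Illustrative (non-optimized) cutoffs for the two criteria
if pr_stat >= 0.975 and pr_clin >= 0.50:
    decision = "go"
elif pr_stat >= 0.975:
    decision = "consider"
else:
    decision = "no-go"
print(f"Pr(p > LRV) = {pr_stat:.3f}, Pr(p > CMV) = {pr_clin:.3f} -> {decision}")
```

The point of the dual criterion is visible even in this toy version: a result can clear the statistical bar against the lower reference value yet remain uncertain against the clinically meaningful value, in which case the rule returns "consider" rather than a hard go/no-go.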
Collapse
Affiliation(s)
- Yujie Zhao
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Daniel Li
- Global Biometrics and Data Sciences, Bristol Myers Squibb, Berkeley Heights, New Jersey, USA
| | - Rong Liu
- Global Biometrics and Data Sciences, Bristol Myers Squibb, Berkeley Heights, New Jersey, USA
| | - Ying Yuan
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| |
Collapse
|
48
|
Kaplan D, Chen J, Yavuz S, Lyu W. Bayesian Dynamic Borrowing of Historical Information with Applications to the Analysis of Large-Scale Assessments. PSYCHOMETRIKA 2023; 88:1-30. [PMID: 35687222 PMCID: PMC9185721 DOI: 10.1007/s11336-022-09869-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 02/18/2022] [Indexed: 06/15/2023]
Abstract
The purpose of this paper is to demonstrate and evaluate the use of Bayesian dynamic borrowing (Viele et al, in Pharm Stat 13:41-54, 2014) as a means of systematically utilizing historical information with specific applications to large-scale educational assessments. Dynamic borrowing via Bayesian hierarchical models is a special case of a general framework of historical borrowing where the degree of borrowing depends on the heterogeneity among historical data and current data. A joint prior distribution over the historical and current data sets is specified with the degree of heterogeneity across the data sets controlled by the variance of the joint distribution. We apply Bayesian dynamic borrowing to both single-level and multilevel models and compare this approach to other historical borrowing methods such as complete pooling, Bayesian synthesis, and power priors. Two case studies using data from the Program for International Student Assessment reveal the utility of Bayesian dynamic borrowing in terms of predictive accuracy. This is followed by two simulation studies that reveal the utility of Bayesian dynamic borrowing over simple pooling and power priors in cases where the historical data is heterogeneous compared to the current data based on bias, mean squared error, and predictive accuracy. In cases of homogeneous historical data, Bayesian dynamic borrowing performs similarly to data pooling, Bayesian synthesis, and power priors. In contrast, for heterogeneous historical data, Bayesian dynamic borrowing performed at least as well, if not better, than other methods of borrowing with respect to mean squared error, percent bias, and leave-one-out cross-validation.
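To see how the variance of the joint distribution governs the degree of borrowing, the toy sketch below applies a normal-normal shrinkage update with a fixed between-study standard deviation; all numbers are illustrative, and the paper's analyses use full Bayesian hierarchical fits of the assessment data rather than this closed-form approximation.

```python
# Toy illustration of dynamic borrowing: smaller between-study SD => more pooling.
import numpy as np

# Study-level means and standard errors: three historical cycles plus the current study
means = np.array([500.0, 505.0, 498.0, 470.0])   # last entry = current study
ses = np.array([4.0, 4.0, 4.0, 6.0])

def borrowed_current_mean(tau):
    """Empirical-Bayes style update with a fixed between-study SD tau."""
    w = 1.0 / (ses**2 + tau**2)                  # precision weights
    mu_hat = np.sum(w * means) / np.sum(w)       # estimated common mean
    # Shrink the current study's mean toward mu_hat; small tau => heavy borrowing
    k = tau**2 / (tau**2 + ses[-1]**2)
    return k * means[-1] + (1 - k) * mu_hat

for tau in (1.0, 5.0, 20.0):
    print(f"tau = {tau:5.1f}: borrowed estimate = {borrowed_current_mean(tau):.1f}")
```

When the current study is heterogeneous relative to the historical cycles (as in the example, 470 vs. roughly 500), a data-driven tau keeps the estimate near the current data, whereas forced pooling would drag it toward the historical mean; that is the behaviour the simulation studies reward with lower bias and MSE.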
Collapse
Affiliation(s)
- David Kaplan
- University of Wisconsin - Madison, Madison, USA.
| | | | - Sinan Yavuz
- University of Wisconsin - Madison, Madison, USA
| | - Weicong Lyu
- University of Wisconsin - Madison, Madison, USA
| |
Collapse
|
49
|
Li H, Yue LQ. Propensity score-based methods for causal inference and external data leveraging in regulatory settings: From basic ideas to implementation. Pharm Stat 2023. [PMID: 36794571 DOI: 10.1002/pst.2294] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 01/31/2023] [Accepted: 02/02/2023] [Indexed: 02/17/2023]
Abstract
The breakthrough propensity score methodology was formulated by Rosenbaum and Rubin in the 1980s for the mitigation of confounding bias in non-randomized comparative studies to facilitate causal inference for treatment effects. The methodology had been used mainly in epidemiological and social science studies that may often be exploratory, until it was adopted by FDA/CDRH in 2002 and applied in the evaluation of medical device pre-market confirmatory studies, including those with a control group extracted from a well-designed and executed registry database or historical clinical studies. Around 2013, following the Rubin outcome-free study design principle, the two-stage propensity score design framework was developed for medical device studies to safeguard study integrity and objectivity, thereby strengthening the interpretability of study results. Since 2018, the scope of the propensity score methodology has been broadened so that it can be used for the purpose of leveraging external data to augment a single-arm or randomized traditional clinical study. All these statistical approaches, collectively referred to as propensity score-based methods in this article, have been considered in the design of medical device regulatory studies and stimulated related research, as evidenced by the latest trends in journal publications. We will provide a tutorial on the propensity score-based methods from the basic idea to their implementation in regulatory settings for causal inference and external data leveraging, along with step-by-step descriptions of the procedures of the two-stage outcome-free design through examples, which can be used as templates for real study proposals.
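As an illustration of the first (design) stage only, the sketch below estimates propensity scores for membership in the current study versus the external data source and checks covariate balance under stabilized inverse-probability weights, without touching outcome data; the data, covariates, and weighting choice are assumptions for illustration, not a regulatory template.

```python
# Hedged sketch of outcome-free propensity-score estimation and a balance check.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)
# In this simulation, external-control patients tend to be older and sicker
in_current_study = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (65 - age) - 0.5 * severity))))

X = np.column_stack([age, severity])
ps_model = LogisticRegression().fit(X, in_current_study)
ps = ps_model.predict_proba(X)[:, 1]              # propensity of being in the current study

# Stabilized inverse-probability weights
p_study = in_current_study.mean()
weights = np.where(in_current_study == 1, p_study / ps, (1 - p_study) / (1 - ps))

# Balance check on age before vs. after weighting (no outcomes used, in keeping
# with the outcome-free design principle)
for label, w in (("unweighted", np.ones(n)), ("weighted", weights)):
    m1 = np.average(age[in_current_study == 1], weights=w[in_current_study == 1])
    m0 = np.average(age[in_current_study == 0], weights=w[in_current_study == 0])
    print(f"{label}: mean age difference = {m1 - m0:.2f}")
```

The second stage of the two-stage framework, fixing the design and only then unblinding outcomes for the comparative analysis, is procedural rather than computational and is therefore not shown.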
Collapse
Affiliation(s)
- Heng Li
- Division of Biostatistics, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Lilly Q Yue
- Division of Biostatistics, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| |
Collapse
|
50
|
Schwarz P, Brockstedt E. How can we unleash the potential of external controls in clinical trials? Drug Discov Today 2023; 28:103492. [PMID: 36681236 DOI: 10.1016/j.drudis.2023.103492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 01/10/2023] [Accepted: 01/10/2023] [Indexed: 01/19/2023]
Affiliation(s)
- Philipp Schwarz
- BI X GmbH, Binger Str. 173, 55216 Ingelheim am Rhein, Germany
| | - Ekkehard Brockstedt
- Boehringer Ingelheim Pharma GmbH & Co. KG, Binger Str. 173, 55216 Ingelheim am Rhein, Germany.
| |
Collapse
|