1. Lee SY. Using Bayesian statistics in confirmatory clinical trials in the regulatory setting: a tutorial review. BMC Med Res Methodol 2024; 24:110. PMID: 38714936; PMCID: PMC11077897; DOI: 10.1186/s12874-024-02235-0.
Abstract
Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial with quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist trials are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of type I error rate, multiplicity adjustments, external data borrowing, etc., in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
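The frequentist operating characteristics the review emphasizes are usually obtained by simulation. A minimal sketch of that idea, assuming a hypothetical single-arm binary-endpoint trial with a Beta prior and a posterior-probability success criterion (all design values here are illustrative, not taken from the review):

```python
import numpy as np
from scipy import stats

def operating_characteristics(n, p_true, p0=0.30, a=1.0, b=1.0,
                              threshold=0.975, n_sim=100_000, seed=1):
    """Monte Carlo estimate of the probability that the Bayesian decision
    rule P(p > p0 | data) > threshold declares success, given a true rate."""
    rng = np.random.default_rng(seed)
    x = rng.binomial(n, p_true, size=n_sim)          # simulated trial outcomes
    post_prob = stats.beta.sf(p0, a + x, b + n - x)  # P(p > p0 | x) under the Beta posterior
    return float(np.mean(post_prob > threshold))

# Frequentist type I error at the null (p = p0) and power at one alternative:
alpha = operating_characteristics(n=60, p_true=0.30)
power = operating_characteristics(n=60, p_true=0.50)
```

The same simulation, run over a grid of sample sizes, is the basic engine behind Bayesian sample size determination: pick the smallest n whose simulated type I error and power meet the regulatory targets.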
Affiliation(s)
- Se Yoon Lee
- Department of Statistics, Texas A&M University, 3143 TAMU, College Station, TX, 77843, USA.
2. Warren JL, Wang Q, Ciarleglio MM. A scaled kernel density estimation prior for dynamic borrowing of historical information with application to clinical trial design. Stat Med 2024; 43:1615-1626. PMID: 38345148; PMCID: PMC11483151; DOI: 10.1002/sim.10032.
Abstract
Incorporating historical data into a current data analysis can improve estimation of parameters shared across both datasets and increase the power to detect associations of interest while reducing the time and cost of new data collection. Several methods for prior distribution elicitation have been introduced to allow for the data-driven borrowing of historical information within a Bayesian analysis of the current data. We propose scaled Gaussian kernel density estimation (SGKDE) prior distributions as potentially more flexible alternatives. SGKDE priors directly use posterior samples collected from a historical data analysis to approximate probability density functions, whose variances depend on the degree of similarity between the historical and current datasets, which are used as prior distributions in the current data analysis. We compare the performances of the SGKDE priors with some existing approaches using a simulation study. Data from a recently completed phase III clinical trial of a maternal vaccine for respiratory syncytial virus are used to further explore the properties of SGKDE priors when designing a new clinical trial while incorporating historical data. Overall, both studies suggest that the new approach results in improved parameter estimation and power in the current data analysis compared to the considered existing methods.
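A rough sketch of the KDE-prior idea in a hypothetical one-parameter setting: historical posterior draws are smoothed with `scipy.stats.gaussian_kde`, the bandwidth is inflated by a fixed illustrative factor standing in for the paper's similarity-driven variance scaling (which is not reproduced here), and the result serves as the prior in a grid-based current-data analysis:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
hist_draws = rng.normal(0.40, 0.05, size=5_000)  # stand-in historical posterior draws

def kde_prior(draws, scale=2.0):
    """KDE of historical posterior draws with an inflated bandwidth; the fixed
    `scale` is a placeholder for the paper's data-driven similarity scaling."""
    kde = gaussian_kde(draws)
    kde.set_bandwidth(kde.factor * np.sqrt(scale))  # widen to borrow less
    return kde

prior = kde_prior(hist_draws)
theta = np.linspace(0.01, 0.99, 99)              # grid for the response rate
x, n = 12, 40                                    # current-trial data (illustrative)
log_lik = x * np.log(theta) + (n - x) * np.log1p(-theta)
post = np.exp(log_lik - log_lik.max()) * prior(theta)
post /= post.sum() * (theta[1] - theta[0])       # normalize on the grid
```

Because the prior is built directly from posterior samples, the same recipe applies whatever model produced the historical draws; only the grid evaluation above assumes a scalar parameter.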
Affiliation(s)
- Joshua L Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Qi Wang
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Maria M Ciarleglio
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
3. Gamalo M, Kim Y, Zhang F, Lin J. Composite likelihoods with bounded weights in extrapolation of data. J Biopharm Stat 2023; 33:708-725. PMID: 36662162; DOI: 10.1080/10543406.2022.2152835.
Abstract
Among many efforts to facilitate timely access to safe and effective medicines for children, increased attention has been given to extrapolation. Loosely, it is the leveraging of conclusions or available data from adults or older age groups to draw conclusions for the target pediatric population when it can be assumed that the course of the disease and the expected response to a medicinal product are sufficiently similar in the pediatric and the reference population. Extrapolation can then be characterized as a statistical mapping of information from the reference (adults or older age groups) to the target pediatric population. This mapping can be carried out through a composite likelihood approach in which the likelihood of the reference population is weighted by exponentiation, with the exponent reflecting the value of the mapped information in the target population. The weight is bounded above and below, recognizing that similarity (of the disease and the expected response) remains valid despite variability of response between the cohorts. Maximum likelihood approaches are then used for estimation of parameters, and asymptotic theory is used to derive distributions of estimates for use in inference. Hence, the estimation of effects in the target population borrows information from the reference population. The manuscript also discusses how this method relates to the Bayesian statistical paradigm.
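A minimal sketch of the weighted-likelihood idea, assuming a hypothetical shared binomial response rate; the bounds, data, and weight below are illustrative and not the paper's:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def composite_mle(x_t, n_t, x_r, n_r, w, w_min=0.2, w_max=0.8):
    """MLE of a shared binomial rate under a composite log-likelihood in which
    the reference-population contribution is tempered by a bounded weight."""
    w = float(np.clip(w, w_min, w_max))  # weight bounded above and below
    def neg_loglik(p):
        ll_target = x_t * np.log(p) + (n_t - x_t) * np.log1p(-p)
        ll_ref = x_r * np.log(p) + (n_r - x_r) * np.log1p(-p)
        return -(ll_target + w * ll_ref)  # reference likelihood raised to power w
    res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

# Pediatric (target) estimate pulled toward the adult (reference) rate:
p_hat = composite_mle(x_t=8, n_t=30, x_r=150, n_r=300, w=0.5)
```

In this conjugate-free binomial case the optimum has the closed form (x_t + w·x_r) / (n_t + w·n_r), which makes the borrowing behavior easy to check: w = 0 recovers the target-only estimate and w = 1 the pooled estimate.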
Affiliation(s)
- Margaret Gamalo
- Global Biometrics & Data Management, Pfizer Inc, Collegeville, Pennsylvania, USA
- Yoonji Kim
- Department of Statistics, Ohio State University, Columbus, Ohio, USA
- Fan Zhang
- Global Biometrics & Data Management, Pfizer Inc, Groton, Connecticut, USA
- Junjing Lin
- Statistical and Quantitative Sciences, Takeda Pharmaceuticals, Cambridge, Massachusetts, USA
4. Han Z, Zhang Q, Wang M, Ye K, Chen MH. On efficient posterior inference in normalized power prior Bayesian analysis. Biom J 2023; 65:e2200194. PMID: 36960489; DOI: 10.1002/bimj.202200194.
Abstract
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter δ ∈ [0, 1], which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to δ such that the posterior of δ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different δ during posterior sampling. This makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned efforts by sampling from the power prior with δ = 0 and δ = 1 only. Such a posterior sampling procedure can facilitate the use of a random δ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
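In the conjugate beta-binomial case the normalizing factor is available in closed form, so the marginal posterior of δ can be evaluated directly on a grid without any of the paper's sampling machinery. A minimal sketch under that assumption (prior and data values are illustrative):

```python
import numpy as np
from scipy.special import betaln

a, b = 1.0, 1.0      # initial Beta prior on the response rate
x0, n0 = 35, 100     # historical data (illustrative)
x, n = 45, 100       # current data (illustrative)

delta = np.linspace(0.0, 1.0, 201)
# Given delta, the normalized power prior on theta is
# Beta(a + delta*x0, b + delta*(n0 - x0)); integrating the current binomial
# likelihood against it gives, up to a constant, the marginal posterior of
# delta under a uniform hyperprior.
log_marg = (betaln(a + delta * x0 + x, b + delta * (n0 - x0) + n - x)
            - betaln(a + delta * x0, b + delta * (n0 - x0)))
post_delta = np.exp(log_marg - log_marg.max())
post_delta /= post_delta.sum() * (delta[1] - delta[0])  # normalize on the grid
```

Outside conjugate families this integral has no closed form, which is exactly the computational bottleneck the paper's two-endpoint (δ = 0 and δ = 1) sampling scheme is designed to avoid.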
Affiliation(s)
- Zifei Han
- School of Statistics, University of International Business and Economics, Beijing, China
- Qiang Zhang
- School of Statistics, University of International Business and Economics, Beijing, China
- Min Wang
- Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, Texas, USA
- Keying Ye
- Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, Texas, USA
- Ming-Hui Chen
- Department of Statistics, University of Connecticut, Storrs, Connecticut, USA
5. Bon JJ, Bretherton A, Buchhorn K, Cramb S, Drovandi C, Hassan C, Jenner AL, Mayfield HJ, McGree JM, Mengersen K, Price A, Salomone R, Santos-Fernandez E, Vercelloni J, Wang X. Being Bayesian in the 2020s: opportunities and challenges in the practice of modern applied Bayesian statistics. Philos Trans A Math Phys Eng Sci 2023; 381:20220156. PMID: 36970822; PMCID: PMC10041356; DOI: 10.1098/rsta.2022.0156.
Abstract
Building on a strong foundation of philosophy, theory, methods and computation over the past three decades, Bayesian approaches are now an integral part of the toolkit for most statisticians and data scientists. Whether they are dedicated Bayesians or opportunistic users, applied professionals can now reap many of the benefits afforded by the Bayesian paradigm. In this paper, we touch on six modern opportunities and challenges in applied Bayesian statistics: intelligent data collection, new data sources, federated analysis, inference for implicit models, model transfer and purposeful software products. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
Affiliation(s)
- Joshua J. Bon
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Adam Bretherton
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Katie Buchhorn
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Susanna Cramb
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia
- Christopher Drovandi
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Conor Hassan
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Adrianne L. Jenner
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Helen J. Mayfield
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Public Health, The University of Queensland, Saint Lucia, Queensland, Australia
- James M. McGree
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Kerrie Mengersen
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Aiden Price
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Robert Salomone
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Computer Science, Queensland University of Technology, Brisbane, Queensland, Australia
- Edgar Santos-Fernandez
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Julie Vercelloni
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Xiaoyu Wang
- Centre for Data Science, Queensland University of Technology, Brisbane, Queensland, Australia
- School of Mathematical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
6. Bayesian Statistics for Medical Devices: Progress Since 2010. Ther Innov Regul Sci 2023; 57:453-463. PMID: 36869194; PMCID: PMC9984131; DOI: 10.1007/s43441-022-00495-w.
Abstract
The use of Bayesian statistics to support regulatory evaluation of medical devices began in the late 1990s. We review the literature, focusing on recent developments of Bayesian methods, including hierarchical modeling of studies and subgroups, borrowing strength from prior data, effective sample size, Bayesian adaptive designs, pediatric extrapolation, benefit-risk decision analysis, use of real-world evidence, and diagnostic device evaluation. We illustrate how these developments were utilized in recent medical device evaluations. In Supplementary Material, we provide a list of medical devices for which Bayesian statistics were used to support approval by the US Food and Drug Administration (FDA), including those since 2010, the year the FDA published their guidance on Bayesian statistics for medical devices. We conclude with a discussion of current and future challenges and opportunities for Bayesian statistics, including artificial intelligence/machine learning (AI/ML) Bayesian modeling, uncertainty quantification, Bayesian approaches using propensity scores, and computational challenges for high dimensional data and models.