1
Wu C, Hao J, Xin Y, Song R, Li W, Zuo L, Zhang X, Cai Y, Wu H, Hui W. Poor sample size reporting quality and insufficient sample size in economic evaluations conducted alongside pragmatic trials: a cross-sectional survey. J Clin Epidemiol 2024;176:111535. PMID: 39307404. DOI: 10.1016/j.jclinepi.2024.111535.
Abstract
OBJECTIVES Economic evaluations based on well-designed and -conducted pragmatic randomized controlled trials (pRCTs) can provide valuable evidence on the cost-effectiveness of interventions, enhancing the relevance and applicability of findings to healthcare decision-making. However, economic evaluation outcomes are seldom taken into account when sample sizes are calculated for pragmatic trials. The quality of reporting of sample sizes, and of the information underlying their calculation, in economic evaluations conducted alongside pRCTs remains unknown. This study aims to assess the reporting quality of sample size and to estimate the power of economic evaluations in pRCTs. STUDY DESIGN AND SETTING We conducted a cross-sectional survey using data from pRCTs available in PubMed and OVID from 1 January 2010 to 24 April 2022. Two groups of independent reviewers identified articles; three groups of reviewers each extracted the data. Descriptive statistics were used to present the general characteristics of included studies. Statistical power analyses were performed on clinical and economic outcomes with sufficient data. RESULTS The electronic search identified 715 studies and 152 met the inclusion criteria. Of these, 26 were available for power analysis. Only 9 out of 152 trials (5.9%) considered economic outcomes when estimating sample size, and only one adjusted the sample size accordingly. Power values for trial-based economic evaluations and clinical trials ranged from 2.56% to 100% and from 3.21% to 100%, respectively. Regardless of the perspective, in 14 out of the 26 studies (53.8%) the power values of economic evaluations for quality-adjusted life years (QALYs) were lower than those of clinical trials for primary endpoints (PEs). In 11 out of 24 (45.8%) and in 8 out of 13 (61.5%) studies, power values of economic evaluations for QALYs were lower than those of clinical trials for PEs from the healthcare and societal perspectives, respectively. Power values of economic evaluations for non-QALY outcomes from the healthcare and societal perspectives were potentially higher than those of clinical trials in 3 out of the 4 studies (75%). The power values for economic outcomes in Q1 journals were not higher than those in other journal impact factor quartile categories. CONCLUSION Theoretically, pragmatic trials with concurrent economic evaluations can provide real-world evidence for healthcare decision makers. However, in pRCT-based economic evaluations, limited consideration and inadequate reporting of sample size calculations for economic outcomes could negatively affect the reliability and generalisability of the results. We thus recommend that future pragmatic trials with economic evaluations report in their protocols how sample sizes are determined or adjusted based on economic outcomes, to enhance transparency and the quality of the evidence.
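The power values quoted above are, in general, obtained from standard formulas for comparing mean outcomes (such as incremental QALYs) between two arms. The sketch below is a hedged illustration of that kind of calculation using a normal approximation and entirely hypothetical inputs; it is not the authors' code and does not use data from the study.

```python
from scipy.stats import norm

def two_sample_power(delta, sd, n_per_arm, alpha=0.05):
    """Approximate power of a two-arm comparison of means (e.g., mean QALYs),
    assuming equal arm sizes and a common standard deviation."""
    se = sd * (2.0 / n_per_arm) ** 0.5   # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    return norm.cdf(abs(delta) / se - z_crit)

# Hypothetical inputs: 0.05 QALY difference, SD 0.25, 150 participants per arm
print(round(two_sample_power(delta=0.05, sd=0.25, n_per_arm=150), 3))  # ~0.41
```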
Affiliation(s)
- Changjin Wu
- School of Public Health, Chongqing Medical University, Chongqing, China
- Jun Hao
- Medical Research and Biometrics Centre, National Clinical Research Centre for Cardiovascular Diseases, Fuwai Hospital, National Centre for Cardiovascular Diseases, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China; Department of Clinical Sciences, Liverpool School of Tropical Medicine, Liverpool, UK; Institute for Global Health, University College London, London, UK
- Yu Xin
- Department of Science and Technology, West China Hospital, Sichuan University, Chengdu, China
- Ruomeng Song
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Wentan Li
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Ling Zuo
- Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University /West China School of Nursing, Sichuan University, Chengdu, China; Integrated Care Management Centre, Outpatient Department, West China Hospital, Sichuan University, Chengdu, China
- Xiyan Zhang
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Yuanyi Cai
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Huazhang Wu
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Wen Hui
- Department of Science and Technology, West China Hospital, Sichuan University, Chengdu, China.
2
Mheissen S, Khan H, Aldandan M, Koletsi D. Unaccounted clustering assumptions still compromise inferences in cluster randomized trials in orthodontic research. Korean J Orthod 2024;54:374-391. PMID: 39582333. PMCID: PMC11602250. DOI: 10.4041/kjod24.051.
Abstract
Objective This meta-epidemiological study aimed to determine whether optimal sample size calculation was applied in orthodontic cluster randomized trials (CRTs). Methods Orthodontic randomized clinical trials with a cluster design, published between January 1, 2017 and December 31, 2023 in leading orthodontic journals, were sourced. Study selection was undertaken by two independent authors. The study characteristics and the variables required for sample size calculation were also extracted by the authors. The design effect for each trial was calculated using an intra-cluster correlation coefficient of 0.1 and the number of teeth in each cluster, and the sample size was recalculated accordingly. Descriptive statistics for the study characteristics, and summary values for the design effects and sample sizes, were provided. Results One hundred and five CRTs were deemed eligible for inclusion. Of these, 100 reported a sample size calculation. Nine CRTs (9.0%) did not report any effect measures for the sample size calculation, and a few did not report any power assumptions or significance levels or thresholds. Regarding the variables specific to the cluster design, only one CRT reported a design effect and adjusted the sample size accordingly. Recalculations indicated that the sample sizes of orthodontic CRTs should be increased by a median of 50% to maintain the same statistical power and significance level. Conclusions Sample size calculations in orthodontic cluster trials were suboptimal. Greater awareness of the cluster design and its variables is required to calculate sample sizes adequately and to reduce the conduct of underpowered studies.
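As described above, the recalculation rests on the standard design effect, 1 + (m − 1) × ICC, where m is the cluster size (here, the number of teeth per patient). The sketch below illustrates that inflation with hypothetical numbers; the values are not taken from any trial in the review.

```python
def design_effect(cluster_size, icc):
    """Standard design effect for cluster randomisation: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical values: 60 patients from an unadjusted calculation,
# 4 teeth per patient, ICC = 0.1
deff = design_effect(cluster_size=4, icc=0.1)
n_unadjusted = 60
n_adjusted = round(n_unadjusted * deff)   # round up in practice; exact here

print(round(deff, 2))  # 1.3
print(n_adjusted)      # 78, i.e. a 30% increase over the unadjusted estimate
```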
Affiliation(s)
- Haris Khan
- CMH Institute of Dentistry Lahore, National University of Medical Sciences, Lahore, Pakistan
- Despina Koletsi
- Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
3
Biggs J, Challenger JD, Hellewell J, Churcher TS, Cook J. A systematic review of sample size estimation accuracy on power in malaria cluster randomised trials measuring epidemiological outcomes. BMC Med Res Methodol 2024;24:238. PMID: 39407101. PMCID: PMC11476958. DOI: 10.1186/s12874-024-02361-9.
Abstract
INTRODUCTION Cluster randomised trials (CRTs) are the gold standard for measuring the community-wide impacts of malaria control tools. CRTs rely on well-defined sample size estimations to detect statistically significant effects of trialled interventions, however these are often predicted poorly by triallists. Here, we review the accuracy of predicted parameters used in sample size calculations for malaria CRTs with epidemiological outcomes. METHODS We searched for published malaria CRTs using four online databases in March 2022. Eligible trials included those with malaria-specific epidemiological outcomes which randomised at least six geographical clusters to study arms. Predicted and observed sample size parameters were extracted by reviewers for each trial. Pair-wise Spearman's correlation coefficients (rs) were calculated to assess the correlation between predicted and observed control-arm outcome measures and effect sizes (relative percentage reductions) between arms. Among trials which retrospectively calculated an estimate of heterogeneity in cluster outcomes, we recalculated study power according to observed trial estimates. RESULTS Of the 1889 records identified and screened, 108 articles were eligible and comprised of 71 malaria CRTs. Among 91.5% (65/71) of trials that included sample size calculations, most estimated cluster heterogeneity using the coefficient of variation (k) (80%, 52/65) which were often predicted without using prior data (67.7%, 44/65). Predicted control-arm prevalence moderately correlated with observed control-arm prevalence (rs: 0.44, [95%CI: 0.12,0.68], p-value < 0.05], with 61.2% (19/31) of prevalence estimates overestimated. Among the minority of trials that retrospectively calculated cluster heterogeneity (20%, 13/65), empirical values contrasted with those used in sample size estimations and often compromised study power. Observed effect sizes were often smaller than had been predicted at the sample size stage (72.9%, 51/70) and were typically higher in the first, compared to the second, year of trials. Overall, effect sizes achieved by malaria interventions tested in trials decreased between 1995 and 2021. CONCLUSIONS Study findings reveal sample size parameters in malaria CRTs were often inaccurate and resulted in underpowered studies. Future trials must strive to obtain more representative epidemiological sample size inputs to ensure interventions against malaria are adequately evaluated. REGISTRATION This review is registered with PROSPERO (CRD42022315741).
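For context, the between-cluster coefficient of variation k mentioned above typically enters sample size calculations through a Hayes and Bennett style formula for the number of clusters per arm with a binary outcome. The sketch below is an illustrative implementation with hypothetical inputs; it is not the review's own code, and the exact formula used by individual trials may differ.

```python
from scipy.stats import norm

def clusters_per_arm(p0, p1, m, k, alpha=0.05, power=0.8):
    """Hayes & Bennett-style clusters per arm for an unmatched CRT with a binary
    outcome, using the between-cluster coefficient of variation k."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    between_and_within = p0 * (1 - p0) / m + p1 * (1 - p1) / m + k**2 * (p0**2 + p1**2)
    return 1 + z**2 * between_and_within / (p0 - p1) ** 2

# Hypothetical inputs: control prevalence 30%, expected 40% relative reduction,
# 200 people sampled per cluster, k = 0.3
print(round(clusters_per_arm(p0=0.30, p1=0.18, m=200, k=0.3), 1))  # about 8 clusters per arm
```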
Affiliation(s)
- Joseph Biggs
- Medical Research Council (MRC) International Statistics and Epidemiology Group, Department of Infectious Disease Epidemiology and International Health, London School of Hygiene and Tropical Medicine, London, UK.
- Joseph D Challenger
- Medical Research Council (MRC) Centre for Global Infectious Disease Analysis, Department of Infectious Disease Epidemiology, Faculty of Medicine, Imperial College London, London, UK
- Joel Hellewell
- Medical Research Council (MRC) Centre for Global Infectious Disease Analysis, Department of Infectious Disease Epidemiology, Faculty of Medicine, Imperial College London, London, UK
- Thomas S Churcher
- Medical Research Council (MRC) Centre for Global Infectious Disease Analysis, Department of Infectious Disease Epidemiology, Faculty of Medicine, Imperial College London, London, UK
- Jackie Cook
- Medical Research Council (MRC) International Statistics and Epidemiology Group, Department of Infectious Disease Epidemiology and International Health, London School of Hygiene and Tropical Medicine, London, UK
4
Tong G, Tong J, Jiang Y, Esserman D, Harhay MO, Warren JL. Hierarchical Bayesian modeling of heterogeneous outcome variance in cluster randomized trials. Clin Trials 2024;21:451-460. PMID: 38197388. PMCID: PMC11233424. DOI: 10.1177/17407745231222018.
Abstract
BACKGROUND Heterogeneous outcome correlations across treatment arms and clusters have been increasingly acknowledged in cluster randomized trials with binary endpoints, where analytical methods have been developed to study such heterogeneity. However, cluster-specific outcome variances and correlations have yet to be studied for cluster randomized trials with continuous outcomes. METHODS This article proposes models, fitted in the Bayesian setting with a hierarchical variance structure, to quantify heterogeneous variances across clusters and explain them with cluster-level covariates when the outcome is continuous. The models can also be extended to analyzing heterogeneous variances in individually randomized group treatment trials, with arm-specific cluster-level covariates, or in partially nested designs. Simulation studies were carried out to validate the performance of the newly introduced models across different settings. RESULTS Simulations showed that, overall, the newly introduced models perform well, with low bias and approximately 95% coverage for the intraclass correlation coefficients and the regression parameters in the variance model. When variances were heterogeneous, our proposed models showed improved model fit over models with homogeneous variances. When used to analyze data from the Kerala Diabetes Prevention Program study, our models identified heterogeneous variances and intraclass correlation coefficients across clusters and examined cluster-level characteristics associated with such heterogeneity. CONCLUSION We proposed new hierarchical Bayesian variance models to accommodate cluster-specific variances in cluster randomized trials. The newly developed methods inform the understanding of how an intervention strategy is implemented and disseminated differently across clusters and can help improve future trial design.
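A minimal sketch of the kind of model described here, in my own notation and under simplifying assumptions (a continuous outcome, a random cluster intercept, one cluster-level covariate in the variance model); the authors' actual parameterisation and priors may differ:

$$
y_{ij} = \beta_0 + \beta_1 x_j + b_j + \epsilon_{ij}, \qquad
b_j \sim N(0, \tau^2), \qquad
\epsilon_{ij} \sim N(0, \sigma_j^2), \qquad
\log \sigma_j^2 = \gamma_0 + \gamma_1 z_j,
$$

where $y_{ij}$ is the outcome for individual $i$ in cluster $j$, $x_j$ is the cluster-level treatment indicator, $z_j$ is a cluster-level covariate, and the cluster-specific intraclass correlation coefficient is $\rho_j = \tau^2 / (\tau^2 + \sigma_j^2)$.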
Affiliation(s)
- Guangyu Tong
- Department of Internal Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, Connecticut, USA
- Jiaqi Tong
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, Connecticut, USA
- Yi Jiang
- Department of Biostatistics, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Denise Esserman
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Yale Center for Analytical Science, Yale School of Public Health, New Haven, Connecticut, USA
- Michael O Harhay
- Clinical Trials Methods and Outcomes Lab, Palliative and Advanced Illness Research (PAIR) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joshua L Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
5
Ren Y, Jia Y, Yang M, Yao M, Wang Y, Mei F, Li Q, Li L, Li G, Huang Y, Zhang Y, Xu J, Zou K, Tan J, Sun X. Sample size calculations for randomized controlled trials with repeatedly measured continuous variables as primary outcomes need improvements: a cross-sectional study. J Clin Epidemiol 2024;166:111235. PMID: 38072178. DOI: 10.1016/j.jclinepi.2023.111235.
Abstract
OBJECTIVES Randomized controlled trials (RCTs) with repeatedly measured continuous variables as primary outcomes are common. Although statistical methodologies for calculating sample sizes in such trials have been extensively investigated, their practical application remains unclear. This study aims to provide an overview of sample size calculation methods for different research questions (e.g., treatment effect at a key time point, change in treatment effect over time) and to evaluate the adequacy of current practices in trial design. STUDY DESIGN AND SETTING We conducted a comprehensive search of PubMed to identify RCTs published in core journals in 2019 that used repeatedly measured continuous variables as their primary outcomes. Data were extracted using a predefined questionnaire covering general study characteristics, primary outcomes, detailed sample size calculation methods, and methods for analyzing the primary outcome. We re-estimated the sample size for trials that provided all relevant parameters. RESULTS A total of 168 RCTs were included, with a median of four repeated measurements (interquartile range 3-6) per outcome. In 48 (28.6%) trials, the outcome used for sample size calculation differed from the designated primary outcome. Ninety (53.6%) trials exhibited inconsistencies between the hypotheses specified for sample size calculation and those specified for the primary analysis. The statistical methods used for sample size calculation in 158 (94.0%) trials did not align with those used for the primary analysis. Additionally, only 6 (3.6%) trials accounted for the number of repeated measurements, and 7 (4.2%) trials considered the correlation among these measurements when calculating the sample size. Furthermore, of the 128 (76.2%) trials that considered loss to follow-up, 33 (25.8%) used an incorrect formula (i.e., N × (1 + loss rate)) for sample size adjustment. In 53 (49.5%) of 107 trials, the re-estimated sample size was larger than the reported sample size. CONCLUSION The practice of sample size calculation for RCTs with repeatedly measured continuous variables as primary outcomes displayed significant deficiencies, with a notable proportion of trials failing to report the essential repeated-measurement parameters required for sample size calculation. Our findings highlight the urgent need to use optimal sample size methods that align with the research hypothesis, the primary analysis method, and the form of the primary outcome.
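The abstract flags N × (1 + loss rate) as an incorrect adjustment; dividing the target by the retention proportion is the adjustment that actually preserves the intended analysable sample. A small worked illustration with hypothetical numbers (not from any trial in the study):

```python
n_target, loss = 200, 0.20   # hypothetical target analysable sample and anticipated attrition

n_correct = round(n_target / (1 - loss))     # divide by retention: 250
n_incorrect = round(n_target * (1 + loss))   # the criticised inflation: 240

print(n_correct, round(n_correct * (1 - loss)))      # 250 recruited -> 200 retained
print(n_incorrect, round(n_incorrect * (1 - loss)))  # 240 recruited -> 192 retained (short of target)
```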
Affiliation(s)
- Yan Ren
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Yulong Jia
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Min Yang
- Department of Epidemiology and Biostatistics, West China School of Public Health, Sichuan University, Chengdu, China; Faculty of Health, Design and Art, Swinburne Technology University, Victoria, Australia
- Minghong Yao
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Yuning Wang
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Fan Mei
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Qianrui Li
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; Department of Nuclear Medicine, West China Hospital of Sichuan University, Chengdu, China
- Ling Li
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Guowei Li
- Center for Clinical Epidemiology and Methodology (CCEM), Guangdong Second Provincial General Hospital, Guangzhou, China
- Yunxiang Huang
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Yuanjin Zhang
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Jiayue Xu
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Kang Zou
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China
- Jing Tan
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China.
- Xin Sun
- Institute of Integrated Traditional Chinese and Western Medicine, Chinese Evidence-based Medicine Center, West China Hospital, Sichuan University, Chengdu, China; NMPA Key Laboratory for Real World Data Research and Evaluation in Hainan, Chengdu, China; Sichuan Center of Technology Innovation for Real World Data, Chengdu, China; Department of Epidemiology and Biostatistics, West China School of Public Health, Sichuan University, Chengdu, China.
6
Hemming K, Taljaard M. Key considerations for designing, conducting and analysing a cluster randomized trial. Int J Epidemiol 2023;52:1648-1658. PMID: 37203433. PMCID: PMC10555937. DOI: 10.1093/ije/dyad064.
Abstract
Not only do cluster randomized trials require a larger sample size than individually randomized trials, they also face many additional complexities. The potential for contamination is the most commonly used justification for cluster randomization, but the risk of contamination should be carefully weighed against the more serious problem of questionable scientific validity in settings with post-randomization identification or recruitment of participants unblinded to the treatment allocation. In this paper we provide some simple guidelines to help researchers conduct cluster trials in a way that minimizes potential biases and maximizes statistical efficiency. The overarching theme of this guidance is that methods that apply to individually randomized trials rarely apply to cluster randomized trials. We recommend that cluster randomization be used only when necessary, balancing its benefits against its increased risks of bias and increased sample size. Researchers should also randomize at the lowest possible level, balancing the risks of contamination against the need for an adequate number of randomization units, and should explore other options for statistically efficient designs. Clustering should always be allowed for in the sample size calculation, and the use of restricted randomization (with adjustment in the analysis for the covariates used in the randomization) should be considered. Where possible, participants should be recruited before clusters are randomized and, when recruiting (or identifying) participants post-randomization, recruiters should be masked to the allocation. In the analysis, the target of inference should align with the research question, and adjustment for clustering and small-sample corrections should be used when the trial includes fewer than about 40 clusters.
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, ON, Canada
7
Martin J, Middleton L, Hemming K. Minimisation for the design of parallel cluster-randomised trials: An evaluation of balance in cluster-level covariates and numbers of clusters allocated to each arm. Clin Trials 2023;20:111-120. PMID: 36661245. DOI: 10.1177/17407745221149104.
Abstract
BACKGROUND Cluster-randomised trials often use some form of restricted randomisation, such as stratified or covariate-constrained randomisation. Minimisation has the potential to balance on more covariates than blocked stratification and, unlike covariate-constrained randomisation, can be implemented sequentially. Yet, unlike stratification, minimisation has no inbuilt guard to keep the allocation close to 1:1. A departure from a 1:1 allocation can be unappealing in settings with a small number of allocation units, such as cluster randomisation, which typically involves about 30 clusters. METHODS Using simulation (10,000 replications per scenario), we evaluate the performance of a range of minimisation procedures on the likelihood of a 1:1 allocation of clusters (10-80 clusters) to treatment arms, along with their performance on covariate imbalance. The minimisation procedures vary: the proportion of clusters allocated to the least imbalanced arm (known as the stochastic element), between 0.7 and 1; the percentage of initial clusters allocated completely at random (known as the bed-in period), between 0% and 20%; and whether 'number of clusters allocated to each arm' is added as a covariate in the minimisation algorithm. We additionally include a comparison of stratifying and then minimising within key strata (such as country within a multi-country cluster trial) as a potential aid to increasing balance. RESULTS Minimisation is unlikely to result in an exact 1:1 allocation unless the stochastic element is set higher than 0.9. For example, with 20 clusters, 2 binary covariates and the stochastic element set to 0.7, only 41% of the randomisations over the 10,000 simulations achieved a 1:1 allocation. While typical sizes of imbalance were small (a difference of two clusters per arm), allocations as extreme as 10:10 were observed. Adding the 'number of clusters' into the minimisation algorithm reduces this risk slightly, but covariate imbalance increases slightly. Stratifying and then minimising within key strata improves balance within strata but increases imbalance across all clusters, in terms of both the number of clusters per arm and covariate balance. CONCLUSION In cluster trials, where there are typically about 30 allocation units, when using minimisation, unless the stochastic element is set very high there is a high risk of not achieving a 1:1 allocation, and a small but nonetheless real risk of an extreme departure from a 1:1 allocation. Stratification with minimisation within key strata (such as country) improves balance within strata, although it compromises overall balance.
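For readers unfamiliar with the procedure being simulated, the sketch below outlines marginal (Pocock-Simon style) minimisation for two arms with a stochastic element and an optional 'number of clusters per arm' term. It is an illustrative re-implementation under my own simplifying assumptions, not the simulation code used in the paper.

```python
import random

def minimisation_arm(new_covs, allocated, stochastic=0.8, include_count=False):
    """Assign the next cluster to arm 0 or 1 by marginal minimisation over
    binary cluster-level covariates.

    allocated    : list of (arm, covs) tuples for clusters already randomised
    stochastic   : probability of choosing the least imbalanced arm
    include_count: also balance the number of clusters per arm
    """
    def imbalance_if(arm):
        total = 0
        for k, level in enumerate(new_covs):
            counts = {0: 0, 1: 0}
            for prev_arm, prev_covs in allocated:
                if prev_covs[k] == level:
                    counts[prev_arm] += 1
            counts[arm] += 1                      # provisionally add the new cluster
            total += abs(counts[0] - counts[1])
        if include_count:
            n_arm = {0: 0, 1: 0}
            for prev_arm, _ in allocated:
                n_arm[prev_arm] += 1
            n_arm[arm] += 1
            total += abs(n_arm[0] - n_arm[1])
        return total

    scores = {0: imbalance_if(0), 1: imbalance_if(1)}
    if scores[0] == scores[1]:
        return random.randint(0, 1)               # ties broken at random
    best = min(scores, key=scores.get)
    return best if random.random() < stochastic else 1 - best

# Example: allocate a 21st cluster with covariate profile (1, 0)
history = [(random.randint(0, 1), (random.randint(0, 1), random.randint(0, 1)))
           for _ in range(20)]
print(minimisation_arm((1, 0), history, stochastic=0.7, include_count=True))
```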
Affiliation(s)
- James Martin
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Lee Middleton
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
8
Nevins P, Nicholls SG, Ouyang Y, Carroll K, Hemming K, Weijer C, Taljaard M. Reporting of and explanations for under-recruitment and over-recruitment in pragmatic trials: a secondary analysis of a database of primary trial reports published from 2014 to 2019. BMJ Open 2022;12:e067656. PMID: 36600344. PMCID: PMC9743401. DOI: 10.1136/bmjopen-2022-067656.
Abstract
OBJECTIVES To describe the extent to which pragmatic trials underachieved or overachieved their target sample sizes, examine explanations and identify characteristics associated with under-recruitment and over-recruitment. STUDY DESIGN AND SETTING Secondary analysis of an existing database of primary trial reports published during 2014-2019, registered in ClinicalTrials.gov, self-labelled as pragmatic and with target and achieved sample sizes available. RESULTS Of 372 eligible trials, the prevalence of under-recruitment (achieving <90% of target sample size) was 71 (19.1%) and of over-recruitment (>110% of target) was 87 (23.4%). Under-recruiting trials commonly acknowledged that they did not achieve their targets (51, 71.8%), with the majority providing an explanation, but only 11 (12.6%) over-recruiting trials acknowledged recruitment excess. The prevalence of under-recruitment in individually randomised versus cluster randomised trials was 41 (17.0%) and 30 (22.9%), respectively; prevalence of over-recruitment was 39 (16.2%) vs 48 (36.7%), respectively. Overall, 101 025 participants were recruited to trials that did not achieve at least 90% of their target sample size. When considering trials with over-recruitment, the total number of participants recruited in excess of the target was a median (Q1-Q3) 319 (75-1478) per trial for an overall total of 555 309 more participants than targeted. In multinomial logistic regression, cluster randomisation and lower journal impact factor were significantly associated with both under-recruitment and over-recruitment, while using exclusively routinely collected data and educational/behavioural interventions were significantly associated with over-recruitment; we were unable to detect significant associations with obtaining consent, publication year, country of recruitment or public engagement. CONCLUSIONS A clear explanation for under-recruitment or over-recruitment in pragmatic trials should be provided to encourage transparency in research, and to inform recruitment to future trials with comparable designs. The issues and ethical implications of over-recruitment should be more widely recognised by trialists, particularly when designing cluster randomised trials.
Affiliation(s)
- Pascale Nevins
- Department of Chemistry and Biomolecular Sciences, University of Ottawa Faculty of Science, Ottawa, Ontario, Canada
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Stuart G Nicholls
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Yongdong Ouyang
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Kelly Carroll
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Charles Weijer
- Departments of Medicine, Epidemiology & Biostatistics, and Philosophy, Western University, London, Ontario, Canada
- Monica Taljaard
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
9
Abstract
BACKGROUND This article identifies the most influential methods reports for group-randomized trials and related designs published through 2020. Many interventions are delivered to participants in real or virtual groups or in groups defined by a shared interventionist so that there is an expectation for positive correlation among observations taken on participants in the same group. These interventions are typically evaluated using a group- or cluster-randomized trial, an individually randomized group treatment trial, or a stepped wedge group- or cluster-randomized trial. These trials face methodological issues beyond those encountered in the more familiar individually randomized controlled trial. METHODS PubMed was searched to identify candidate methods reports; that search was supplemented by reports known to the author. Candidate reports were reviewed by the author to include only those focused on the designs of interest. Citation counts and the relative citation ratio, a new bibliometric tool developed at the National Institutes of Health, were used to identify influential reports. The relative citation ratio measures influence at the article level by comparing the citation rate of the reference article to the citation rates of the articles cited by other articles that also cite the reference article. RESULTS In total, 1043 reports were identified that were published through 2020. However, 55 were deemed to be the most influential based on their relative citation ratio or their citation count using criteria specific to each of the three designs, with 32 group-randomized trial reports, 7 individually randomized group treatment trial reports, and 16 stepped wedge group-randomized trial reports. Many of the influential reports were early publications that drew attention to the issues that distinguish these designs from the more familiar individually randomized controlled trial. Others were textbooks that covered a wide range of issues for these designs. Others were "first reports" on analytic methods appropriate for a specific type of data (e.g. binary data, ordinal data), for features commonly encountered in these studies (e.g. unequal cluster size, attrition), or for important variations in study design (e.g. repeated measures, cohort versus cross-section). Many presented methods for sample size calculations. Others described how these designs could be applied to a new area (e.g. dissemination and implementation research). Among the reports with the highest relative citation ratios were the CONSORT statements for each design. CONCLUSIONS Collectively, the influential reports address topics of great interest to investigators who might consider using one of these designs and need guidance on selecting the most appropriate design for their research question and on the best methods for design, analysis, and sample size.
Affiliation(s)
- David M Murray
- Office of Disease Prevention, National Institutes of Health, North Bethesda, MD, USA
10
Kristunas C, Grayling M, Gray LJ, Hemming K. Mind the gap: covariate constrained randomisation can protect against substantial power loss in parallel cluster randomised trials. BMC Med Res Methodol 2022;22:111. PMID: 35413793. PMCID: PMC9006416. DOI: 10.1186/s12874-022-01588-8.
Abstract
Background Cluster randomised trials often randomise a small number of units, putting them at risk of poor balance of covariates across treatment arms. Covariate constrained randomisation aims to reduce this risk by removing the worst balanced allocations from consideration. This is known to provide only a small gain in power over that averaged under simple randomisation, and the gain is likely influenced by the number and prognostic effect of the covariates. We investigated the performance of covariate constrained randomisation in comparison to the worst balanced allocations, and considered how the prognostic effect and number of covariates adjusted for in the analysis affect power. Methods Using simulation, we examined the Monte Carlo type I error rate and power of cross-sectional, two-arm parallel cluster-randomised trials with a continuous outcome and four binary cluster-level covariates, using either simple or covariate constrained randomisation. Data were analysed using a small-sample-corrected linear mixed-effects model, adjusted for some or all of the binary covariates. We varied the number of clusters, the intra-cluster correlation, the number and prognostic effect of covariates balanced in the randomisation and adjusted for in the analysis, and the size of the candidate set from which the allocation was selected. For each scenario, 20,000 simulations were conducted. Results When compared to the worst balanced allocations, covariate constrained randomisation with an adjusted analysis provided gains in power of up to 20 percentage points. Even with analysis-based adjustment for those covariates balanced in the randomisation, the type I error rate was not maintained when the intracluster correlation was very small (0.001). Generally, greater power was achieved when more prognostic covariates were restricted in the randomisation and as the size of the candidate set decreased. However, adjustment for weakly prognostic covariates led to a loss in power of up to 20 percentage points. Conclusions When compared to the worst balanced allocations, covariate constrained randomisation provides moderate to substantial improvements in power. However, the prognostic effect of the covariates should be carefully considered when selecting them for inclusion in the randomisation. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-022-01588-8.
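To make the design concrete: covariate constrained randomisation enumerates (or samples) candidate allocations, scores each for cluster-level covariate balance, restricts attention to a best-balanced candidate set, and selects one allocation from it at random. The sketch below is an illustrative toy implementation under my own assumptions (equal arms, a simple absolute-difference balance score); it is not the authors' simulation code.

```python
import itertools
import random

def constrained_allocation(cluster_covs, keep_fraction=0.1, sample_size=None, seed=1):
    """Covariate-constrained randomisation for a two-arm parallel CRT:
    enumerate (or sample) equal-split allocations, score covariate imbalance,
    keep the best-balanced fraction (the candidate set) and pick one at random."""
    rng = random.Random(seed)
    n = len(cluster_covs)
    allocations = [set(c) for c in itertools.combinations(range(n), n // 2)]
    if sample_size is not None and sample_size < len(allocations):
        allocations = rng.sample(allocations, sample_size)

    def imbalance(arm1):
        # Sum over covariates of the absolute difference in arm means
        score = 0.0
        for k in range(len(cluster_covs[0])):
            mean1 = sum(cluster_covs[i][k] for i in arm1) / len(arm1)
            mean0 = sum(cluster_covs[i][k] for i in range(n) if i not in arm1) / (n - len(arm1))
            score += abs(mean1 - mean0)
        return score

    ranked = sorted(allocations, key=imbalance)
    candidate_set = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return rng.choice(candidate_set)

# Example: 12 clusters with 4 binary cluster-level covariates
random.seed(0)
covs = [[random.randint(0, 1) for _ in range(4)] for _ in range(12)]
print(sorted(constrained_allocation(covs, keep_fraction=0.1)))  # clusters allocated to arm 1
```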
Affiliation(s)
- Caroline Kristunas
- Department of Health Sciences, University of Leicester, Leicester, UK; Institute of Clinical Sciences, University of Birmingham, Birmingham, UK
- Michael Grayling
- Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Laura J Gray
- Department of Health Sciences, University of Leicester, Leicester, UK
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
11
Statistical analysis of publicly funded cluster randomised controlled trials: a review of the National Institute for Health Research Journals Library. Trials 2022;23:115. PMID: 35120567. PMCID: PMC8817506. DOI: 10.1186/s13063-022-06025-1.
Abstract
BACKGROUND In cluster randomised controlled trials (cRCTs), groups of individuals (rather than individuals) are randomised to minimise the risk of contamination, to use limited resources efficiently, or to solve logistic and administrative problems. A major concern in the primary analysis of cRCTs is the use of appropriate statistical methods to account for correlation among outcomes from a particular group/cluster. This review aimed to investigate the statistical methods used in practice for analysing the primary outcomes in publicly funded cluster randomised controlled trials, adherence to the CONSORT (Consolidated Standards of Reporting Trials) reporting guidelines for cRCTs, and the recruitment abilities of the cluster trial design. METHODS We manually searched the United Kingdom's National Institute for Health Research (NIHR) online Journals Library chronologically, from 1 January 1997 to 15 July 2021, for reports of cRCTs. Information on the statistical methods used in the primary analyses was extracted. One reviewer conducted the search and extraction, while two other independent reviewers supervised and validated 25% of the trials reviewed. RESULTS A total of 1942 reports published online in the NIHR Journals Library were screened for eligibility; 118 reports of cRCTs met the initial inclusion criteria, and of these, 79 reports containing the results of 86 trials, with 100 analysed primary outcomes, were finally included. Two primary outcomes were analysed at the cluster level using a generalized linear model. At the individual level, the generalized linear mixed model was the most commonly used statistical method (80%, 80/100), followed by regression with robust standard errors (7%) and then generalized estimating equations (6%). Ninety-five percent (95/100) of the primary outcomes in the trials were analysed with appropriate statistical methods that accounted for clustering, while 5% were not. The mean observed intracluster correlation coefficient (ICC) was 0.06 (SD 0.12; range -0.02 to 0.63), and the median value was 0.02 (IQR 0.001-0.060), although the observed ICCs for 42% of the analysed primary outcomes were not reported. CONCLUSIONS In practice, most of the publicly funded cluster trials adjusted for clustering using appropriate statistical methods, with most of the primary analyses done at the individual level using generalized linear mixed models. However, inadequate analysis and poor reporting of cluster trials published in the UK still occur, despite the availability of the CONSORT reporting guidelines for cluster trials published over a decade ago.
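Most primary analyses in this review used an individual-level mixed model with a random effect for cluster. The sketch below shows what that looks like for a continuous outcome on simulated data, using statsmodels; it illustrates the general approach only and is not code from any of the reviewed trials (a generalized linear mixed model would be needed for non-continuous outcomes).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cRCT-like data: 20 clusters of 30, cluster-level treatment, random cluster effect
rng = np.random.default_rng(42)
n_clusters, m = 20, 30
cluster = np.repeat(np.arange(n_clusters), m)
arm = np.repeat(rng.permutation([0] * 10 + [1] * 10), m)
u = rng.normal(0, 0.5, n_clusters)                     # between-cluster variation
y = 1.0 * arm + u[cluster] + rng.normal(0, 2.0, n_clusters * m)
df = pd.DataFrame({"y": y, "arm": arm, "cluster": cluster})

# Linear mixed model with a random intercept for cluster
fit = smf.mixedlm("y ~ arm", df, groups=df["cluster"]).fit()
print(fit.summary())

# Intracluster correlation implied by the fitted variance components
icc = fit.cov_re.iloc[0, 0] / (fit.cov_re.iloc[0, 0] + fit.scale)
print(round(icc, 3))
```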
12
Parker K, Nunns M, Xiao Z, Ford T, Ukoumunne OC. Characteristics and practices of school-based cluster randomised controlled trials for improving health outcomes in pupils in the United Kingdom: a methodological systematic review. BMC Med Res Methodol 2021;21:152. PMID: 34311695. PMCID: PMC8311976. DOI: 10.1186/s12874-021-01348-0.
Abstract
Background Cluster randomised trials (CRTs) are increasingly used to evaluate non-pharmacological interventions for improving child health. Although methodological challenges of CRTs are well documented, the characteristics of school-based CRTs with pupil health outcomes have not been systematically described. Our objective was to describe methodological characteristics of these studies in the United Kingdom (UK). Methods MEDLINE was systematically searched from inception to 30th June 2020. Included studies used the CRT design in schools and measured primary outcomes on pupils. Study characteristics were described using descriptive statistics. Results Of 3138 articles identified, 64 were included. CRTs with pupil health outcomes have been increasingly used in the UK school setting since the earliest included paper was published in 1993; 37 (58%) studies were published after 2010. Of the 44 studies that reported information, 93% included state-funded schools. Thirty six (56%) were exclusively in primary schools and 24 (38%) exclusively in secondary schools. Schools were randomised in 56 studies, classrooms in 6 studies, and year groups in 2 studies. Eighty percent of studies used restricted randomisation to balance cluster-level characteristics between trial arms, but few provided justification for their choice of balancing factors. Interventions covered 11 different health areas; 53 (83%) included components that were necessarily administered to entire clusters. The median (interquartile range) number of clusters and pupils recruited was 31.5 (21 to 50) and 1308 (604 to 3201), respectively. In half the studies, at least one cluster dropped out. Only 26 (41%) studies reported the intra-cluster correlation coefficient (ICC) of the primary outcome from the analysis; this was often markedly different to the assumed ICC in the sample size calculation. The median (range) ICC for school clusters was 0.028 (0.0005 to 0.21). Conclusions The increasing pool of school-based CRTs examining pupil health outcomes provides methodological knowledge and highlights design challenges. Data from these studies should be used to identify the best school-level characteristics for balancing the randomisation. Better information on the ICC of pupil health outcomes is required to aid the planning of future CRTs. Improved reporting of the recruitment process will help to identify barriers to obtaining representative samples of schools.
Affiliation(s)
- Kitty Parker
- NIHR Applied Research Collaboration South West Peninsula (PenARC), University of Exeter, Room 2.16, South Cloisters, St Luke's Campus, 79 Heavitree Rd, Exeter, EX1 2LU, UK.
- Michael Nunns
- College of Medicine and Health, University of Exeter, St Luke's Campus, Heavitree Road, Exeter, EX1 2LU, UK
- ZhiMin Xiao
- School of Health and Social Care, University of Essex, Colchester, CO4 3SQ, UK
- Tamsin Ford
- Department of Psychiatry, University of Cambridge, L5 Clifford Allbutt Building, Cambridge Biomedical Campus Box 58, Cambridge, CB2 0AH, UK
- Obioha C Ukoumunne
- NIHR Applied Research Collaboration South West Peninsula (PenARC), University of Exeter, Room 2.16, South Cloisters, St Luke's Campus, 79 Heavitree Rd, Exeter, EX1 2LU, UK
13
The role and challenges of cluster randomised trials for global health. Lancet Glob Health 2021;9:e701-e710. PMID: 33865475. DOI: 10.1016/s2214-109x(20)30541-6.
Abstract
Evaluating whether an intervention works when trialled in groups of individuals can pose complex challenges for clinical research. Cluster randomised controlled trials involve the random allocation of groups or clusters of individuals to receive an intervention, and they are commonly used in global health research. In this paper, we describe the potential reasons for the increasing popularity of cluster trials in low-income and middle-income countries. We also draw on key areas of global health research for an assessment of common trial planning practices, and we address their methodological shortcomings and pitfalls. Lastly, we discuss alternative approaches for population-level intervention trials that could be useful for research undertaken in low-income and middle-income countries for situations in which the use of cluster randomisation might not be appropriate.
14
Hemming K, Taljaard M, Moerbeek M, Forbes A. Contamination: How much can an individually randomized trial tolerate? Stat Med 2021;40:3329-3351. PMID: 33960514. DOI: 10.1002/sim.8958.
Abstract
Cluster randomization results in an increase in sample size compared with individual randomization, referred to as an efficiency loss. This efficiency loss is typically presented under an assumption of no contamination in the individually randomized trial. An alternative comparator is the sample size needed under individual randomization to detect the treatment effect attenuated by contamination. A general framework is provided for determining the extent of contamination that can be tolerated in an individually randomized trial before it requires a larger sample size than a cluster randomized design. Results are presented for a variety of cluster trial designs, including parallel arm, stepped-wedge and cluster crossover trials. Results reinforce what is expected: individually randomized trials can tolerate a surprisingly large amount of contamination before they become less efficient than cluster designs. We determine the point at which contamination means that an individually randomized design powered to detect the attenuated effect requires a larger sample size than cluster randomization powered to detect the nonattenuated effect. This critical rate is a simple function of the design effect for clustering and the design effect for multiple periods, as well as of the design effects for stratification or repeated measures under individual randomization. These findings are important for pragmatic comparisons between a novel treatment and usual care, as any bias due to contamination will only attenuate the true treatment effect. This is a bias that operates in a predictable direction. Yet cluster randomized designs with post-randomization recruitment without blinding are at high risk of bias due to differential recruitment across treatment arms. This sort of bias operates in an unpredictable direction. Thus, given that cluster randomized trials are generally at greater risk of biases that can operate in an unpredictable direction, the results presented here suggest that even in situations where there is a risk of contamination, individual randomization might still be the design of choice.
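The comparison being formalised can be illustrated for the simplest parallel-arm case: if contamination of a proportion c of one arm attenuates the detectable effect to (1 - c) times its true value, the individually randomized sample size inflates by 1/(1 - c)^2, which can be set against the usual design effect for clustering. The sketch below uses hypothetical numbers and deliberately ignores the multi-period, stratification and repeated-measures design effects covered in the paper.

```python
def inflation_individual(contamination):
    """Relative sample size under individual randomisation when a proportion
    'contamination' of the control arm effectively receives the intervention,
    attenuating the detectable effect to (1 - c) * delta."""
    return 1.0 / (1.0 - contamination) ** 2

def inflation_cluster(cluster_size, icc):
    """Relative sample size under cluster randomisation: the usual design effect."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical parallel-arm scenario: clusters of 50 and an ICC of 0.05
de = inflation_cluster(cluster_size=50, icc=0.05)        # design effect = 3.45
for c in (0.1, 0.3, 0.45, 0.5):
    ratio = inflation_individual(c) / de
    print(f"contamination {c:.2f}: inflation {inflation_individual(c):.2f}, "
          f"relative to cluster design {ratio:.2f}")
# In this scenario individual randomisation remains the smaller trial until
# roughly c = 0.46, where 1/(1 - c)^2 first exceeds the design effect of 3.45.
```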
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Mirjam Moerbeek
- Department of Methodology and Statistics, Faculty of Social and Behavioural Sciences, Utrecht University, Utrecht, The Netherlands
- Andrew Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
15
Jones BG, Streeter AJ, Baker A, Moyeed R, Creanor S. Bayesian statistics in the design and analysis of cluster randomised controlled trials and their reporting quality: a methodological systematic review. Syst Rev 2021;10:91. PMID: 33789717. PMCID: PMC8015172. DOI: 10.1186/s13643-021-01637-1.
Abstract
BACKGROUND In a cluster randomised controlled trial (CRCT), randomisation units are "clusters" such as schools or GP practices. This has methodological implications for study design and statistical analysis, since clustering often leads to correlation between observations which, if not accounted for, can lead to spurious conclusions of efficacy/effectiveness. Bayesian methodology offers a flexible, intuitive framework to deal with such issues, but its use within CRCT design and analysis appears limited. This review aims to explore and quantify the use of Bayesian methodology in the design and analysis of CRCTs, and appraise the quality of reporting against CONSORT guidelines. METHODS We sought to identify all reported/published CRCTs that incorporated Bayesian methodology and papers reporting development of new Bayesian methodology in this context, without restriction on publication date or location. We searched Medline and Embase and the Cochrane Central Register of Controlled Trials (CENTRAL). Reporting quality metrics according to the CONSORT extension for CRCTs were collected, as well as demographic data, type and nature of Bayesian methodology used, journal endorsement of CONSORT guidelines, and statistician involvement. RESULTS Twenty-seven publications were included, six from an additional hand search. Eleven (40.7%) were reports of CRCT results: seven (25.9%) were primary results papers and four (14.8%) reported secondary results. Thirteen papers (48.1%) reported Bayesian methodological developments, the remaining three (11.1%) compared different methods. Four (57.1%) of the primary results papers described the method of sample size calculation; none clearly accounted for clustering. Six (85.7%) clearly accounted for clustering in the analysis. All results papers reported use of Bayesian methods in the analysis but none in the design or sample size calculation. CONCLUSIONS The popularity of the CRCT design has increased rapidly in the last twenty years but this has not been mirrored by an uptake of Bayesian methodology in this context. Of studies using Bayesian methodology, there were some differences in reporting quality compared to CRCTs in general, but this study provided insufficient data to draw firm conclusions. There is an opportunity to further develop Bayesian methodology for the design and analysis of CRCTs in order to expand the accessibility, availability, and, ultimately, use of this approach.
Affiliation(s)
- Benjamin G Jones
- Medical Statistics, Faculty of Health: Medicine, Dentistry and Human Sciences, University of Plymouth, Room N15, ITTC Building 1, Plymouth Science Park, Plymouth, Devon, PL6 8BX, UK; NIHR ARC South West Peninsula (PenARC), College of Medicine and Health, University of Exeter, Exeter, Devon, UK
- Adam J Streeter
- Medical Statistics, Faculty of Health: Medicine, Dentistry and Human Sciences, University of Plymouth, Room N15, ITTC Building 1, Plymouth Science Park, Plymouth, Devon, PL6 8BX, UK; Klinische Epidemiologie, Institut für Epidemiologie und Sozialmedizin, Westfälische Wilhelms-Universität Münster, Münster, Germany
- Amy Baker
- Medical Statistics, Faculty of Health: Medicine, Dentistry and Human Sciences, University of Plymouth, Room N15, ITTC Building 1, Plymouth Science Park, Plymouth, Devon, PL6 8BX, UK
- Rana Moyeed
- School of Computing, Electronics and Mathematics, Faculty of Science and Engineering, University of Plymouth, Plymouth, Devon, UK
- Siobhan Creanor
- Medical Statistics, Faculty of Health: Medicine, Dentistry and Human Sciences, University of Plymouth, Plymouth, Devon, UK; Peninsula Clinical Trials Unit, Faculty of Health: Medicine, Dentistry and Human Sciences, University of Plymouth, Plymouth, Devon, UK; Exeter Clinical Trials Unit, College of Medicine and Health, University of Exeter, Exeter, Devon, UK
16
Intra-cluster correlation coefficients in primary care patients with type 2 diabetes and hypertension. Trials 2020;21:530. PMID: 32546189. PMCID: PMC7298818. DOI: 10.1186/s13063-020-04349-4.
Abstract
Introduction There are few sources of published data on intra-cluster correlation coefficients (ICCs) amongst patients with type 2 diabetes (T2D) and/or hypertension in primary care, particularly in low- and middle-income countries. ICC values are necessary for determining the sample sizes of cluster randomized trials. Hence, we aim to report the ICC values for a range of measures from a cluster-based interventional study conducted in Malaysia. Method Baseline data from a large study entitled Evaluation of Enhanced Primary Health Care interventions in public health clinics (EnPHC-EVA: Facility) were used in this analysis. Data from 40 public primary care clinics were collected through retrospective chart reviews and a patient exit survey. We calculated the ICCs for processes of care, clinical outcomes and patient experiences in patients with T2D and/or hypertension using the analysis of variance approach. Results Patient experience had the highest ICC values compared to processes of care and clinical outcomes. The ICC values ranged from 0.01 to 0.48 for processes of care. Generally, the ICC values for processes of care for patients with hypertension only are higher than those for T2D patients, with or without hypertension. However, both groups of patients have similar ICCs for antihypertensive medications use. In addition, similar ICC values were observed for clinical outcomes, ranging from 0.01 to 0.09. For patient experience, the ICCs were between 0.03 (proportion of patients who are willing to recommend the clinic to their friends and family) and 0.25 (for Patient Assessment of Chronic Illness Care item 9, Given a copy of my treatment plan). Conclusion The reported ICCs and their respective 95% confidence intervals for T2D and hypertension will be useful for estimating sample sizes and improving efficiency of cluster trials conducted in the primary care setting, particularly for low- and middle-income countries.
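The ICCs above were estimated with the analysis of variance approach; a minimal sketch of that estimator (one-way ANOVA with the usual adjusted mean cluster size) is shown below on made-up data. It is illustrative only and is not the study's code.

```python
import numpy as np

def anova_icc(values, clusters):
    """One-way ANOVA estimator of the intra-cluster correlation coefficient:
    ICC = (MSB - MSW) / (MSB + (m0 - 1) * MSW), with m0 the adjusted mean cluster size."""
    values, clusters = np.asarray(values, float), np.asarray(clusters)
    ids = np.unique(clusters)
    k, n = len(ids), len(values)
    sizes = np.array([np.sum(clusters == c) for c in ids])
    means = np.array([values[clusters == c].mean() for c in ids])
    grand = values.mean()
    ssb = np.sum(sizes * (means - grand) ** 2)
    ssw = sum(((values[clusters == c] - values[clusters == c].mean()) ** 2).sum() for c in ids)
    msb, msw = ssb / (k - 1), ssw / (n - k)
    m0 = (n - np.sum(sizes**2) / n) / (k - 1)      # adjusted mean cluster size
    return (msb - msw) / (msb + (m0 - 1) * msw)

# Hypothetical example: 5 clinics, systolic BP-like values
vals = [150, 148, 152, 140, 139, 143, 160, 158, 161, 145, 147, 150, 138, 136, 139]
clin = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]
print(round(anova_icc(vals, clin), 3))
```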
Collapse
|
17
|
Clyne B, Boland F, Murphy N, Murphy E, Moriarty F, Barry A, Wallace E, Devine T, Smith SM, Devane D, Murphy A, Fahey T. Quality, scope and reporting standards of randomised controlled trials in Irish Health Research: an observational study. Trials 2020; 21:494. [PMID: 32513240 PMCID: PMC7278139 DOI: 10.1186/s13063-020-04396-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Accepted: 05/08/2020] [Indexed: 12/13/2022] Open
Abstract
Background Despite efforts to improve the accuracy and transparency of the design, conduct, and reporting of randomised controlled trials (RCTs), deficiencies remain. Such deficiencies contribute to significant, avoidable waste of health research investment and impede reproducibility. This study aimed to synthesise and critically analyse changes over time in the conduct and reporting of internationally published evidence on patient and/or population health-oriented RCTs conducted in one country. Methods This observational study drew on systematic review methods. We searched six databases for published RCTs (database inception to December 2018) where ≥ 80% of participants were recruited in the Republic of Ireland. RCTs of interventions targeted at patients, providers and/or policy makers intended to improve health, healthcare or health research were included. For each study, screening, data extraction and methodological quality appraisal were conducted by one member of the author team. Results From 17,560 titles and abstracts, 752 unique RCTs were published in 745 papers between 1968 and 2018, with a steady year-on-year increase since 1968. The number of participants ranged from 2 to 8628. The majority were parallel design (86%) and classified as treatment evaluation. Of the 418 RCTs published since the introduction of mandatory clinical trial registration by the International Committee of Medical Journal Editors in 2005, 32% (n = 134) provided a trial registration number. This increased to 47% when considering only studies published between 2013 and 2018 (n = 232). Since the 1996 publication of the CONSORT statement, 16% of included RCTs made specific reference to a standardised reporting guideline, and this increased to 31% for more recent studies published between 2013 and 2018. Overall, 7% (n = 53) of studies referred to a published study protocol, increasing to 20% for studies published between 2013 and 2018. Conclusion Evidence from this single-country study of RCTs published in the international literature suggests that the overall number of trials, the number registered and the number referencing reporting guidelines have all increased steadily over time. Despite widespread endorsement of reporting standards, reporting of RCTs remains suboptimal in domains such as compliance with the CONSORT statement and prospective trial registration. Researchers, funders and journal editors, nationally and internationally, should continue to focus on improving reporting and examining avoidable waste of health research investment.
Collapse
Affiliation(s)
- Barbara Clyne
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland.
| | - Fiona Boland
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Norah Murphy
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Edel Murphy
- Public and Patient Involvement (PPI) Ignite, NUI Galway, Galway, Ireland
| | - Frank Moriarty
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Alan Barry
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Emma Wallace
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Tatyana Devine
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Susan M Smith
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| | - Declan Devane
- HRB-Trials Methodology Research Network, School of Nursing & Midwifery, NUI Galway, Galway, Ireland
| | - Andrew Murphy
- HRB Primary Care Clinical Trials Network Ireland, Department of General Practice, NUI Galway, Galway, Ireland
| | - Tom Fahey
- HRB Centre for Primary Care Research, Department of General Practice, Royal College of Surgeons in Ireland, 123 St Stephens Green, Dublin 2, Ireland
| |
Collapse
|
18
|
Mahoney A, Karatzias T, Halliday K, Dougal N. How important are Phase 1 interventions for complex interpersonal trauma? A pilot randomized control trial of a group psychoeducational intervention. Clin Psychol Psychother 2020; 27:597-610. [DOI: 10.1002/cpp.2447] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2019] [Revised: 03/17/2020] [Accepted: 03/17/2020] [Indexed: 01/03/2023]
Affiliation(s)
- Adam Mahoney
- Psychology Department Glasgow Caledonian University Glasgow UK
- Psychology Department HMP & YOI Cornton Vale Stirling UK
| | - Thanos Karatzias
- School of Health & Social Science Edinburgh Napier University Edinburgh UK
| | | | - Nadine Dougal
- School of Health & Social Science Edinburgh Napier University Edinburgh UK
| |
Collapse
|
19
|
[Influence of impact factor on reporting sample size calculations in publications on studies exemplified by AMD treatment: Cross-sectional investigation on the presence of sample size calculations in publications of RCTs on AMD treatment in journals with low and high impact factors]. Ophthalmologe 2020; 117:125-131. [PMID: 31201561 DOI: 10.1007/s00347-019-0924-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
BACKGROUND For scientific and ethical reasons randomized controlled clinical trials (RCTs) should be based on a sample size calculation. The CONSORT statement, an established publication guideline for transparent study reporting, requires a sample size calculation in every study publication. OBJECTIVE The availability of sample size calculations in RCT publications on treatment of age-related macular degeneration (AMD) was investigated. The primary hypothesis of this investigation compared the prevalence of reported sample size calculations between journals with higher (≥5) versus lower (<5) impact factors (IF). MATERIAL AND METHODS It was examined whether information on sample size calculation was available in a series of 97 publications of RCTs on AMD treatment published between 2004 and 2014. RESULTS Only 46 out of 97 (47%) study publications provided information on the reason for the number of patients enrolled. The comparison of publications from journals with an IF ≥ 5 (63% of 30 publications) and from journals with an IF < 5 (40% of 67 publications) showed a statistically significant difference of 23% in the frequency of reported sample size calculations (95% confidence interval, CI: 2% to 44%). Of the publications published before 2010, 43% reported a sample size calculation versus 51% of the publications afterwards. CONCLUSION Publications in journals with higher IF more frequently reported a sample size calculation. More than 50% of the publications did not report any sample size calculation. Authors and reviewers of publications should pay more attention to the explicit reporting of sample size calculations.
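The reported 23% difference and its confidence interval can be reproduced approximately from the figures in the abstract. The sketch below assumes the counts implied by the percentages (19 of 30 and 27 of 67) and uses a simple Wald interval, which may differ slightly from the method actually used in the paper.

```python
from math import sqrt

# Counts implied by the abstract: 63% of 30 high-IF papers, 40% of 67 low-IF papers (assumed).
x1, n1 = 19, 30
x2, n2 = 27, 67

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # Wald standard error of the difference
z = 1.96                                              # ~95% confidence
lo, hi = diff - z * se, diff + z * se

print(f"difference = {diff:.1%}, 95% CI ({lo:.1%} to {hi:.1%})")
# Gives roughly: difference = 23.0%, 95% CI (2.2% to 43.9%)
```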
Collapse
|
20
|
Murray DM, Taljaard M, Turner EL, George SM. Essential Ingredients and Innovations in the Design and Analysis of Group-Randomized Trials. Annu Rev Public Health 2019; 41:1-19. [PMID: 31869281 DOI: 10.1146/annurev-publhealth-040119-094027] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This article reviews the essential ingredients and innovations in the design and analysis of group-randomized trials. The methods literature for these trials has grown steadily since they were introduced to the biomedical research community in the late 1970s, and we summarize those developments. We review, in addition to the group-randomized trial, methods for two closely related designs, the individually randomized group treatment trial and the stepped-wedge group-randomized trial. After describing the essential ingredients for these designs, we review the most important developments in the evolution of their methods using a new bibliometric tool developed at the National Institutes of Health. We then discuss the questions to be considered when selecting from among these designs or selecting the traditional randomized controlled trial. We close with a review of current methods for the analysis of data from these designs, a case study to illustrate each design, and a brief summary.
Collapse
Affiliation(s)
- David M Murray
- Office of Disease Prevention, National Institutes of Health, North Bethesda, Maryland 20892, USA
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, The Ottawa Hospital, Civic Campus, Ottawa, Ontario K1Y 4E9, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario K1Y 4E9, Canada
| | - Elizabeth L Turner
- Department of Biostatistics and Bioinformatics, and Duke Global Health Institute, Duke University, Durham, North Carolina 27710, USA;
| | - Stephanie M George
- Office of Disease Prevention, National Institutes of Health, North Bethesda, Maryland 20892, USA
| |
Collapse
|
21
|
Copsey B, Thompson JY, Vadher K, Ali U, Dutton SJ, Fitzpatrick R, Lamb SE, Cook JA. Sample size calculations are poorly conducted and reported in many randomized trials of hip and knee osteoarthritis: results of a systematic review. J Clin Epidemiol 2018; 104:52-61. [DOI: 10.1016/j.jclinepi.2018.08.013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Revised: 07/20/2018] [Accepted: 08/17/2018] [Indexed: 12/22/2022]
|
22
|
Eichner FA, Groenwold RHH, Grobbee DE, Oude Rengerink K. Systematic review showed that stepped-wedge cluster randomized trials often did not reach their planned sample size. J Clin Epidemiol 2018; 107:89-100. [PMID: 30458261 DOI: 10.1016/j.jclinepi.2018.11.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 10/15/2018] [Accepted: 11/14/2018] [Indexed: 12/14/2022]
Abstract
OBJECTIVE To determine how often stepped-wedge cluster randomized controlled trials reach their planned sample size, and what reasons are reported for choosing a stepped-wedge trial design. STUDY DESIGN AND SETTING We conducted a PubMed literature search (period 2012 to 2017) and included articles describing the results of a stepped-wedge cluster randomized trial. We calculated the percentage of studies reaching their prespecified number of participants and clusters, and we summarized the reasons for choosing the stepped-wedge trial design as well as difficulties during enrollment. RESULTS Forty-six individual stepped-wedge studies from a total of 53 articles were included in our review. Of the 35 studies for which the recruitment rate could be calculated, 69% recruited their planned number of participants, and 80% recruited the planned number of clusters. Ethical reasons were the most common motivation for choosing the stepped-wedge trial design. The most important difficulties during study conduct were dropout of clusters and delayed implementation of the intervention. CONCLUSION About half of recently published stepped-wedge trials reached their planned sample size, indicating that recruitment is also a major problem in these trials. Still, the stepped-wedge trial design can yield practical, ethical, and methodological advantages.
Collapse
Affiliation(s)
- Felizitas A Eichner
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands.
| | - Rolf H H Groenwold
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands; Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Diederick E Grobbee
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Katrien Oude Rengerink
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands
| |
Collapse
|
23
|
Hemming K, Taljaard M, McKenzie JE, Hooper R, Copas A, Thompson JA, Dixon-Woods M, Aldcroft A, Doussau A, Grayling M, Kristunas C, Goldstein CE, Campbell MK, Girling A, Eldridge S, Campbell MJ, Lilford RJ, Weijer C, Forbes AB, Grimshaw JM. Reporting of stepped wedge cluster randomised trials: extension of the CONSORT 2010 statement with explanation and elaboration. BMJ 2018; 363:k1614. [PMID: 30413417 PMCID: PMC6225589 DOI: 10.1136/bmj.k1614] [Citation(s) in RCA: 244] [Impact Index Per Article: 34.9] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/20/2018] [Indexed: 12/14/2022]
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham B15 2TT, UK
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
| | - Joanne E McKenzie
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
| | - Richard Hooper
- Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
| | - Andrew Copas
- London Hub for Trials Methodology Research, MRC Clinical Trials Unit at University College London, London, UK
| | - Jennifer A Thompson
- London Hub for Trials Methodology Research, MRC Clinical Trials Unit at University College London, London, UK
- Department for Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
| | - Mary Dixon-Woods
- The Healthcare Improvement Studies Institute, University of Cambridge, Cambridge Biomedical Campus, Cambridge, UK
| | | | - Adelaide Doussau
- Biomedical Ethics Unit, McGill University School of Medicine, Montreal, QC, Canada
| | | | | | - Cory E Goldstein
- Rotman Institute of Philosophy, Western University, London, ON, Canada
| | | | - Alan Girling
- Institute of Applied Health Research, University of Birmingham, Birmingham B15 2TT, UK
| | - Sandra Eldridge
- Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
| | | | | | - Charles Weijer
- Rotman Institute of Philosophy, Western University, London, ON, Canada
| | - Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
| | - Jeremy M Grimshaw
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
24
|
Murray DM, Pals SL, George SM, Kuzmichev A, Lai GY, Lee JA, Myles RL, Nelson SM. Design and analysis of group-randomized trials in cancer: A review of current practices. Prev Med 2018; 111:241-247. [PMID: 29551717 PMCID: PMC5930119 DOI: 10.1016/j.ypmed.2018.03.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Revised: 01/31/2018] [Accepted: 03/09/2018] [Indexed: 02/07/2023]
Abstract
The purpose of this paper is to summarize current practices for the design and analysis of group-randomized trials involving cancer-related risk factors or outcomes and to offer recommendations to improve future trials. We searched for group-randomized trials involving cancer-related risk factors or outcomes that were published or available online in peer-reviewed journals in 2011-15. During 2016-17, in Bethesda, MD, we reviewed 123 articles from 76 journals to characterize their design and their methods for sample size estimation and data analysis. Only 66 (53.7%) of the articles reported appropriate methods for sample size estimation. Only 63 (51.2%) reported exclusively appropriate methods for analysis. These findings suggest that many investigators do not adequately attend to the methodological challenges inherent in group-randomized trials. These practices can lead to underpowered studies, to an inflated type 1 error rate, and to inferences that mislead readers. Investigators should work with biostatisticians or other methodologists familiar with these issues. Funders and editors should ensure careful methodological review of applications and manuscripts. Reviewers should ensure that studies are properly planned and analyzed. These steps are needed to improve the rigor and reproducibility of group-randomized trials. The Office of Disease Prevention (ODP) at the National Institutes of Health (NIH) has taken several steps to address these issues. ODP offers an online course on the design and analysis of group-randomized trials. ODP is working to increase the number of methodologists who serve on grant review panels. ODP has developed standard language for the Application Guide and the Review Criteria to draw investigators' attention to these issues. Finally, ODP has created a new Research Methods Resources website to help investigators, reviewers, and NIH staff better understand these issues.
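To make concrete what an appropriate sample size or power calculation for a group-randomized trial must account for, here is a minimal sketch: the variance is inflated by the design effect and the test uses cluster-level degrees of freedom. The parameter values are illustrative assumptions, not figures from the review.

```python
import math
from scipy import stats

def grt_power(d, clusters_per_arm, cluster_size, icc, alpha=0.05):
    """Approximate power for a two-arm group-randomized trial, standardized effect size d."""
    deff = 1 + (cluster_size - 1) * icc                 # design effect (variance inflation)
    n_per_arm = clusters_per_arm * cluster_size
    se = math.sqrt(2 * deff / n_per_arm)                # SE of the standardized difference
    ncp = d / se                                        # noncentrality parameter
    df = 2 * (clusters_per_arm - 1)                     # cluster-level degrees of freedom
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(tcrit, df, ncp)) + stats.nct.cdf(-tcrit, df, ncp)

# Illustrative inputs: 12 clusters per arm, 50 participants per cluster, ICC = 0.02.
print(f"power = {grt_power(d=0.3, clusters_per_arm=12, cluster_size=50, icc=0.02):.2f}")
```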
Collapse
Affiliation(s)
- David M Murray
- Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States.
| | - Sherri L Pals
- Health Informatics, Data Management, and Statistics Branch, Division of Global HIV and Tuberculosis, Center for Global Health, US Centers for Disease Control and Prevention, Atlanta, GA, United States
| | - Stephanie M George
- Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States
| | - Andrey Kuzmichev
- Office of the Surgeon General, Office of the Assistant Secretary for Health, Department of Health and Human Services, United States
| | - Gabriel Y Lai
- Environmental Epidemiology Branch, Division of Cancer Control and Population Sciences, National Cancer Institute, National Institutes of Health, Rockville, MD, United States
| | - Jocelyn A Lee
- Project Genomics Evidence Neoplasia Information Exchange (GENIE), Executive Office, American Association for Cancer Research, Philadelphia, PA, United States
| | - Ranell L Myles
- Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States
| | - Shakira M Nelson
- Scientific Programs, American Association for Cancer Research, Philadelphia, PA, United States
| |
Collapse
|
25
|
Abstract
OBJECTIVES To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. SETTING Any, not limited to healthcare settings. PARTICIPANTS Any participants taking part in an SW-CRT published up to March 2016. PRIMARY AND SECONDARY OUTCOME MEASURES The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. RESULTS Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for it during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the observed cluster sizes were between 29% and 480% of those assumed. CONCLUSIONS Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed.
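The coefficient of variation of cluster size used as the primary outcome above, and how it can feed into a variance inflation factor, can be sketched as follows. The adjustment shown is the commonly used parallel-CRT approximation DE ≈ 1 + ((CV² + 1)·m − 1)·ρ for unequal cluster sizes, applied to made-up cluster sizes; it is not the paper's own calculation.

```python
import numpy as np

cluster_sizes = np.array([12, 30, 45, 18, 60, 25, 90, 15, 40, 22])  # hypothetical sizes
icc = 0.05                                                          # assumed ICC

m_bar = cluster_sizes.mean()
cv = cluster_sizes.std(ddof=1) / m_bar                # coefficient of variation of cluster size

deff_equal = 1 + (m_bar - 1) * icc                    # design effect assuming equal sizes
deff_unequal = 1 + ((cv**2 + 1) * m_bar - 1) * icc    # common unequal-size approximation

print(f"mean cluster size = {m_bar:.1f}, CV = {cv:.2f}")
print(f"design effect (equal sizes)   = {deff_equal:.2f}")
print(f"design effect (unequal sizes) = {deff_unequal:.2f}")
```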
Collapse
Affiliation(s)
| | - Tom Morris
- Leicester Clinical Trials Unit, University of Leicester, Leicester, UK
| | - Laura Gray
- Department of Health Sciences, University of Leicester, Leicester, UK
| |
Collapse
|
26
|
Goulão B, MacLennan G, Ramsay C. The split-plot design was useful for evaluating complex, multilevel interventions, but there is need for improvement in its design and report. J Clin Epidemiol 2017; 96:120-125. [PMID: 29113938 DOI: 10.1016/j.jclinepi.2017.10.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2017] [Revised: 09/20/2017] [Accepted: 10/30/2017] [Indexed: 01/08/2023]
Abstract
OBJECTIVES To describe the sample size calculation, analysis and reporting of split-plot (S-P) randomized controlled trials in health care (trials that use two units of randomization: one at the cluster level and one at a level lower than the cluster). STUDY DESIGN AND SETTING We carried out a comprehensive search in the EMBASE database from 1946 to 2016. Health care trials with a S-P design in human subjects were included. Three authors screened and assessed the studies, and data were extracted on methodology and reporting standards based on CONSORT. RESULTS Eighteen S-P studies were included, with authors using nine different designations to describe them. Units of randomization were unclear in nine abstracts. An explicit rationale for choosing the design was not given. Ten studies presented a sample size calculation accounting for clustering; their analyses were consistent with this. Participant flow diagrams were presented but were incomplete in 14 articles. CONCLUSION S-P designs can be useful complex designs but challenging to report. Researchers need to clearly describe the rationale, sample size calculation, and participant flow. We provide a suggested CONSORT-style participant flow diagram to aid reporting. There is a need for more research regarding sample size calculation for S-P designs.
Collapse
Affiliation(s)
- Beatriz Goulão
- Health Services Research Unit, University of Aberdeen, 3rd Floor, Health Sciences Building, Foresterhill, Aberdeen AB25 2ZD, UK.
| | - Graeme MacLennan
- Health Services Research Unit, University of Aberdeen, 3rd Floor, Health Sciences Building, Foresterhill, Aberdeen AB25 2ZD, UK
| | - Craig Ramsay
- Health Services Research Unit, University of Aberdeen, 3rd Floor, Health Sciences Building, Foresterhill, Aberdeen AB25 2ZD, UK
| |
Collapse
|
27
|
Siebenhofer A, Paulitsch MA, Pregartner G, Berghold A, Jeitler K, Muth C, Engler J. Cluster-randomized controlled trials evaluating complex interventions in general practices are mostly ineffective: a systematic review. J Clin Epidemiol 2017; 94:85-96. [PMID: 29111470 DOI: 10.1016/j.jclinepi.2017.10.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 09/14/2017] [Accepted: 10/17/2017] [Indexed: 01/03/2023]
Abstract
OBJECTIVES The aim of this study was to evaluate how frequently complex interventions are shown to be superior to routine care in general practice-based cluster-randomized controlled studies (c-RCTs) and to explore whether potential differences explain results that come out in favor of a complex intervention. STUDY DESIGN AND SETTING We performed an unrestricted search in the Central Register of Controlled Trials, MEDLINE, and EMBASE. We included all c-RCTs with a patient-relevant primary outcome in a general practice setting and at least 1-year follow-up. We extracted effect sizes, P-values, intracluster correlation coefficients (ICCs), and 22 quality aspects. RESULTS We identified 29 trials with 99 patient-relevant primary outcomes. After adjustment for multiple testing on a trial level, four outcomes (4%) in four studies (14%) remained statistically significant. Of the 11 studies that reported ICCs, 8 observed an ICC equal to or smaller than the assumed ICC. In 16 of the 17 studies with an available sample size calculation, the observed effect sizes were smaller than anticipated. CONCLUSION More than 85% of the c-RCTs failed to demonstrate a beneficial effect on a predefined primary endpoint. All but one study were overly optimistic with regard to the expected treatment effect. This highlights the importance of weighing up the potential merit of new treatments and planning prospectively when designing clinical studies in a general practice setting.
Collapse
Affiliation(s)
- Andrea Siebenhofer
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany; Institute of General Practice and Evidence-based Health Services Research, Medical University of Graz, Auenbruggerplatz 2/9/IV, Graz 8036, Austria.
| | - Michael A Paulitsch
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| | - Gudrun Pregartner
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Andrea Berghold
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Klaus Jeitler
- Institute of General Practice and Evidence-based Health Services Research, Medical University of Graz, Auenbruggerplatz 2/9/IV, Graz 8036, Austria; Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Christiane Muth
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| | - Jennifer Engler
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| |
Collapse
|
28
|
Turner EL, Li F, Gallis JA, Prague M, Murray DM. Review of Recent Methodological Developments in Group-Randomized Trials: Part 1-Design. Am J Public Health 2017; 107:907-915. [PMID: 28426295 PMCID: PMC5425852 DOI: 10.2105/ajph.2017.303706] [Citation(s) in RCA: 113] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/05/2017] [Indexed: 11/04/2022]
Abstract
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We highlight the developments of the past 13 years in design, with a companion article focusing on developments in analysis. As a pair, these articles update the 2004 review. We discuss developments in the topics of the earlier review (e.g., clustering, matching, and individually randomized group-treatment trials) and in new topics, including constrained randomization and a range of randomized designs that are alternatives to the standard parallel-arm GRT. These include the stepped-wedge GRT, the pseudocluster randomized trial, and the network-randomized GRT, which, like the parallel-arm GRT, require clustering to be accounted for in both their design and analysis.
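Constrained randomization, mentioned above, can be illustrated with a small sketch: enumerate candidate allocations of clusters to arms, keep only those that are well balanced on a baseline covariate, and pick one at random. This is a generic illustration with made-up cluster data and an arbitrary 10% balance threshold, not code from the reviewed papers.

```python
import itertools
import random

random.seed(1)

# Hypothetical baseline covariate (e.g., clinic size) for 8 clusters.
covariate = {"A": 120, "B": 340, "C": 200, "D": 90, "E": 260, "F": 150, "G": 310, "H": 180}
clusters = sorted(covariate)

# Enumerate all ways to split the clusters into two equal arms and score the imbalance.
candidates = []
for arm1 in itertools.combinations(clusters, len(clusters) // 2):
    arm2 = [c for c in clusters if c not in arm1]
    imbalance = abs(sum(covariate[c] for c in arm1) - sum(covariate[c] for c in arm2))
    candidates.append((imbalance, arm1, tuple(arm2)))

# Constrained set: keep only the 10% best-balanced allocations, then randomize among them.
candidates.sort(key=lambda t: t[0])
constrained = candidates[: max(1, len(candidates) // 10)]
imbalance, intervention, control = random.choice(constrained)

print("intervention clusters:", intervention)
print("control clusters:     ", control)
print("covariate imbalance:  ", imbalance)
```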
Collapse
Affiliation(s)
- Elizabeth L Turner
- Elizabeth L. Turner and John A. Gallis are with the Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, and the Duke Global Health Institute, Duke University. Fan Li is with the Department of Biostatistics and Bioinformatics, Duke University. Melanie Prague is with the Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, and Inria, project team SISTM, Bordeaux, France. David M. Murray is with the Office of Disease Prevention, Division of Program Coordination and Strategic Planning, and the Office of the Director, National Institutes of Health, Rockville, MD
| | - Fan Li
- Elizabeth L. Turner and John A. Gallis are with the Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, and the Duke Global Health Institute, Duke University. Fan Li is with the Department of Biostatistics and Bioinformatics, Duke University. Melanie Prague is with the Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, and Inria, project team SISTM, Bordeaux, France. David M. Murray is with the Office of Disease Prevention, Division of Program Coordination and Strategic Planning, and the Office of the Director, National Institutes of Health, Rockville, MD
| | - John A Gallis
- Elizabeth L. Turner and John A. Gallis are with the Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, and the Duke Global Health Institute, Duke University. Fan Li is with the Department of Biostatistics and Bioinformatics, Duke University. Melanie Prague is with the Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, and Inria, project team SISTM, Bordeaux, France. David M. Murray is with the Office of Disease Prevention, Division of Program Coordination and Strategic Planning, and the Office of the Director, National Institutes of Health, Rockville, MD
| | - Melanie Prague
- Elizabeth L. Turner and John A. Gallis are with the Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, and the Duke Global Health Institute, Duke University. Fan Li is with the Department of Biostatistics and Bioinformatics, Duke University. Melanie Prague is with the Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, and Inria, project team SISTM, Bordeaux, France. David M. Murray is with the Office of Disease Prevention, Division of Program Coordination and Strategic Planning, and the Office of the Director, National Institutes of Health, Rockville, MD
| | - David M Murray
- Elizabeth L. Turner and John A. Gallis are with the Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, and the Duke Global Health Institute, Duke University. Fan Li is with the Department of Biostatistics and Bioinformatics, Duke University. Melanie Prague is with the Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, and Inria, project team SISTM, Bordeaux, France. David M. Murray is with the Office of Disease Prevention, Division of Program Coordination and Strategic Planning, and the Office of the Director, National Institutes of Health, Rockville, MD
| |
Collapse
|
29
|
Quality of sample size estimation in trials of medical devices: high-risk devices for neurological conditions as example. Int J Technol Assess Health Care 2017; 33:103-110. [PMID: 28502271 DOI: 10.1017/s0266462317000265] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
BACKGROUND The aim of this study was to assess the quality of reporting of sample size calculation and underlying design assumptions in pivotal trials of high-risk medical devices (MDs) for neurological conditions. METHODS Systematic review of research protocols for publicly registered randomized controlled trials (RCTs). In the absence of a published protocol, principal investigators were contacted for additional data. To be included, trials had to investigate a high-risk MD, be registered between 2005 and 2015, and address stroke, headache disorders, or epilepsy as case-sample indications within central nervous system diseases. Extraction of key methodological parameters for sample size calculation was performed independently and peer-reviewed. RESULTS In a final sample of seventy-one eligible trials, we collected data from thirty-one trials. Eighteen protocols were obtained from the public domain or principal investigators. Data availability decreased during the extraction process, with almost all data available for stroke-related trials. Of the thirty-one trials with sample size information available, twenty-six reported a predefined calculation and underlying assumptions. Justification was given in twenty trials and evidence for parameter estimation in sixteen. Estimates were most often based on previous research, including RCTs and observational data. Observational data were predominantly represented by retrospective designs. Other references for parameter estimation indicated a lower level of evidence. CONCLUSIONS Our systematic review of trials on high-risk MDs confirms previous research, which has documented deficiencies regarding data availability and a lack of reporting on sample size calculation. More effort is needed to ensure both that relevant sources, that is, original research protocols, are publicly available and that reporting requirements are standardized.
Collapse
|
30
|
Hemming K, Taljaard M, Forbes A. Analysis of cluster randomised stepped wedge trials with repeated cross-sectional samples. Trials 2017; 18:101. [PMID: 28259174 PMCID: PMC5336660 DOI: 10.1186/s13063-017-1833-7] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2016] [Accepted: 02/06/2017] [Indexed: 11/10/2022] Open
Abstract
Background The stepped wedge cluster randomised trial (SW-CRT) is increasingly being used to evaluate policy or service delivery interventions. However, there is a dearth of trials literature addressing analytical approaches to the SW-CRT. Perhaps as a result, a significant number of published trials have major methodological shortcomings, including failure to adjust for secular trends at the analysis stage. Furthermore, the commonly used analytical framework proposed by Hussey and Hughes makes several assumptions. Methods We highlight the assumptions implicit in the basic SW-CRT analytical model proposed by Hussey and Hughes. We consider how simple modifications of the basic model, using both random and fixed effects, can be used to accommodate deviations from the underlying assumptions. We consider the implications of these modifications for the intracluster correlation coefficients. In a case study, the importance of adjusting for the secular trend is illustrated. Results The basic SW-CRT model includes a fixed effect for time, implying a common underlying secular trend across steps and clusters. It also includes a single term for treatment, implying a constant shift in this trend under the treatment. When these assumptions are not realistic, simple modifications can be implemented to allow the secular trend to vary across clusters and the treatment effect to vary across clusters or time. In our case study, the naïve treatment effect estimate (adjusted for clustering but unadjusted for time) suggests a beneficial effect. However, after adjusting for the underlying secular trend, we demonstrate a reversal of the treatment effect. Conclusion Due to the inherent confounding of the treatment effect with time, analysis of a SW-CRT should always account for secular trends or risk biased estimates of the treatment effect. Furthermore, the basic model proposed by Hussey and Hughes makes a number of important assumptions. Consideration needs to be given to the appropriate model choice at the analysis stage. We provide Stata code to implement the proposed analyses in the illustrative case study. Electronic supplementary material The online version of this article (doi:10.1186/s13063-017-1833-7) contains supplementary material, which is available to authorized users.
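The basic model discussed above (fixed period effects, a single treatment term, and a random cluster intercept) can be sketched with a generic mixed-model fit. This is not the Stata code the authors supply; it is a hedged Python analogue on simulated long-format SW-CRT data whose column names (y, period, treatment, cluster) are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulate a small cross-sectional stepped-wedge data set (hypothetical).
clusters, periods, n_per_cell = 6, 7, 20
rows = []
for c in range(clusters):
    u_c = rng.normal(0, 0.5)                       # random cluster effect
    switch = c + 1                                 # period at which cluster c crosses over
    for t in range(periods):
        trt = int(t >= switch)
        mu = 1.0 + 0.1 * t + 0.4 * trt + u_c       # secular trend + treatment effect
        y = mu + rng.normal(0, 1, size=n_per_cell)
        rows.append(pd.DataFrame({"cluster": c, "period": t, "treatment": trt, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Hussey & Hughes-style model: fixed categorical period effects, a fixed treatment
# effect, and a random intercept for cluster. Omitting C(period) would leave the
# treatment effect confounded with the secular trend.
model = smf.mixedlm("y ~ C(period) + treatment", data=df, groups=df["cluster"])
fit = model.fit(reml=True)
print(fit.summary())
```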
Collapse
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK.
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, ON, K1Y4E9, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Andrew Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
| |
Collapse
|
31
|
Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. The quality of reporting in cluster randomised crossover trials: proposal for reporting items and an assessment of reporting quality. Trials 2016; 17:575. [PMID: 27923384 PMCID: PMC5142135 DOI: 10.1186/s13063-016-1685-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Accepted: 11/04/2016] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The cluster randomised crossover (CRXO) design is gaining popularity in trial settings where individual randomisation or parallel group cluster randomisation is not feasible or practical. Our aim is to stimulate discussion on the content of a reporting guideline for CRXO trials and to assess the reporting quality of published CRXO trials. METHODS We undertook a systematic review of CRXO trials. Searches of MEDLINE, EMBASE, and CINAHL Plus as well as citation searches of CRXO methodological articles were conducted to December 2014. Reporting quality was assessed against modified items from the 2010 CONSORT statement and its 2012 extension to cluster trials, as well as other proposed quality measures. RESULTS Of the 3425 records identified through database searching, 83 trials met the inclusion criteria. Trials were infrequently identified as "cluster randomis(z)ed crossover" in the title (n = 7, 8%) or abstract (n = 21, 25%), and a rationale for the design was infrequently provided (n = 20, 24%). Design parameters such as the number of clusters and number of periods were well reported. Discussion of carryover took place in only 17 trials (20%). Sample size methods were reported in only 58% (n = 48) of trials. A range of approaches was used to report baseline characteristics. The analysis method was not adequately reported in 23% (n = 19) of trials. The observed within-cluster within-period intracluster correlation and within-cluster between-period intracluster correlation for the primary outcome data were not reported in any trial. The potential for selection, performance, and detection bias could be evaluated in 30%, 81%, and 70% of trials, respectively. CONCLUSIONS There is a clear need to improve the quality of reporting in CRXO trials. Given the unique features of a CRXO trial, it is important to develop a CONSORT extension. Consensus amongst trialists on the content of such a guideline is essential.
Collapse
Affiliation(s)
- Sarah J Arnup
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia
| | - Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia
| | - Brennan C Kahan
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, London, E1 2AB, UK
| | - Katy E Morgan
- Medical Statistics Department, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
| | - Joanne E McKenzie
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia.
| |
Collapse
|
32
|
Sheng C, Peng W, Chen Z, Cao Y, Gong W, Xia ZA, Wang Y, Su N, Wang Z. Impact of 2, 3, 5, 4'-tetrahydroxystilbene-2-O-β-D-glucoside on cognitive deficits in animal models of Alzheimer's disease: a systematic review. BMC COMPLEMENTARY AND ALTERNATIVE MEDICINE 2016; 16:320. [PMID: 27565551 PMCID: PMC5002158 DOI: 10.1186/s12906-016-1313-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2015] [Accepted: 08/23/2016] [Indexed: 12/17/2022]
Abstract
BACKGROUND The efficacy of 2, 3, 5, 4'-tetrahydroxystilbene-2-O-β-D-glucoside (TSG) treatment on cognitive decline in individuals with Alzheimer's disease (AD) has not been investigated. Therefore, we systematically reviewed the effect of TSG on cognitive deficits in rodent models of AD. METHODS We identified eligible studies published from January 1980 to April 2015 by searching seven electronic databases. We assessed the study quality, evaluated the efficacy of TSG treatment, and performed a stratified meta-analysis and meta-regression analysis to assess the influence of study design on TSG efficacy. RESULTS Among a total of 381 publications, 18 fulfilled our inclusion criteria. The overall methodological quality of these studies was poor. The meta-analysis revealed a statistically significant benefit of TSG on acquisition memory (standardized mean difference [SMD] = -1.46; 95% CI: -1.81 to -1.10; P < 0.0001) and retention memory (SMD = 1.93; 95% CI: 1.40 to 2.46; P < 0.0001) in experimental models of AD. The stratified analysis revealed a significantly higher effect size for both acquisition and retention memory in studies that used mixed-sex models and a significantly higher effect size for acquisition memory in studies that used transgenic models. CONCLUSIONS Our meta-analysis highlights a significantly better treatment effect in rodent AD models that received TSG than in those that did not. These findings indicate a potential therapeutic role of TSG in AD therapy. However, additional well-designed and detailed experimental studies are needed to evaluate the safety of TSG.
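A random-effects pooling of standardized mean differences, of the kind summarized above, can be sketched with a DerSimonian-Laird estimator. The per-study SMDs and standard errors below are invented for illustration; they are not the 18 TSG studies, and the original review may have used different software or weighting.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their standard errors.
smd = np.array([-1.2, -0.8, -1.9, -1.4, -0.6])
se = np.array([0.35, 0.40, 0.50, 0.30, 0.45])

# Fixed-effect (inverse-variance) pooling, used as the starting point.
w_fixed = 1 / se**2
pooled_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (smd - pooled_fixed) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling with tau^2 added to each study's variance.
w_random = 1 / (se**2 + tau2)
pooled = np.sum(w_random * smd) / np.sum(w_random)
se_pooled = np.sqrt(1 / np.sum(w_random))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```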
Collapse
|
33
|
Improving Power and Sample Size Calculation in Rehabilitation Trial Reports: A Methodological Assessment. Arch Phys Med Rehabil 2016; 97:1195-201. [DOI: 10.1016/j.apmr.2016.02.013] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2015] [Revised: 02/16/2016] [Accepted: 02/16/2016] [Indexed: 11/19/2022]
|
34
|
Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. Appropriate statistical methods were infrequently used in cluster-randomized crossover trials. J Clin Epidemiol 2016; 74:40-50. [DOI: 10.1016/j.jclinepi.2015.11.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2015] [Revised: 10/22/2015] [Accepted: 11/20/2015] [Indexed: 10/22/2022]
|
35
|
Martin J, Taljaard M, Girling A, Hemming K. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials. BMJ Open 2016; 6:e010166. [PMID: 26846897 PMCID: PMC4746455 DOI: 10.1136/bmjopen-2015-010166] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 11/09/2015] [Accepted: 12/03/2015] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. METHODS We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. RESULTS We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5-6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportion reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor, with only 3 out of 24 (13%) of these studies allowing for repeated measures. DISCUSSION The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of appropriate reporting guidelines and for methodological development to match the proliferation of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs.
Collapse
Affiliation(s)
- James Martin
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Ontario, Canada
| | - Alan Girling
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| | - Karla Hemming
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| |
Collapse
|
36
|
Sheng C, Peng W, Xia ZA, Wang Y, Chen Z, Su N, Wang Z. The impact of ginsenosides on cognitive deficits in experimental animal studies of Alzheimer's disease: a systematic review. Altern Ther Health Med 2015; 15:386. [PMID: 26497388 PMCID: PMC4619356 DOI: 10.1186/s12906-015-0894-y] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Accepted: 10/04/2015] [Indexed: 02/01/2023]
Abstract
Background The efficacy of ginsenoside treatment on cognitive decline in individuals with Alzheimer’s disease (AD) has yet to be investigated. In this study, we conducted a systematic review to evaluate the effect of ginsenosides on cognitive deficits in experimental rodent AD models. Methods We identified eligible studies by searching seven electronic databases spanning January 1980 to October 2014. We assessed the study quality, evaluated the efficacy of ginsenoside treatment, and performed a stratified meta-analysis and meta-regression analysis to assess the influence of the study design on ginsenoside efficacy. Results Twelve studies fulfilled our inclusion criteria from a total of 283 publications. The overall methodological quality of these studies was poor. The meta-analysis revealed that ginsenosides have a statistically significant positive effect on cognitive performance in experimental AD models. The stratified analysis revealed that ginsenoside Rg1 had the greatest effect on acquisition and retention memory in AD models. The effect size was significantly higher for both acquisition and retention memory in studies that used female animals compared with male animals. Conclusions We conclude that ginsenosides might reduce cognitive deficits in AD models. However, additional well-designed and well-reported animal studies are needed to inform further clinical investigations.
Collapse
|
37
|
Hemming K, Taljaard M. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach. J Clin Epidemiol 2015; 69:137-46. [PMID: 26344808 PMCID: PMC4687983 DOI: 10.1016/j.jclinepi.2015.08.015] [Citation(s) in RCA: 141] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2014] [Revised: 07/21/2015] [Accepted: 08/28/2015] [Indexed: 10/31/2022]
Abstract
OBJECTIVES To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. STUDY DESIGN AND SETTING We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. RESULTS For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is infeasible. CONCLUSION Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario.
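The idea of determining the required cluster size for a given number of clusters can be made concrete with a small sketch. The formula used below is the standard parallel-CRT rearrangement m = n(1 − ρ)/(c − nρ), where n is the unadjusted per-arm sample size and c the number of clusters per arm; it is a generic illustration under assumed inputs, not necessarily the paper's own formulas for the SW-CRT or CRT-BA.

```python
import math

def individually_randomized_n(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison of means (normal approximation)."""
    za = 1.959964   # z for two-sided alpha = 0.05
    zb = 0.841621   # z for 80% power
    return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

def cluster_size_for_clusters(n_ind, clusters_per_arm, icc):
    """Cluster size needed when the number of clusters per arm is fixed (parallel CRT)."""
    denom = clusters_per_arm - n_ind * icc
    if denom <= 0:
        raise ValueError("Too few clusters for this ICC: no finite cluster size works.")
    return math.ceil(n_ind * (1 - icc) / denom)

n_ind = individually_randomized_n(delta=0.5, sd=1.0)     # about 63 per arm, unadjusted
for icc in (0.01, 0.05):
    m = cluster_size_for_clusters(n_ind, clusters_per_arm=8, icc=icc)
    deff = 1 + (m - 1) * icc                              # resulting design effect
    print(f"ICC={icc}: cluster size {m}, design effect {deff:.2f}, total per arm {8 * m}")
```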
Collapse
Affiliation(s)
- Karla Hemming
- School of Health and Population Sciences, University of Birmingham, Birmingham B15 2TT, UK.
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, Ontario K1Y4E9, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Ontario, Canada
| |
Collapse
|
38
|
Rodríguez-Wong L, Pozos-Guillen A, Silva-Herzog D, Chavarría-Bolaños D. Efficacy of mepivacaine-tramadol combination on the success of inferior alveolar nerve blocks in patients with symptomatic irreversible pulpitis: a randomized clinical trial. Int Endod J 2015; 49:325-33. [DOI: 10.1111/iej.12463] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2015] [Accepted: 04/30/2015] [Indexed: 11/28/2022]
Affiliation(s)
- L. Rodríguez-Wong
- Endodontic Postgraduate Program; Faculty of Dentistry; San Luis Potosi University; San Luis Potosí México
| | - A. Pozos-Guillen
- Basic Science Laboratory; Faculty of Dentistry; San Luis Potosi University; San Luis Potosí México
| | - D. Silva-Herzog
- Endodontic Postgraduate Program; Faculty of Dentistry; San Luis Potosi University; San Luis Potosí México
| | - D. Chavarría-Bolaños
- Diagnostic and Surgical Sciences Department; Faculty of Dentistry; Costa Rica University; San Jose Costa Rica
| |
Collapse
|