51. Murray DM, Taljaard M, Turner EL, George SM. Essential Ingredients and Innovations in the Design and Analysis of Group-Randomized Trials. Annu Rev Public Health 2019; 41:1-19. [PMID: 31869281; DOI: 10.1146/annurev-publhealth-040119-094027]
Abstract
This article reviews the essential ingredients and innovations in the design and analysis of group-randomized trials. The methods literature for these trials has grown steadily since they were introduced to the biomedical research community in the late 1970s, and we summarize those developments. We review, in addition to the group-randomized trial, methods for two closely related designs, the individually randomized group treatment trial and the stepped-wedge group-randomized trial. After describing the essential ingredients for these designs, we review the most important developments in the evolution of their methods using a new bibliometric tool developed at the National Institutes of Health. We then discuss the questions to be considered when selecting from among these designs or selecting the traditional randomized controlled trial. We close with a review of current methods for the analysis of data from these designs, a case study to illustrate each design, and a brief summary.
Affiliation(s)
- David M Murray: Office of Disease Prevention, National Institutes of Health, North Bethesda, Maryland 20892, USA
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, The Ottawa Hospital, Civic Campus, Ottawa, Ontario K1Y 4E9, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario K1Y 4E9, Canada
- Elizabeth L Turner: Department of Biostatistics and Bioinformatics, and Duke Global Health Institute, Duke University, Durham, North Carolina 27710, USA
- Stephanie M George: Office of Disease Prevention, National Institutes of Health, North Bethesda, Maryland 20892, USA
52. Blanco N, Harris AD, Magder LS, Jernigan JA, Reddy SC, O’Hagan J, Hatfield KM, Pineles L, Perencevich E, O’Hara LM. Sample Size Estimates for Cluster-Randomized Trials in Hospital Infection Control and Antimicrobial Stewardship. JAMA Netw Open 2019; 2:e1912644. [PMID: 31584684; PMCID: PMC6784749; DOI: 10.1001/jamanetworkopen.2019.12644]
Abstract
IMPORTANCE An important step in designing, executing, and evaluating cluster-randomized trials (CRTs) is understanding the correlation and thus nonindependence that exists among individuals in a cluster. In hospital epidemiology, there is a shortage of CRTs that have published their intraclass correlation coefficient or coefficient of variation (CV), making prospective sample size calculations difficult for investigators. OBJECTIVES To estimate the number of hospitals needed to power parallel CRTs of interventions to reduce health care-associated infection outcomes and to demonstrate how different parameters such as CV and expected effect size are associated with the sample size estimates in practice. DESIGN, SETTING, AND PARTICIPANTS This longitudinal cohort study estimated parameters for sample size calculations using national rates developed by the Centers for Disease Control and Prevention for methicillin-resistant Staphylococcus aureus (MRSA) bacteremia, central-line-associated bloodstream infections (CLABSI), catheter-associated urinary tract infections (CAUTI), and Clostridium difficile infections (CDI) from 2016. For MRSA and vancomycin-resistant enterococci (VRE) acquisition, outcomes were estimated using data from 2012 from the Benefits of Universal Glove and Gown study. Data were collected from June 2017 through September 2018 and analyzed from September 2018 through January 2019. MAIN OUTCOMES AND MEASURES Calculated number of clusters needed for adequate power to detect an intervention effect using a 2-group parallel CRT. RESULTS To study an intervention with a 30% decrease in daily rates, 73 total clusters were needed (37 in the intervention group and 36 in the control group) for MRSA bacteremia, 82 for CAUTI, 60 for CLABSI, and 31 for CDI. If a 10% decrease in rates was expected, 768 clusters were needed for MRSA bacteremia, 875 for CAUTI, 631 for CLABSI, and 329 for CDI. For MRSA or VRE acquisition, 50 or 40 total clusters, respectively, were required to observe a 30% decrease, whereas 540 or 426 clusters, respectively, were required to detect a 10% decrease. CONCLUSIONS AND RELEVANCE This study suggests that large sample sizes are needed to appropriately power parallel CRTs targeting infection prevention outcomes. Sample sizes are most associated with expected effect size and CV of hospital rates.
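As background to the kind of calculation described above, the coefficient of variation (CV) enters the standard Hayes and Bennett formula for the number of clusters per arm when comparing incidence rates in an unmatched parallel CRT. The Python sketch below implements that textbook formula; the rates, person-time, and CV in the example are illustrative assumptions, not values taken from this study.

```python
from scipy.stats import norm

def clusters_per_arm(lam0, lam1, person_time, cv, alpha=0.05, power=0.8):
    """Hayes & Bennett formula for an unmatched parallel cluster randomized
    trial comparing incidence rates: clusters required per arm."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    between = cv**2 * (lam0**2 + lam1**2)        # between-cluster variation in true rates
    within = (lam0 + lam1) / person_time         # Poisson (within-cluster) variation
    return 1 + z**2 * (within + between) / (lam0 - lam1) ** 2

# Illustrative inputs only: baseline rate of 1 event per 1,000 patient-days,
# a 30% reduction, 10,000 patient-days of follow-up per hospital, CV = 0.3.
print(clusters_per_arm(lam0=0.001, lam1=0.0007, person_time=10_000, cv=0.3))
```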
Affiliation(s)
- Natalia Blanco: Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
- Anthony D. Harris: Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
- Laurence S. Magder: Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
- John A. Jernigan: Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia
- Sujan C. Reddy: Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia
- Justin O’Hagan: Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia
- Kelly M. Hatfield: Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia
- Lisa Pineles: Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
- Eli Perencevich: Department of Internal Medicine, Carver College of Medicine, University of Iowa, Iowa City
- Lyndsay M. O’Hara: Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore
53. Borhan S, Mallick R, Pillay M, Kathard H, Thabane L. Sensitivity of methods for analyzing continuous outcome from stratified cluster randomized trials - an empirical comparison study. Contemp Clin Trials Commun 2019; 15:100405. [PMID: 31338480; PMCID: PMC6627034; DOI: 10.1016/j.conctc.2019.100405]
Abstract
The assessment of the sensitivity of statistical methods has received little attention in cluster randomized trials (CRTs), especially for stratified CRTs when the outcome of interest is continuous. We empirically examined the sensitivity of five methods for analyzing the continuous outcome from a stratified CRT that investigated the efficacy of the Classroom Communication Resource (CCR), compared with usual care, in improving peer attitudes towards children who stutter among grade 7 students. Schools (the clusters) were divided into quintiles based on their socio-political resources and then stratified by quintile. Within each stratum, schools were randomized to the CCR and usual care groups. The primary outcome was the Stuttering Resource Outcomes Measure. Five methods, including the primary method, were used to examine the effect of CCR. The individual-level methods were: (i) linear regression; (ii) a mixed-effects model; and (iii) generalized estimating equations (GEE) with an exchangeable correlation structure (the primary method of analysis). The cluster-level methods were: (iv) cluster-level linear regression; and (v) meta-regression. These methods were also compared with and without adjustment for stratification. Ten schools were stratified by quintile and then randomized to the CCR (223 students) and usual care (231 students) groups. The direction of the estimated differences was the same for all methods except meta-regression. The 95% confidence intervals were narrower when adjusted for stratification. The overall conclusions from all methods were similar but differed slightly in the effect estimates and the widths of the confidence intervals. Trial registration: Clinicaltrials.gov, NCT03111524. Registered on 9 March 2017.
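The comparison above involves both individual-level and cluster-level models; as a concrete illustration of two of the individual-level approaches, the Python sketch below fits a random-intercept mixed-effects model and a GEE with an exchangeable working correlation to simulated stratified CRT data. The variable names and simulated values are placeholders, not the trial's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate a small stratified CRT: 10 schools (clusters) in 5 strata,
# randomized 1:1 to intervention (arm=1) or usual care (arm=0) within strata.
rng = np.random.default_rng(1)
rows = []
for school in range(10):
    quintile = school // 2          # stratum
    arm = school % 2                # one school per arm in each stratum
    u = rng.normal(0, 1)            # school-level random effect
    for _ in range(45):             # roughly 45 students per school
        rows.append({"school": school, "quintile": quintile, "arm": arm,
                     "score": 50 + 2 * arm + u + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# (ii) Mixed-effects model: random intercept for school, adjusted for stratum.
mixed = smf.mixedlm("score ~ arm + quintile", data=df, groups=df["school"]).fit()

# (iii) GEE with an exchangeable working correlation structure.
gee = smf.gee("score ~ arm + quintile", groups="school", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(mixed.params["arm"], gee.params["arm"])
```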
Affiliation(s)
- Sayem Borhan: Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Biostatistics Unit, Research Institute of St Joseph's Healthcare, Hamilton, ON, Canada; Department of Family Medicine, McMaster University, Hamilton, ON, Canada
- Rizwana Mallick: University of Cape Town, Rondebosch, Cape Town, South Africa
- Harsha Kathard: University of Cape Town, Rondebosch, Cape Town, South Africa
- Lehana Thabane: Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Biostatistics Unit, Research Institute of St Joseph's Healthcare, Hamilton, ON, Canada; Department of Pediatrics and Anesthesia, McMaster University, Hamilton, ON, Canada
54. Davis K, Minckas N, Bond V, Clark CJ, Colbourn T, Drabble SJ, Hesketh T, Hill Z, Morrison J, Mweemba O, Osrin D, Prost A, Seeley J, Shahmanesh M, Spindler EJ, Stern E, Turner KM, Mannell J. Beyond interviews and focus groups: a framework for integrating innovative qualitative methods into randomised controlled trials of complex public health interventions. Trials 2019; 20:329. [PMID: 31171041; PMCID: PMC6555705; DOI: 10.1186/s13063-019-3439-8]
Abstract
Background Randomised controlled trials (RCTs) are widely used for establishing evidence of the effectiveness of interventions, yet public health interventions are often complex, posing specific challenges for RCTs. Although there is increasing recognition that qualitative methods can and should be integrated into RCTs, few frameworks and practical guidance highlight which qualitative methods should be integrated and for what purposes. As a result, qualitative methods are often poorly or haphazardly integrated into existing trials, and researchers rely heavily on interviews and focus group discussions. To improve current practice, we propose a framework for innovative qualitative research methods that can help address the challenges of RCTs for complex public health interventions. Methods We used a stepped approach to develop a practical framework for researchers. This consisted of (1) a systematic review of the innovative qualitative methods mentioned in the health literature, (2) in-depth interviews with 23 academics from different methodological backgrounds working on RCTs of public health interventions in 11 different countries, and (3) a framework development and group consensus-building process. Results The findings are presented in accordance with the CONSORT (Consolidated Standards of Reporting Trials) Statement categories for ease of use. We identify the main challenges of RCTs for public health interventions alongside each of the CONSORT categories, and potential innovative qualitative methods that overcome each challenge are listed as part of a Framework for the Integration of Innovative Qualitative Methods into RCTs of Complex Health Interventions. Innovative qualitative methods described in the interviews include rapid ethnographic appraisals, document analysis, diary methods, interactive voice responses and short message service, community mapping, spiral walks, pair interviews and visual participatory analysis. Conclusions The findings of this study point to the usefulness of observational and participatory methods for trials of complex public health interventions, offering a novel contribution to the broader literature about the need for mixed methods approaches. Integrating a diverse toolkit of qualitative methods can enable appropriate adjustments to the intervention or process (or both) of data collection during RCTs, which in turn can create more sustainable and effective interventions. However, such integration will require a cultural shift towards the adoption of method-neutral research approaches, transdisciplinary collaborations, and publishing regimes.
Affiliation(s)
- Katy Davis: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Nicole Minckas: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Virginia Bond: Department of Global Health and Development, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK; Zambart House, School of Public Health, University of Zambia, Box 50697, Lusaka, 10101, Zambia
- Cari Jo Clark: Rollins School of Public Health, Emory University, 1518 Clifton Road NE, Claudia Nance Rollins Building, 7033, Atlanta, GA, 30322, USA
- Tim Colbourn: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Sarah J Drabble: School of Health and Related Research, University of Sheffield, 30 Regent St, Sheffield, S1 4DA, UK
- Therese Hesketh: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Zelee Hill: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Joanna Morrison: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Oliver Mweemba: Department of Health Promotion and Education, School of Public Health, Ridgeway Campus, University of Zambia, Box 50110, Lusaka, 10101, Zambia
- David Osrin: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Audrey Prost: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Janet Seeley: Department of Global Health and Development, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
- Maryam Shahmanesh: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
- Esther J Spindler: Heilbrunn Department of Population and Family Health, Columbia University Mailman School of Public Health, 722 W 168th St, New York, 10032, NY, USA
- Erin Stern: Department of Global Health and Development, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
- Katrina M Turner: Population Health Sciences, University of Bristol, 39 Whatley Road, Bristol, BS8 2PS, UK; The National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care West (NIHR CLAHRC West), University Hospitals Bristol NHS Foundation Trust, Bristol, UK
- Jenevieve Mannell: Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK
55. Consolidated Standards of Reporting Trials (CONSORT) extensions covered most types of randomized controlled trials, but the potential workload for authors was high. J Clin Epidemiol 2019; 113:168-175. [PMID: 31153976; DOI: 10.1016/j.jclinepi.2019.05.030]
Abstract
OBJECTIVES Our aim was to determine the coverage of randomized controlled trials (RCTs) by the Consolidated Standards of Reporting Trials (CONSORT) Statement and its extensions and to evaluate the potential workload for authors to adhere to the guidelines. STUDY DESIGN AND SETTING We identified CONSORT extensions from the CONSORT Web site. We randomly selected a sample of 1,000 RCTs indexed in PubMed in 2016 and recorded whether they were covered by CONSORT extensions for specific study designs or interventions. We evaluated the potential workload for authors by counting the number of documents and pages they would have to consult to gain a full understanding of the guidelines. RESULTS We identified 14 extensions. Only one extension was updated concurrently with the main CONSORT in 2010, three were updated after 2-7 years, and three are still based on CONSORT 2001. Overall, 81% of RCTs were covered by relevant CONSORT guidelines; missing extensions for specific study designs were under development at the time of the search (November 2018). However, 6 of 10 extensions covered <2% of the trials. A median [Q1-Q3] of 4 [4-5] documents and 67 [57-78] pages would need to be consulted. CONCLUSION Most RCTs indexed in PubMed are covered by the CONSORT Statement and extensions, but the potential workload for authors could be high.
56. Hemming K, Carroll K, Thompson J, Forbes A, Taljaard M. Quality of stepped-wedge trial reporting can be reliably assessed using an updated CONSORT: crowd-sourcing systematic review. J Clin Epidemiol 2019; 107:77-88. [PMID: 30500405; DOI: 10.1016/j.jclinepi.2018.11.017]
Abstract
OBJECTIVES The Consolidated Standards of Reporting Trials extension for the stepped-wedge cluster randomized trial (SW-CRT) is a recently published reporting guideline for SW-CRTs. We assess the quality of reporting of a recent sample of SW-CRTs. STUDY DESIGN AND SETTING Quality of reporting was assessed according to the 26 items in the new guideline using a novel crowd-sourcing methodology conducted independently and in duplicate, with random assignment, by 50 reviewers. We assessed the reliability of the quality assessments, proposing this as a novel way to assess the robustness of items in reporting guidelines. RESULTS Several items were well reported. Some items were very poorly reported, including several items that have unique requirements for the SW-CRT, such as the rationale for use of the design, description of the design, identification and recruitment of participants within clusters, and concealment of cluster allocation (not reported in more than 50% of the reports). Agreement across items was moderate (median percentage agreement was 76% [IQR 64 to 86]). Agreement was low for several items, including, for example, the description of the trial design and why the trial ended or stopped. CONCLUSIONS When reporting SW-CRTs, authors should pay particular attention to ensuring clear reporting of the exact format of the design with justification, as well as how clusters and individuals were identified for inclusion in the study, and whether this was done before or after randomization of the clusters, which are crucial for risk of bias assessments. Some items, such as why the trial ended, might either not be relevant to SW-CRTs or might be unclearly described in the statement.
Affiliation(s)
- Karla Hemming: Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Kelly Carroll: Clinical Epidemiology Program, Ottawa Hospital Research Institute, 501 Smyth Road, Ottawa, Ontario, Canada
- Jennifer Thompson: Tropical Epidemiology Group, London School of Hygiene and Tropical Medicine, London, UK
- Andrew Forbes: Biostatistics, Monash University, Melbourne, Australia
- Monica Taljaard: School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, Canada
57. Hemming K, Taljaard M, McKenzie JE, Hooper R, Copas A, Thompson JA, Dixon-Woods M, Aldcroft A, Doussau A, Grayling M, Kristunas C, Goldstein CE, Campbell MK, Girling A, Eldridge S, Campbell MJ, Lilford RJ, Weijer C, Forbes AB, Grimshaw JM. Reporting of stepped wedge cluster randomised trials: extension of the CONSORT 2010 statement with explanation and elaboration. BMJ 2018; 363:k1614. [PMID: 30413417; PMCID: PMC6225589; DOI: 10.1136/bmj.k1614]
Affiliation(s)
- Karla Hemming: Institute of Applied Health Research, University of Birmingham, Birmingham B15 2TT, UK
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Joanne E McKenzie: School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
- Richard Hooper: Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Andrew Copas: London Hub for Trials Methodology Research, MRC Clinical Trials Unit at University College London, London, UK
- Jennifer A Thompson: London Hub for Trials Methodology Research, MRC Clinical Trials Unit at University College London, London, UK; Department for Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
- Mary Dixon-Woods: The Healthcare Improvement Studies Institute, University of Cambridge, Cambridge Biomedical Campus, Cambridge, UK
- Adelaide Doussau: Biomedical Ethics Unit, McGill University School of Medicine, Montreal, QC, Canada
- Cory E Goldstein: Rotman Institute of Philosophy, Western University, London, ON, Canada
- Alan Girling: Institute of Applied Health Research, University of Birmingham, Birmingham B15 2TT, UK
- Sandra Eldridge: Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Charles Weijer: Rotman Institute of Philosophy, Western University, London, ON, Canada
- Andrew B Forbes: School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
- Jeremy M Grimshaw: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Department of Medicine, University of Ottawa, Ottawa, ON, Canada
58. Grischott T. The Shiny Balancer - software and imbalance criteria for optimally balanced treatment allocation in small RCTs and cRCTs. BMC Med Res Methodol 2018; 18:108. [PMID: 30326827; PMCID: PMC6192202; DOI: 10.1186/s12874-018-0551-5]
Abstract
Background In randomised controlled trials with only few randomisation units, treatment allocation may be challenging if balanced distributions of many covariates or baseline outcome measures are desired across all treatment groups. Both traditional approaches, stratified randomisation and allocation by minimisation, have their own limitations. A third method for achieving balance consists of randomly choosing from a preselected list of sufficiently balanced allocations. As with minimisation, this method requires that heterogeneity between treatment groups is measured by specified imbalance metrics. Although certain imbalance measures are more commonly used than others, to the author's knowledge there is no generally accepted “gold standard”, neither for categorical nor, even less so, for continuous variables. Methods An intuitive and easily accessible web-based software tool was developed which allows for balancing multiple variables of different types using various imbalance metrics. Different metrics were compared in a simulation study. Results Using simulated data, it could be shown that for categorical variables, χ2-based imbalance measures seem to be viable alternatives to the established “quadratic imbalance” metric. For continuous variables, using the area between the empirical cumulative distribution functions or the largest difference in the three pairs of quartiles is recommended to measure imbalance. Another imbalance metric suggested in the literature for continuous variables, the (symmetrised) Kullback-Leibler divergence, should be used with caution. Conclusion The Shiny Balancer offers the possibility to visually explore the balancing properties of several well-established or newly suggested imbalance metrics, and its use is particularly advocated in clinical studies with few randomisation units, as is typically the case in cluster randomised trials.
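The Shiny Balancer itself is an interactive web tool; purely to make the two recommended metrics concrete, the Python sketch below computes a χ²-type imbalance for a categorical covariate and the area between empirical cumulative distribution functions for a continuous covariate, for one hypothetical allocation of eight clusters. It illustrates the general idea only and is not the tool's implementation.

```python
import numpy as np

def chi2_imbalance(categories, group):
    """Chi-squared-type imbalance of a categorical covariate across two groups."""
    cats = np.unique(categories)
    counts = np.array([[np.sum((categories == c) & (group == g)) for c in cats]
                       for g in (0, 1)], dtype=float)
    expected = counts.sum(axis=0) * counts.sum(axis=1, keepdims=True) / counts.sum()
    return float(np.sum((counts - expected) ** 2 / expected))

def ecdf_area_imbalance(x, group):
    """Area between the empirical CDFs of a continuous covariate in two groups."""
    grid = np.sort(np.unique(x))
    f0 = np.searchsorted(np.sort(x[group == 0]), grid, side="right") / np.sum(group == 0)
    f1 = np.searchsorted(np.sort(x[group == 1]), grid, side="right") / np.sum(group == 1)
    # ECDFs are step functions, so the area is a sum of rectangles between
    # consecutive distinct data values.
    return float(np.sum(np.abs(f0[:-1] - f1[:-1]) * np.diff(grid)))

# Hypothetical allocation of 8 clusters to two arms.
group = np.array([0, 1, 0, 1, 0, 1, 0, 1])
size_class = np.array(["small", "small", "large", "large", "small", "large", "small", "large"])
baseline_mean = np.array([12.1, 14.0, 9.5, 10.2, 13.3, 11.8, 8.9, 12.5])

print(chi2_imbalance(size_class, group), ecdf_area_imbalance(baseline_mean, group))
```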
Affiliation(s)
- Thomas Grischott: Institute of Primary Care, University and University Hospital of Zurich, Pestalozzistrasse 24, CH-8091, Zurich, Switzerland
59. Westgate PM. A readily available improvement over method of moments for intra-cluster correlation estimation in the context of cluster randomized trials and fitting a GEE-type marginal model for binary outcomes. Clin Trials 2018; 16:41-51. [PMID: 30295512; DOI: 10.1177/1740774518803635]
Abstract
BACKGROUND/AIMS Cluster randomized trials are popular in health-related research due to the need or desire to randomize clusters of subjects to different trial arms as opposed to randomizing each subject individually. As outcomes from subjects within the same cluster tend to be more alike than outcomes from subjects within other clusters, an exchangeable correlation arises that is measured via the intra-cluster correlation coefficient. Intra-cluster correlation coefficient estimation is especially important due to the increasing awareness of the need to publish such values from studies in order to help guide the design of future cluster randomized trials. Therefore, numerous methods have been proposed to accurately estimate the intra-cluster correlation coefficient, with much attention given to binary outcomes. As marginal models are often of interest, we focus on intra-cluster correlation coefficient estimation in the context of fitting such a model with binary outcomes using generalized estimating equations. Traditionally, intra-cluster correlation coefficient estimation with generalized estimating equations has been based on the method of moments, although such estimators can be negatively biased. Furthermore, alternative estimators that work well, such as the analysis of variance estimator, are not as readily applicable in the context of practical data analyses with generalized estimating equations. Therefore, in this article we assess, in terms of bias, the readily available residual pseudo-likelihood approach to intra-cluster correlation coefficient estimation with the GLIMMIX procedure of SAS (SAS Institute, Cary, NC). Furthermore, we study a possible corresponding approach to confidence interval construction for the intra-cluster correlation coefficient. METHODS We utilize a simulation study and application example to assess bias in intra-cluster correlation coefficient estimates obtained from GLIMMIX using residual pseudo-likelihood. This estimator is contrasted with method of moments and analysis of variance estimators which are standards of comparison. The approach to confidence interval construction is assessed by examining coverage probabilities. RESULTS Overall, the residual pseudo-likelihood estimator performs very well. It has considerably less bias than moment estimators, which are its competitor for general generalized estimating equation-based analyses, and therefore, it is a major improvement in practice. Furthermore, it works almost as well as analysis of variance estimators when they are applicable. Confidence intervals have near-nominal coverage when the intra-cluster correlation coefficient estimate has negligible bias. CONCLUSION Our results show that the residual pseudo-likelihood estimator is a good option for intra-cluster correlation coefficient estimation when conducting a generalized estimating equation-based analysis of binary outcome data arising from cluster randomized trials. The estimator is practical in that it is simply a result from fitting a marginal model with GLIMMIX, and a confidence interval can be easily obtained. An additional advantage is that, unlike most other options for performing generalized estimating equation-based analyses, GLIMMIX provides analysts the option to utilize small-sample adjustments that ensure valid inference.
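The estimator studied here is the residual pseudo-likelihood option in SAS PROC GLIMMIX, which cannot be reproduced in a few lines. As a point of reference, the analysis-of-variance estimator used as one of the article's comparison standards can be computed directly; the Python sketch below does so on made-up data and illustrates only that comparator, not the pseudo-likelihood method itself.

```python
import numpy as np

def anova_icc(y, cluster):
    """One-way ANOVA estimator of the intra-cluster correlation coefficient,
    applied here to binary outcomes on the proportion scale."""
    y, cluster = np.asarray(y, dtype=float), np.asarray(cluster)
    ids, sizes = np.unique(cluster, return_counts=True)
    k, n = len(ids), len(y)
    means = np.array([y[cluster == c].mean() for c in ids])
    msb = np.sum(sizes * (means - y.mean()) ** 2) / (k - 1)          # between-cluster MS
    ssw = sum(float(np.sum((y[cluster == c] - m) ** 2)) for c, m in zip(ids, means))
    msw = ssw / (n - k)                                              # within-cluster MS
    n0 = (n - np.sum(sizes**2) / n) / (k - 1)                        # adjusted cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Made-up example: 20 clusters of 30 subjects with cluster-varying event risks.
rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(20), 30)
risk = 0.2 + rng.normal(0.0, 0.05, size=20).clip(-0.15, 0.15)
y = rng.binomial(1, risk[cluster])
print(anova_icc(y, cluster))
```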
Affiliation(s)
- Philip M Westgate: Department of Biostatistics, College of Public Health, University of Kentucky, Lexington, KY, USA
60. Taljaard M, Weijer C, Grimshaw JM, Ali A, Brehaut JC, Campbell MK, Carroll K, Edwards S, Eldridge S, Forrest CB, Giraudeau B, Goldstein CE, Graham ID, Hemming K, Hey SP, Horn AR, Jairath V, Klassen TP, London AJ, Marlin S, Marshall JC, McIntyre L, McKenzie JE, Nicholls SG, Alison Paprica P, Zwarenstein M, Fergusson DA. Developing a framework for the ethical design and conduct of pragmatic trials in healthcare: a mixed methods research protocol. Trials 2018; 19:525. [PMID: 30261933; PMCID: PMC6161426; DOI: 10.1186/s13063-018-2895-x]
Abstract
Background There is a widely recognized need for more pragmatic trials that evaluate interventions in real-world settings to inform decision-making by patients, providers, and health system leaders. Increasing availability of electronic health records, centralized research ethics review, and novel trial designs, combined with support and resources from governments worldwide for patient-centered research, have created an unprecedented opportunity to advance the conduct of pragmatic trials, which can ultimately improve patient health and health system outcomes. Such trials raise ethical issues that have not yet been fully addressed, with existing literature concentrating on regulations in specific jurisdictions rather than arguments grounded in ethical principles. Proposed solutions (e.g. using different regulations in “learning healthcare systems”) are speculative with no guarantee of improvement over existing oversight procedures. Most importantly, the literature does not reflect a broad vision of protecting the core liberty and welfare interests of research participants. Novel ethical guidance is required. We have assembled a team of ethicists, trialists, methodologists, social scientists, knowledge users, and community members with the goal of developing guidance for the ethical design and conduct of pragmatic trials. Methods Our project will combine empirical and conceptual work and a consensus development process. Empirical work will: (1) identify a comprehensive list of ethical issues through interviews with a small group of key informants (e.g. trialists, ethicists, chairs of research ethics committees); (2) document current practices by reviewing a random sample of pragmatic trials and surveying authors; (3) elicit views of chairs of research ethics committees through surveys in Canada, UK, USA, France, and Australia; and (4) elicit views and experiences of community members and health system leaders through focus groups and surveys. Conceptual work will consist of an ethical analysis of identified issues and the development of new ethical solutions, outlining principles, policy options, and rationales. The consensus development process will involve an independent expert panel to develop a final guidance document. Discussion Planned output includes manuscripts, educational materials, and tailored guidance documents to inform and support researchers, research ethics committees, journal editors, regulators, and funders in the ethical design and conduct of pragmatic trials.
Affiliation(s)
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, Civic Campus, 1053 Carling Avenue, Ottawa, ON, K1Y 4E9, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Charles Weijer: Rotman Institute of Philosophy, Western University, 1151 Richmond Street, London, ON, N6A 5B7, Canada
- Jeremy M Grimshaw: School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada; Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Adnan Ali: Patient and Family Advisory Council, The Ottawa Hospital, Ottawa, ON, Canada
- Jamie C Brehaut: School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Marion K Campbell: Health Services Research Unit, University of Aberdeen, Health Sciences Building, Foresterhill, Aberdeen, AB25 2ZD, UK
- Kelly Carroll: Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Sarah Edwards: Department of Science and Technology Studies, University College London, 22 Gordon Square, King's Cross, London, WC1H 0AW, UK
- Sandra Eldridge: Centre for Primary Care and Public Health, Queen Mary University of London, 58 Turner Street, London, E1 2AB, UK
- Christopher B Forrest: Applied Clinical Research Center, Children's Hospital of Philadelphia, 2716 South Street, Philadelphia, PA, 19146, USA
- Bruno Giraudeau: Université de Tours, Université de Nantes, INSERM, SPHERE U1246, Tours, France; INSERM CIC1415, CHRU de Tours, Tours, France
- Cory E Goldstein: Rotman Institute of Philosophy, Western University, 1151 Richmond Street, London, ON, N6A 5B7, Canada
- Ian D Graham: School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Karla Hemming: Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK
- Spencer Phillips Hey: Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women's Hospital, 1620 Tremont Street, Boston, MA, 02120, USA; Center for Bioethics, Harvard Medical School, Boston, MA, USA
- Austin R Horn: Rotman Institute of Philosophy, Western University, 1151 Richmond Street, London, ON, N6A 5B7, Canada
- Vipul Jairath: Division of Gastroenterology, Department of Medicine, Western University, London, ON, Canada; Division of Epidemiology and Biostatistics, Western University, University Hospital, 339 Windermere Road, London, ON, N6A 5A5, Canada
- Terry P Klassen: Children's Hospital Research Institute of Manitoba, 513-715 McDermot Avenue, Winnipeg, MB, R3E 3P, Canada
- Alex John London: Department of Philosophy and Center for Ethics and Policy, Carnegie Mellon University, 150A Baker Hall, Pittsburgh, PA, 15213-3890, USA
- Susan Marlin: Clinical Trials Ontario, 661 University Avenue, MaRS Centre, West Tower, Toronto, ON, M5G 1M1, Canada
- John C Marshall: St. Michael's Hospital, Department of Surgery, University of Toronto, 30 Bond Street, Toronto, ON, M5B 1W8, Canada
- Lauralyn McIntyre: School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, General Campus, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada; Department of Medicine (Division of Critical Care), University of Ottawa, Ottawa, ON, Canada
- Joanne E McKenzie: School of Public Health and Preventive Medicine, Monash University, 553 St Kilda Road, Melbourne, VIC, 3004, Australia
- Stuart G Nicholls: Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, Civic Campus, 1053 Carling Avenue, Ottawa, ON, K1Y 4E9, Canada
- P Alison Paprica: Institute of Health Policy, Management and Evaluation, University of Toronto, Health Sciences Building, 155 College Street, Toronto, ON, M5T 3M6, Canada
- Merrick Zwarenstein: Centre for Studies in Family Medicine, Department of Family Medicine, Schulich School of Medicine & Dentistry, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Dean A Fergusson: Clinical Epidemiology Program, Ottawa Hospital Research Institute (OHRI), The Ottawa Hospital, Civic Campus, 1053 Carling Avenue, Ottawa, ON, K1Y 4E9, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Department of Medicine, University of Ottawa, Ottawa, ON, Canada
61.
Affiliation(s)
- Lars W Andersen: Research Center for Emergency Medicine, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Asger Granfeldt: Department of Intensive Care Medicine, Aarhus University Hospital, Aarhus, Denmark
62. Pérez MC, Minoyan N, Ridde V, Sylvestre MP, Johri M. Comparison of registered and published intervention fidelity assessment in cluster randomised trials of public health interventions in low- and middle-income countries: systematic review. Trials 2018; 19:410. [PMID: 30064484; PMCID: PMC6069979; DOI: 10.1186/s13063-018-2796-z]
Abstract
BACKGROUND Cluster randomised trials (CRTs) are a key instrument to evaluate public health interventions. Fidelity assessment examines study processes to gauge whether an intervention was delivered as initially planned. Evaluation of implementation fidelity (IF) is required to establish whether the measured effects of a trial are due to the intervention itself and may be particularly important for CRTs of complex interventions conducted in low- and middle-income countries (LMICs). However, current CRT reporting guidelines offer no guidance on IF assessment. The objective of this review was to study current practices concerning the assessment of IF in CRTs of public health interventions in LMICs. METHODS CRTs of public health interventions in LMICs that planned or reported IF assessment in either the trial protocol or the main trial report were included. The MEDLINE/PubMed, CINAHL and EMBASE databases were queried from January 2012 to May 2016. To ensure availability of a study protocol, CRTs reporting a registration number in the abstract were included. Relevant data were extracted from each study protocol and trial report by two researchers using a predefined screening sheet. Risk of bias for individual studies was assessed. RESULTS We identified 90 CRTs of public health interventions in LMICs with a study protocol in a publicly available trial registry published from January 2012 to May 2016. Among these 90 studies, 25 (28%) did not plan or report assessing IF; the remaining 65 studies (72%) addressed at least one IF dimension. IF assessment was planned in 40% (36/90) of trial protocols and reported in 71.1% (64/90) of trial reports. The proportion of overall agreement between the trial protocol and trial report concerning occurrence of IF assessment was 66.7% (60/90). Most studies had low to moderate risk of bias. CONCLUSIONS IF assessment is not currently a systematic practice in CRTs of public health interventions carried out in LMICs. In the absence of IF assessment, it may be difficult to determine if CRT results are due to the intervention design, to its implementation, or to unknown or external factors that may influence results. CRT reporting guidelines should promote IF assessment. TRIAL REGISTRATION Protocol published and available at: https://doi.org/10.1186/s13643-016-0351-0.
Affiliation(s)
- Myriam Cielo Pérez: Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900, rue Saint-Denis, Pavillon R, Tour Saint-Antoine Porte S03.414, Montréal, Québec, H2X 0A9, Canada; Département de médecine sociale et préventive, École de santé publique (ESPUM), Université de Montréal, 7101, avenue du Parc, 3e étage, Montréal, Québec, H3N 1X9, Canada
- Nanor Minoyan: Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900, rue Saint-Denis, Pavillon R, Tour Saint-Antoine Porte S03.414, Montréal, Québec, H2X 0A9, Canada; Département de médecine sociale et préventive, École de santé publique (ESPUM), Université de Montréal, 7101, avenue du Parc, 3e étage, Montréal, Québec, H3N 1X9, Canada
- Valéry Ridde: Institut de Recherche en Santé Publique Université de Montréal (IRSPUM), Pavillon 7101 Avenue du Parc, P.O. Box 6128, Centre-ville Station, Montréal, Québec, H3C 3J7, Canada; Institut de Recherche Pour le Développement (IRD), Le Sextant 44, bd de Dunkerque, CS 90009 13572, Cedex 02, Marseille, France
- Marie-Pierre Sylvestre: Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900, rue Saint-Denis, Pavillon R, Tour Saint-Antoine Porte S03.414, Montréal, Québec, H2X 0A9, Canada; Département de médecine sociale et préventive, École de santé publique (ESPUM), Université de Montréal, 7101, avenue du Parc, 3e étage, Montréal, Québec, H3N 1X9, Canada
- Mira Johri: Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900, rue Saint-Denis, Pavillon R, Tour Saint-Antoine Porte S03.414, Montréal, Québec, H2X 0A9, Canada; Département de gestion, d'évaluation, et de politique de santé, École de santé publique, Université de Montréal, 7101, avenue du Parc, 3e étage, Montréal, Québec, H3N 1X9, Canada
63. Murray DM, Pals SL, George SM, Kuzmichev A, Lai GY, Lee JA, Myles RL, Nelson SM. Design and analysis of group-randomized trials in cancer: A review of current practices. Prev Med 2018; 111:241-247. [PMID: 29551717; PMCID: PMC5930119; DOI: 10.1016/j.ypmed.2018.03.010]
Abstract
The purpose of this paper is to summarize current practices for the design and analysis of group-randomized trials involving cancer-related risk factors or outcomes and to offer recommendations to improve future trials. We searched for group-randomized trials involving cancer-related risk factors or outcomes that were published or online in peer-reviewed journals in 2011-15. During 2016-17, in Bethesda MD, we reviewed 123 articles from 76 journals to characterize their design and their methods for sample size estimation and data analysis. Only 66 (53.7%) of the articles reported appropriate methods for sample size estimation. Only 63 (51.2%) reported exclusively appropriate methods for analysis. These findings suggest that many investigators do not adequately attend to the methodological challenges inherent in group-randomized trials. These practices can lead to underpowered studies, to an inflated type 1 error rate, and to inferences that mislead readers. Investigators should work with biostatisticians or other methodologists familiar with these issues. Funders and editors should ensure careful methodological review of applications and manuscripts. Reviewers should ensure that studies are properly planned and analyzed. These steps are needed to improve the rigor and reproducibility of group-randomized trials. The Office of Disease Prevention (ODP) at the National Institutes of Health (NIH) has taken several steps to address these issues. ODP offers an online course on the design and analysis of group-randomized trials. ODP is working to increase the number of methodologists who serve on grant review panels. ODP has developed standard language for the Application Guide and the Review Criteria to draw investigators' attention to these issues. Finally, ODP has created a new Research Methods Resources website to help investigators, reviewers, and NIH staff better understand these issues.
Affiliation(s)
- David M Murray: Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States
- Sherri L Pals: Health Informatics, Data Management, and Statistics Branch, Division of Global HIV and Tuberculosis, Center for Global Health, US Centers for Disease Control and Prevention, Atlanta, GA, United States
- Stephanie M George: Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States
- Andrey Kuzmichev: Office of the Surgeon General, Office of the Assistant Secretary for Health, Department of Health and Human Services, United States
- Gabriel Y Lai: Environmental Epidemiology Branch, Division of Cancer Control and Population Sciences, National Cancer Institute, National Institutes of Health, Rockville, MD, United States
- Jocelyn A Lee: Project Genomics Evidence Neoplasia Information Exchange (GENIE), Executive Office, American Association for Cancer Research, Philadelphia, PA, United States
- Ranell L Myles: Office of Disease Prevention, Division of Program Coordination Planning and Strategic Initiatives, Office of the Director, National Institutes of Health, Bethesda, MD, United States
- Shakira M Nelson: Scientific Programs, American Association for Cancer Research, Philadelphia, PA, United States
64.
Abstract
BACKGROUND Treatment non-adherence in randomised trials refers to situations where some participants do not receive their allocated treatment as intended. For cluster randomised trials, where the unit of randomisation is a group of participants, non-adherence may occur at the cluster or individual level. When non-adherence occurs, randomisation no longer guarantees that the relationship between treatment receipt and outcome is unconfounded, and the power to detect the treatment effects in intention-to-treat analysis may be reduced. Thus, recording adherence and estimating the causal treatment effect adequately are of interest for clinical trials. OBJECTIVES To assess the extent of reporting of non-adherence issues in published cluster trials and to establish which methods are currently being used for addressing non-adherence, if any, and whether clustering is accounted for in these. METHODS We systematically reviewed 132 cluster trials published in English in 2011 previously identified through a search in PubMed. RESULTS One hundred and twenty-three cluster trials were included in this systematic review. Non-adherence was reported in 56 cluster trials. Among these, 19 reported a treatment efficacy estimate: per protocol in 15 and as-treated in 4. No study discussed the assumptions made by these methods, their plausibility or the sensitivity of the results to deviations from these assumptions. LIMITATIONS The year of publication of the cluster trials included in this review (2011) could be considered a limitation of this study; however, no new guidelines regarding the reporting and the handling of non-adherence for cluster trials have been published since. In addition, a single reviewer undertook the data extraction. To mitigate this, a second reviewer conducted a validation of the extraction process on 15 randomly selected reports. Agreement was satisfactory (93%). CONCLUSION Despite the recommendations of the Consolidated Standards of Reporting Trials statement extension to cluster randomised trials, treatment adherence is under-reported. Among the trials providing adherence information, there was substantial variation in how adherence was defined, handled and reported. Researchers should discuss the assumptions required for the results to be interpreted causally and whether these are scientifically plausible in their studies. Sensitivity analyses to study the robustness of the results to departures from these assumptions should be performed.
Affiliation(s)
- Schadrac C Agbla: Department of Medical Statistics, London School of Hygiene and Tropical Medicine (LSHTM), London, UK
- Karla DiazOrdaz: Department of Medical Statistics, London School of Hygiene and Tropical Medicine (LSHTM), London, UK
65. Gallis JA, Li F, Yu H, Turner EL. cvcrand and cptest: Commands for efficient design and analysis of cluster randomized trials using constrained randomization and permutation tests. Stata J 2018; 18:357-378. [PMID: 34413708; PMCID: PMC8372194; DOI: 10.1177/1536867x1801800204]
Abstract
Cluster randomized trials (CRTs), where clusters (for example, schools or clinics) are randomized to comparison arms but measurements are taken on individuals, are commonly used to evaluate interventions in public health, education, and the social sciences. Because CRTs typically involve a small number of clusters (for example, fewer than 20), simple randomization frequently leads to baseline imbalance of cluster characteristics across study arms, threatening the internal validity of the trial. In CRTs with a small number of clusters, classic approaches to balancing baseline characteristics-such as matching and stratification-have several drawbacks, especially when the number of baseline characteristics the researcher desires to balance is large (Ivers et al., 2012, Trials 13: 120). An alternative design approach is covariate-constrained randomization, whereby a randomization scheme is randomly selected from a subset of all possible randomization schemes based on the value of a balancing criterion (Raab and Butcher, 2001, Statistics in Medicine 20: 351-365). Subsequently, a clustered permutation test can be used in the analysis, which provides increased power under constrained randomization compared with simple randomization (Li et al., 2016, Statistics in Medicine 35: 1565-1579). In this article, we describe covariate-constrained randomization and the permutation test for the design and analysis of CRTs and provide an example to demonstrate the use of our new commands cvcrand and cptest to implement constrained randomization and the permutation test.
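cvcrand and cptest are Stata commands; to make the underlying logic concrete, the Python sketch below walks through the same two ideas on hypothetical data: pick the trial allocation at random from the best-balanced subset of candidate schemes, then analyse with a cluster-level permutation test restricted to that constrained set. The balance score is a simplified stand-in for the Raab-Butcher criterion, and nothing below reproduces the commands' actual syntax or defaults.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# Hypothetical cluster-level data: 10 clinics with two baseline covariates.
x = rng.normal(size=(10, 2))

def balance_score(treat_idx):
    """Sum of squared differences in covariate means between arms."""
    treated = np.zeros(len(x), dtype=bool)
    treated[list(treat_idx)] = True
    return float(np.sum((x[treated].mean(axis=0) - x[~treated].mean(axis=0)) ** 2))

# Enumerate all 5-versus-5 allocations, keep the best-balanced 10%,
# and randomly pick the scheme actually used for the trial.
schemes = list(combinations(range(10), 5))
scores = np.array([balance_score(s) for s in schemes])
constrained = [schemes[i] for i in np.argsort(scores)[: max(1, len(schemes) // 10)]]
chosen = constrained[rng.integers(len(constrained))]

# Cluster-level permutation test: compare the observed difference in
# cluster-level mean outcomes against its distribution over the constrained set.
y = rng.normal(size=10)          # hypothetical cluster-level mean outcomes
def diff(scheme):
    t = np.zeros(10, dtype=bool)
    t[list(scheme)] = True
    return y[t].mean() - y[~t].mean()

observed = diff(chosen)
p_value = np.mean([abs(diff(s)) >= abs(observed) for s in constrained])
print(chosen, observed, p_value)
```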
Affiliation(s)
- John A Gallis: Duke University, Department of Biostatistics and Bioinformatics, Duke Global Health Institute, Durham, NC
- Fan Li: Duke University, Department of Biostatistics and Bioinformatics, Durham, NC
- Hengshi Yu: University of Michigan, Department of Biostatistics, Ann Arbor, MI
- Elizabeth L Turner: Duke University, Department of Biostatistics and Bioinformatics, Duke Global Health Institute, Durham, NC
66. Huang FL. Using Cluster Bootstrapping to Analyze Nested Data With a Few Clusters. Educ Psychol Meas 2018; 78:297-318. [PMID: 29795957; PMCID: PMC5965657; DOI: 10.1177/0013164416678980]
Abstract
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators and often, cluster randomized trials are performed with a low number of clusters (~20 groups). Although multilevel models are often used to analyze nested data, researchers may be concerned about potentially biased results due to having only a few groups under study. Cluster bootstrapping has been suggested as an alternative procedure when analyzing clustered data though it has seen very little use in educational and psychological studies. Using a Monte Carlo simulation that varied the number of clusters, average cluster size, and intraclass correlations, we compared standard errors using cluster bootstrapping with those derived using ordinary least squares regression and multilevel models. Results indicate that cluster bootstrapping, though more computationally demanding, can be used as an alternative procedure for the analysis of clustered data when treatment effects at the group level are of primary interest. Supplementary material showing how to perform cluster bootstrapped regressions using R is also provided.
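As an illustration of the procedure evaluated in this article (not the author's simulation code), the Python sketch below draws whole clusters with replacement, refits an ordinary least squares regression each time, and takes the standard deviation of the resampled treatment coefficients as the cluster-bootstrap standard error. The data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 20 clusters of 25 observations with a cluster-level treatment.
clusters = []
for g in range(20):
    treat = int(g < 10)
    u = rng.normal(0, 0.5)                       # cluster random effect
    y = 1.0 + 0.4 * treat + u + rng.normal(0, 1, size=25)
    clusters.append(pd.DataFrame({"cluster": g, "treat": treat, "y": y}))
df = pd.concat(clusters, ignore_index=True)

def ols_treatment_effect(data):
    return smf.ols("y ~ treat", data=data).fit().params["treat"]

# Cluster bootstrap: resample whole clusters with replacement, refit OLS.
ids = df["cluster"].unique()
boot = []
for _ in range(1000):
    sample = rng.choice(ids, size=len(ids), replace=True)
    boot_df = pd.concat([df[df["cluster"] == c] for c in sample], ignore_index=True)
    boot.append(ols_treatment_effect(boot_df))

print("estimate:", ols_treatment_effect(df), "cluster-bootstrap SE:", np.std(boot, ddof=1))
```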
Affiliation(s)
- Francis L. Huang: Department of Educational, School, and Counseling Psychology, College of Education, University of Missouri, 16 Hill Hall, Columbia, MO 65211, USA
67. How to conduct implementation trials and multicentre studies in the emergency department. Can J Emerg Med 2018; 20:448-452. [PMID: 29378671; DOI: 10.1017/cem.2017.433]
Abstract
OBJECTIVE The objective of Panel 2b was to present an overview of and recommendations for the conduct of implementation trials and multicentre studies in emergency medicine. METHODS Panel members engaged methodologists to discuss the design and conduct of implementation and multicentre studies. We also conducted semi-structured interviews with 37 Canadian adult and pediatric emergency medicine researchers to elicit barriers and facilitators to conducting these kinds of studies. RESULTS Responses were organized by themes, and, based on these responses, recommendations were developed and refined in an iterative fashion by panel members. CONCLUSIONS We offer eight recommendations to facilitate multicentre clinical and implementation studies, along with guidance for conducting implementation research in the emergency department. Recommendations for multicentre studies reflect the importance of local study investigators and champions, requirements for research infrastructure and staffing, and the cooperation and communication between the coordinating centre and participating sites.
68. Heo M, Nair SR, Wylie-Rosett J, Faith MS, Pietrobelli A, Glassman NR, Martin SN, Dickinson S, Allison DB. Trial Characteristics and Appropriateness of Statistical Methods Applied for Design and Analysis of Randomized School-Based Studies Addressing Weight-Related Issues: A Literature Review. J Obes 2018; 2018:8767315. [PMID: 30046468; PMCID: PMC6036807; DOI: 10.1155/2018/8767315]
Abstract
OBJECTIVE To evaluate whether clustering effects, often quantified by the intracluster correlation coefficient (ICC), were appropriately accounted for in the design and analysis of school-based trials. METHODS We searched PubMed and extracted variables concerning study characteristics, power analysis, ICC use for power analysis, applied statistical models, and reporting of the ICC estimated from the observed data. RESULTS N=263 papers were identified, and N=121 papers were included for evaluation. Overall, only a minority (21.5%) of studies incorporated ICC values for power analysis, fewer studies (8.3%) reported the estimated ICC, and 68.6% of studies applied appropriate multilevel models. A greater proportion of studies applied the appropriate models during the past five years (2013-2017) compared to the prior years (74.1% versus 63.5%, p=0.176). Significantly associated with application of appropriate models were a larger number of schools (p=0.030), a larger sample size (p=0.002), longer follow-up (p=0.014), and randomization at the cluster level (p < 0.001); studies that incorporated the ICC into the power analysis (p=0.016) or reported the estimated ICC (p=0.030) were also more likely to apply appropriate models. CONCLUSION Although application of appropriate models has increased over the years, consideration of clustering effects in power analysis has been inadequate, as has reporting of the estimated ICC. To increase rigor, future school-based trials should address these issues at both the design and analysis stages.
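The review's central concern is that the ICC is rarely carried into the power analysis. The usual way to do so is the design effect 1 + (m - 1) x ICC, where m is the number of pupils per school; the short sketch below applies it with illustrative values that are not drawn from the paper.

```python
# Minimal sketch: how the ICC inflates the number of schools needed per arm via
# the design effect 1 + (m - 1) * ICC. All numbers here are assumptions.
import math
from scipy import stats

def n_per_arm_individual(delta, sd, alpha=0.05, power=0.80):
    """Two-sample sample size per arm under a normal approximation."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

def schools_per_arm(delta, sd, m, icc, alpha=0.05, power=0.80):
    """Inflate the individually randomised sample size and convert to schools."""
    deff = 1 + (m - 1) * icc
    return math.ceil(n_per_arm_individual(delta, sd, alpha, power) * deff / m)

# Example: detect a 0.25 SD difference with 30 pupils per school.
for icc in (0.00, 0.02, 0.05, 0.10):
    print(f"ICC = {icc:.2f}: {schools_per_arm(0.25, 1.0, 30, icc)} schools per arm")
```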
Collapse
Affiliation(s)
- Moonseong Heo
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Singh R. Nair
- Department of Anesthesiology, Montefiore Medical Center, Bronx, NY, USA
| | - Judith Wylie-Rosett
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Myles S. Faith
- Department of Counseling, School, and Educational Psychology, Graduate School of Education, University at Buffalo-SUNY, Buffalo, NY, USA
| | - Angelo Pietrobelli
- Department of Pediatrics, University of Verona, Verona, Italy
- Pennington Biomedical Research Center, Baton Rouge, LA, USA
| | - Nancy R. Glassman
- D. Samuel Gottesman Library, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Sarah N. Martin
- Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Stephanie Dickinson
- Department of Epidemiology and Biostatistics, School of Public Health, Indiana University-Bloomington, Bloomington, IN, USA
| | - David B. Allison
- Department of Epidemiology and Biostatistics, School of Public Health, Indiana University-Bloomington, Bloomington, IN, USA
| |
Collapse
|
69
|
Chan CL, Leyrat C, Eldridge SM. Quality of reporting of pilot and feasibility cluster randomised trials: a systematic review. BMJ Open 2017; 7:e016970. [PMID: 29122791 PMCID: PMC5695336 DOI: 10.1136/bmjopen-2017-016970] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 07/14/2017] [Accepted: 07/17/2017] [Indexed: 12/04/2022] Open
Abstract
OBJECTIVES To systematically review the quality of reporting of pilot and feasibility cluster randomised trials (CRTs). In particular, to assess (1) the number of pilot CRTs conducted between 1 January 2011 and 31 December 2014, (2) whether objectives and methods are appropriate and (3) reporting quality. METHODS We searched PubMed (2011-2014) for CRTs with 'pilot' or 'feasibility' in the title or abstract that were assessing some element of feasibility and showed evidence that the study was in preparation for a main effectiveness/efficacy trial. Quality assessment criteria were based on the Consolidated Standards of Reporting Trials (CONSORT) extensions for pilot trials and CRTs. RESULTS Eighteen pilot CRTs were identified. Forty-four per cent did not have feasibility as their primary objective, and many (50%) performed formal hypothesis testing for effectiveness/efficacy despite being underpowered. Most (83%) included 'pilot' or 'feasibility' in the title, and discussed implications for progression from the pilot to the future definitive trial (89%), but fewer reported reasons for the randomised pilot trial (39%), sample size rationale (44%) or progression criteria (17%). Most defined the cluster (100%), and number of clusters randomised (94%), but few reported how the cluster design affected sample size (17%), whether consent was sought from clusters (11%), or who enrolled clusters (17%). CONCLUSIONS That only 18 pilot CRTs were identified necessitates increased awareness of the importance of conducting and publishing pilot CRTs and of improved reporting. Pilot CRTs should primarily be assessing feasibility, avoiding formal hypothesis testing for effectiveness/efficacy and reporting reasons for the pilot, sample size rationale and progression criteria, as well as enrolment of clusters, and how the cluster design affects design aspects. We recommend adherence to the CONSORT extensions for pilot trials and CRTs.
Collapse
Affiliation(s)
- Claire L Chan
- Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
| | - Clémence Leyrat
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
| | - Sandra M Eldridge
- Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
| |
Collapse
|
70
|
Siebenhofer A, Paulitsch MA, Pregartner G, Berghold A, Jeitler K, Muth C, Engler J. Cluster-randomized controlled trials evaluating complex interventions in general practices are mostly ineffective: a systematic review. J Clin Epidemiol 2017; 94:85-96. [PMID: 29111470 DOI: 10.1016/j.jclinepi.2017.10.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 09/14/2017] [Accepted: 10/17/2017] [Indexed: 01/03/2023]
Abstract
OBJECTIVES The aim of this study was to evaluate how frequently complex interventions are shown to be superior to routine care in general practice-based cluster-randomized controlled studies (c-RCTs) and to explore whether potential differences explain results that come out in favor of a complex intervention. STUDY DESIGN AND SETTING We performed an unrestricted search in the Central Register of Controlled Trials, MEDLINE, and EMBASE. We included all c-RCTs with a patient-relevant primary outcome in a general practice setting and at least 1 year of follow-up. We extracted effect sizes, P-values, intracluster correlation coefficients (ICCs), and 22 quality aspects. RESULTS We identified 29 trials with 99 patient-relevant primary outcomes. After adjustment for multiple testing on a trial level, four outcomes (4%) in four studies (14%) remained statistically significant. Of the 11 studies that reported ICCs, the observed ICC was equal to or smaller than the assumed ICC in 8. In 16 of the 17 studies with an available sample size calculation, effect sizes were smaller than anticipated. CONCLUSION More than 85% of the c-RCTs failed to demonstrate a beneficial effect on a predefined primary endpoint. All but one study were overly optimistic with regard to the expected treatment effect. This highlights the importance of weighing up the potential merit of new treatments and planning prospectively when designing clinical studies in a general practice setting.
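Because the review attributes most null results to overly optimistic effect size assumptions, a quick calculation (with invented numbers, treated here as a simple cluster-level two-sample comparison rather than the authors' analysis) shows how sharply power falls when the realised standardised effect is smaller than planned.

```python
# Illustrative only: power under smaller-than-planned effect sizes.
from math import ceil
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
planned_d = 0.8   # optimistic cluster-level standardised effect (assumed)
n_clusters = ceil(analysis.solve_power(effect_size=planned_d, alpha=0.05, power=0.80))
print(f"Planned for d = {planned_d}: about {n_clusters} clusters per arm")

for realised_d in (0.8, 0.5, 0.4):
    p = analysis.power(effect_size=realised_d, nobs1=n_clusters, alpha=0.05)
    print(f"Realised d = {realised_d}: power about {p:.2f} with {n_clusters} clusters per arm")
```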
Collapse
Affiliation(s)
- Andrea Siebenhofer
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany; Institute of General Practice and Evidence-based Health Services Research, Medical University of Graz, Auenbruggerplatz 2/9/IV, Graz 8036, Austria.
| | - Michael A Paulitsch
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| | - Gudrun Pregartner
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Andrea Berghold
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Klaus Jeitler
- Institute of General Practice and Evidence-based Health Services Research, Medical University of Graz, Auenbruggerplatz 2/9/IV, Graz 8036, Austria; Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Auenbruggerplatz 2, Graz 8036, Austria
| | - Christiane Muth
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| | - Jennifer Engler
- Institute of General Practice, Goethe University Frankfurt am Main, Theodor-Stern-Kai 7, Frankfurt am Main 60590, Germany
| |
Collapse
|
71
|
Leyrat C, Morgan KE, Leurent B, Kahan BC. Cluster randomized trials with a small number of clusters: which analyses should be used? Int J Epidemiol 2017; 47:321-331. [DOI: 10.1093/ije/dyx169] [Citation(s) in RCA: 80] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/02/2017] [Indexed: 11/14/2022] Open
Affiliation(s)
- Clémence Leyrat
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
- INSERM CIC 1415, CHRU de Tours, Tours, France
| | - Katy E Morgan
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
| | - Baptiste Leurent
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
| | - Brennan C Kahan
- Pragmatic Clinical Trials Unit, Queen Mary University of London, London, UK
| |
Collapse
|
72
|
Allanson ER, Tunçalp Ö, Vogel JP, Khan DN, Oladapo OT, Long Q, Gülmezoglu AM. Implementation of effective practices in health facilities: a systematic review of cluster randomised trials. BMJ Glob Health 2017; 2:e000266. [PMID: 29081997 PMCID: PMC5656132 DOI: 10.1136/bmjgh-2016-000266] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Revised: 05/16/2017] [Accepted: 06/05/2017] [Indexed: 11/08/2022] Open
Abstract
Background The capacity for health systems to support the translation of research into clinical practice may be limited. The cluster randomised controlled trial (cluster RCT) design is often employed in evaluating the effectiveness of implementation of evidence-based practices. We aimed to systematically review available evidence to identify and evaluate the components in the implementation process at the facility level using cluster RCT designs. Methods All cluster RCTs where the healthcare facility was the unit of randomisation, published or written from 1990 to 2014, were assessed. Included studies were analysed for the components of implementation interventions employed in each. Through iterative mapping and analysis, we synthesised a master list of components used and summarised the effects of different combinations of interventions on practices. Results Forty-six studies met the inclusion criteria and covered the specialty groups of obstetrics and gynaecology (n=9), paediatrics and neonatology (n=4), intensive care (n=4), internal medicine (n=20), and anaesthetics and surgery (n=3). Six studies included interventions that were delivered across specialties. Nine components of multifaceted implementation interventions were identified: leadership, barrier identification, tailoring to the context, patient involvement, communication, education, supportive supervision, provision of resources, and audit and feedback. The four main components that were most commonly used were education (n=42, 91%), audit and feedback (n=26, 57%), provision of resources (n=23, 50%) and leadership (n=21, 46%). Conclusions Future implementation research should focus on better reporting of multifaceted approaches, incorporating sets of components that facilitate the translation of research into practice, and should employ rigorous monitoring and evaluation.
Collapse
Affiliation(s)
- Emma R Allanson
- School of Women's and Infants' Health, Faculty of Medicine, Dentistry and Health Sciences, University of Western Australia, Crawley, Australia.,Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Özge Tunçalp
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Joshua P Vogel
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Dina N Khan
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Olufemi T Oladapo
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Qian Long
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| | - Ahmet Metin Gülmezoglu
- Department of Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
| |
Collapse
|
73
|
Lee PH, Tse ACY. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed. Eur J Intern Med 2017; 40:16-21. [PMID: 27769569 DOI: 10.1016/j.ejim.2016.10.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Revised: 09/29/2016] [Accepted: 10/10/2016] [Indexed: 12/19/2022]
Abstract
BACKGROUND There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of the calculation itself. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. METHODS We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. RESULTS Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (interquartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. CONCLUSIONS The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed.
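The review's percentage difference between reported and recalculated sample sizes can be reproduced with a few lines once alpha, power and the standardised effect size are extracted from a report; the sketch below uses invented figures for a hypothetical trial.

```python
# Sketch of a replication check: recompute n per arm from reported design inputs
# and compare with the number stated in the paper. All inputs are invented.
import math
from statsmodels.stats.power import TTestIndPower

reported_n_per_arm = 105                      # hypothetical figure from a trial report
alpha, power, effect_size = 0.05, 0.90, 0.45  # hypothetical reported design inputs

recalculated = math.ceil(
    TTestIndPower().solve_power(effect_size=effect_size, alpha=alpha, power=power)
)
pct_diff = 100 * (reported_n_per_arm - recalculated) / recalculated
print(f"Recalculated n per arm: {recalculated}")
print(f"Reported n per arm:     {reported_n_per_arm} ({pct_diff:+.1f}% difference)")
```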
Collapse
Affiliation(s)
- Paul H Lee
- School of Nursing, Hong Kong Polytechnic University, Hong Kong.
| | - Andy C Y Tse
- Department of Health and Physical Education, The Education University of Hong Kong, Hong Kong
| |
Collapse
|
74
|
Rivoirard R, Bourmaud A, Oriol M, Tinquaut F, Méry B, Langrand-Escure J, Vallard A, Fournel P, Magné N, Chauvin F. Quality of reporting in oncology studies: A systematic analysis of literature reviews and prospects. Crit Rev Oncol Hematol 2017; 112:179-189. [DOI: 10.1016/j.critrevonc.2017.02.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2015] [Revised: 01/19/2017] [Accepted: 02/14/2017] [Indexed: 12/30/2022] Open
|
75
|
Hemming K, Taljaard M, Forbes A. Analysis of cluster randomised stepped wedge trials with repeated cross-sectional samples. Trials 2017; 18:101. [PMID: 28259174 PMCID: PMC5336660 DOI: 10.1186/s13063-017-1833-7] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2016] [Accepted: 02/06/2017] [Indexed: 11/10/2022] Open
Abstract
Background The stepped wedge cluster randomised trial (SW-CRT) is increasingly being used to evaluate policy or service delivery interventions. However, there is a dearth of trials literature addressing analytical approaches to the SW-CRT. Perhaps as a result, a significant number of published trials have major methodological shortcomings, including failure to adjust for secular trends at the analysis stage. Furthermore, the commonly used analytical framework proposed by Hussey and Hughes makes several assumptions. Methods We highlight the assumptions implicit in the basic SW-CRT analytical model proposed by Hussey and Hughes. We consider how simple modifications of the basic model, using both random and fixed effects, can be used to accommodate deviations from the underlying assumptions. We consider the implications of these modifications for the intracluster correlation coefficients. In a case study, the importance of adjusting for the secular trend is illustrated. Results The basic SW-CRT model includes a fixed effect for time, implying a common underlying secular trend across steps and clusters. It also includes a single term for treatment, implying a constant shift in this trend under the treatment. When these assumptions are not realistic, simple modifications can be implemented to allow the secular trend to vary across clusters and the treatment effect to vary across clusters or time. In our case study, the naïve treatment effect estimate (adjusted for clustering but unadjusted for time) suggests a beneficial effect. However, after adjusting for the underlying secular trend, we demonstrate a reversal of the treatment effect. Conclusion Due to the inherent confounding of the treatment effect with time, analysis of a SW-CRT should always account for secular trends or risk biased estimates of the treatment effect. Furthermore, the basic model proposed by Hussey and Hughes makes a number of important assumptions. Consideration needs to be given to the appropriate model choice at the analysis stage. We provide Stata code to implement the proposed analyses in the illustrative case study. Electronic supplementary material The online version of this article (doi:10.1186/s13063-017-1833-7) contains supplementary material, which is available to authorized users.
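The paper supplies Stata code for its case study; purely as a Python analogue (a sketch under assumed column names, not the authors' analysis), the basic Hussey and Hughes model can be written as a linear mixed model with categorical period effects, a treatment indicator and a random intercept per cluster, and the case study's point about secular trends can be mimicked on toy data.

```python
# Sketch: naive (time-ignoring) versus period-adjusted analysis of a toy
# repeated cross-sectional stepped wedge trial. Column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
clusters, periods, m = 6, 4, 30
rows = []
for c in range(clusters):
    step = c % (periods - 1) + 1                # period at which the cluster crosses over
    u = rng.normal(0, 0.5)                      # cluster random effect
    for t in range(periods):
        trt = int(t >= step)
        y = 0.3 * t + 0.4 * trt + u + rng.normal(0, 1, m)   # secular trend + effect
        rows.append(pd.DataFrame({"y": y, "period": t, "treat": trt, "cluster": c}))
df = pd.concat(rows, ignore_index=True)

# Adjusted for clustering only (treatment is confounded with the secular trend).
naive = smf.mixedlm("y ~ treat", df, groups="cluster").fit()
# Basic Hussey and Hughes-type model: fixed period effects plus treatment.
adjusted = smf.mixedlm("y ~ C(period) + treat", df, groups="cluster").fit()

print("Treatment effect, time ignored:       ", round(naive.params["treat"], 2))
print("Treatment effect, adjusted for period:", round(adjusted.params["treat"], 2))
```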
Collapse
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK.
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, ON, K1Y4E9, Canada.,Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Andrew Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
| |
Collapse
|
76
|
Ford WP, Westgate PM. Improved standard error estimator for maintaining the validity of inference in cluster randomized trials with a small number of clusters. Biom J 2017; 59:478-495. [DOI: 10.1002/bimj.201600182] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2016] [Revised: 11/23/2016] [Accepted: 11/23/2016] [Indexed: 11/10/2022]
Affiliation(s)
- Whitney P. Ford
- Department of Biostatistics, College of Public Health; University of Kentucky; Lexington KY 40536 USA
| | - Philip M. Westgate
- Department of Biostatistics, College of Public Health; University of Kentucky; Lexington KY 40536 USA
| |
Collapse
|
77
|
Grayling MJ, Wason JMS, Mander AP. Stepped wedge cluster randomized controlled trial designs: a review of reporting quality and design features. Trials 2017; 18:33. [PMID: 28109321 PMCID: PMC5251280 DOI: 10.1186/s13063-017-1783-0] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2016] [Accepted: 01/03/2017] [Indexed: 11/13/2022] Open
Abstract
Background The stepped wedge (SW) cluster randomized controlled trial (CRCT) design is being used with increasing frequency. However, there is limited published research on the quality of reporting of SW-CRCTs. We address this issue by conducting a literature review. Methods Medline, Ovid, Web of Knowledge, the Cochrane Library, PsycINFO, the ISRCTN registry, and ClinicalTrials.gov were searched to identify investigations employing the SW-CRCT design up to February 2015. For each included completed study, information was extracted on a selection of criteria, based on the CONSORT extension to CRCTs, to assess the quality of reporting. Results A total of 123 studies were included in our review, of which 39 were completed trial reports. The standard of reporting of SW-CRCTs varied in quality: the percentage of trials reporting each criterion fell as low as 15.4%, with a median of 66.7%. Conclusions There is much room for improvement in the quality of reporting of SW-CRCTs. This is consistent with recent findings for CRCTs. A CONSORT extension for SW-CRCTs is warranted to standardize the reporting of SW-CRCTs. Electronic supplementary material The online version of this article (doi:10.1186/s13063-017-1783-0) contains supplementary material, which is available to authorized users.
Collapse
Affiliation(s)
- Michael J Grayling
- MRC Biostatistics Unit Hub for Trials Methodology Research, Cambridge Institute of Public Health, University Forvie Site, Robinson Way, Cambridge, CB2 0SR, UK.
| | - James M S Wason
- MRC Biostatistics Unit Hub for Trials Methodology Research, Cambridge Institute of Public Health, University Forvie Site, Robinson Way, Cambridge, CB2 0SR, UK
| | - Adrian P Mander
- MRC Biostatistics Unit Hub for Trials Methodology Research, Cambridge Institute of Public Health, University Forvie Site, Robinson Way, Cambridge, CB2 0SR, UK
| |
Collapse
|
78
|
Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. The quality of reporting in cluster randomised crossover trials: proposal for reporting items and an assessment of reporting quality. Trials 2016; 17:575. [PMID: 27923384 PMCID: PMC5142135 DOI: 10.1186/s13063-016-1685-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Accepted: 11/04/2016] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The cluster randomised crossover (CRXO) design is gaining popularity in trial settings where individual randomisation or parallel group cluster randomisation is not feasible or practical. Our aim is to stimulate discussion on the content of a reporting guideline for CRXO trials and to assess the reporting quality of published CRXO trials. METHODS We undertook a systematic review of CRXO trials. Searches of MEDLINE, EMBASE, and CINAHL Plus as well as citation searches of CRXO methodological articles were conducted to December 2014. Reporting quality was assessed against both modified items from 2010 CONSORT and 2012 cluster trials extension and other proposed quality measures. RESULTS Of the 3425 records identified through database searching, 83 trials met the inclusion criteria. Trials were infrequently identified as "cluster randomis(z)ed crossover" in title (n = 7, 8%) or abstract (n = 21, 25%), and a rationale for the design was infrequently provided (n = 20, 24%). Design parameters such as the number of clusters and number of periods were well reported. Discussion of carryover took place in only 17 trials (20%). Sample size methods were only reported in 58% (n = 48) of trials. A range of approaches were used to report baseline characteristics. The analysis method was not adequately reported in 23% (n = 19) of trials. The observed within-cluster within-period intracluster correlation and within-cluster between-period intracluster correlation for the primary outcome data were not reported in any trial. The potential for selection, performance, and detection bias could be evaluated in 30%, 81%, and 70% of trials, respectively. CONCLUSIONS There is a clear need to improve the quality of reporting in CRXO trials. Given the unique features of a CRXO trial, it is important to develop a CONSORT extension. Consensus amongst trialists on the content of such a guideline is essential.
Collapse
Affiliation(s)
- Sarah J Arnup
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia
| | - Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia
| | - Brennan C Kahan
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, London, E1 2AB, UK
| | - Katy E Morgan
- Medical Statistics Department, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
| | - Joanne E McKenzie
- School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, Victoria, 3004, Australia.
| |
Collapse
|
79
|
Barnhart D, Hertzmark E, Liu E, Mungure E, Muya AN, Sando D, Chalamilla G, Ulenga N, Bärnighausen T, Fawzi W, Spiegelman D. Intra-Cluster Correlation Estimates for HIV-related Outcomes from Care and Treatment Clinics in Dar es Salaam, Tanzania. Contemp Clin Trials Commun 2016; 4:161-169. [PMID: 27766318 PMCID: PMC5066589 DOI: 10.1016/j.conctc.2016.09.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Introduction Researchers planning cluster-randomized controlled trials (cRCTs) require estimates of the intra-cluster correlation coefficient (ICC) from previous studies for sample size calculations. This paper fills a persistent gap in the literature by providing estimates of ICCs for many key HIV-related clinical outcomes. Methods Data from HIV-positive patients from 47 HIV care and treatment clinics in Dar es Salaam, Tanzania were used to calculate ICCs by site of enrollment or site of ART initiation for various clinical outcomes using cross-sectional and longitudinal data. ICCs were estimated using linear mixed models where either clinic of enrollment or clinic of ART initiation served as the random effect. Results ICCs ranged from 0 to 0.0706 (95% CI: 0.0447, 0.1098). For most outcomes, the ICCs were large enough to meaningfully affect sample size calculations. For binary outcomes, the ICCs for event prevalence at baseline tended to be larger than the ICCs for later cumulative incidences. For continuous outcomes, the ICCs for baseline values tended to be larger than the ICCs for the change in values from baseline. Conclusion The ICCs for HIV-related outcomes cannot be ignored when calculating sample sizes for future cluster-randomized trials. The differences between ICCs calculated from baseline data alone and ICCs calculated using longitudinal data demonstrate the importance of selecting an ICC that reflects a study's intended design and duration for sample size calculations. While not generalizable to all contexts, these estimates provide guidance for future researchers seeking to design adequately powered cRCTs in Sub-Saharan African HIV treatment and care clinics.
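For a continuous outcome, the ICC reported in studies like this one is typically the between-clinic variance divided by the total variance from a random-intercept model. A hedged sketch (simulated data and assumed column names, not the Dar es Salaam data) of that computation:

```python
# ICC from a random-intercept linear mixed model: var_between / (var_between + var_within).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
clinics, patients = 47, 60
rows = []
for c in range(clinics):
    u = rng.normal(0, 0.4)                          # clinic-level deviation
    y = 5 + u + rng.normal(0, 1.2, patients)        # e.g. a continuous lab value
    rows.append(pd.DataFrame({"y": y, "clinic": c}))
df = pd.concat(rows, ignore_index=True)

fit = smf.mixedlm("y ~ 1", df, groups="clinic").fit()
var_between = float(fit.cov_re.iloc[0, 0])          # random-intercept variance
var_within = fit.scale                              # residual variance
icc = var_between / (var_between + var_within)
print(f"Estimated ICC: {icc:.4f}")                  # simulated truth: 0.16 / 1.60 = 0.10
```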
Collapse
Affiliation(s)
- Dale Barnhart
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| | - Ellen Hertzmark
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| | - Enju Liu
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| | - Ester Mungure
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| | - Aisa N Muya
- Management and Development of Health, Mwai Kibaki Road, Dar es Salaam, Tanzania
| | - David Sando
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Management and Development of Health, Mwai Kibaki Road, Dar es Salaam, Tanzania
| | - Guerino Chalamilla
- Management and Development of Health, Mwai Kibaki Road, Dar es Salaam, Tanzania
| | - Nzovu Ulenga
- Department of Immunology and Infectious Diseases, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Management and Development of Health, Mwai Kibaki Road, Dar es Salaam, Tanzania
| | - Till Bärnighausen
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Wellcome Trust Africa Centre for Population Health, A2074 Road, Mtubatuba, KwaZulu-Natal 3935, South Africa
| | - Wafaie Fawzi
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Department of Nutrition, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| | - Donna Spiegelman
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Huntington Avenue, Boston, Massachusetts 02115, USA
| |
Collapse
|
80
|
Kahan BC, Forbes G, Ali Y, Jairath V, Bremner S, Harhay MO, Hooper R, Wright N, Eldridge SM, Leyrat C. Increased risk of type I errors in cluster randomised trials with small or medium numbers of clusters: a review, reanalysis, and simulation study. Trials 2016; 17:438. [PMID: 27600609 PMCID: PMC5013635 DOI: 10.1186/s13063-016-1571-2] [Citation(s) in RCA: 66] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Accepted: 08/24/2016] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Cluster randomised trials (CRTs) are commonly analysed using mixed-effects models or generalised estimating equations (GEEs). However, these analyses do not always perform well with the small number of clusters typical of most CRTs. They can lead to increased risk of a type I error (finding a statistically significant treatment effect when it does not exist) if appropriate corrections are not used. METHODS We conducted a small simulation study to evaluate the impact of using small-sample corrections for mixed-effects models or GEEs in CRTs with a small number of clusters. We then reanalysed data from TRIGGER, a CRT with six clusters, to determine the effect of using an inappropriate analysis method in practice. Finally, we reviewed 100 CRTs previously identified by a search on PubMed in order to assess whether trials were using appropriate methods of analysis. Trials were classified as at risk of an increased type I error rate if they did not report using an analysis method which accounted for clustering, or if they had fewer than 40 clusters and performed an individual-level analysis without reporting the use of an appropriate small-sample correction. RESULTS Our simulation study found that using mixed-effects models or GEEs without an appropriate correction led to inflated type I error rates, even for as many as 70 clusters. Conversely, using small-sample corrections provided correct type I error rates across all scenarios. Reanalysis of the TRIGGER trial found that inappropriate methods of analysis gave much smaller P values (P ≤ 0.01) than appropriate methods (P = 0.04-0.15). In our review, of the 99 trials that reported the number of clusters, 64 (65 %) were at risk of an increased type I error rate; 14 trials did not report using an analysis method which accounted for clustering, and 50 trials with fewer than 40 clusters performed an individual-level analysis without reporting the use of an appropriate correction. CONCLUSIONS CRTs with a small or medium number of clusters are at risk of an inflated type I error rate unless appropriate analysis methods are used. Investigators should consider using small-sample corrections with mixed-effects models or GEEs to ensure valid results.
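A null simulation makes the inflation tangible. The sketch below is not the authors' simulation (which evaluated small-sample corrections for mixed models and GEEs); it simply contrasts an individual-level OLS analysis that ignores clustering with a t-test on cluster means, one simple analysis that remains valid with few clusters.

```python
# Type I error under the null with 8 clusters: clustering ignored vs cluster-level t-test.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(11)
n_sims, k, m, icc = 2000, 8, 50, 0.05
sigma_b, sigma_w = np.sqrt(icc), np.sqrt(1 - icc)

def simulate_once():
    arm = rng.permutation(np.repeat([0, 1], k // 2))            # cluster-level allocation
    treat = np.repeat(arm, m)
    cluster_effects = np.repeat(rng.normal(0, sigma_b, k), m)
    y = cluster_effects + rng.normal(0, sigma_w, k * m)          # no treatment effect
    p_ols = sm.OLS(y, sm.add_constant(treat)).fit().pvalues[1]   # clustering ignored
    means = y.reshape(k, m).mean(axis=1)                         # cluster-level analysis
    p_cl = stats.ttest_ind(means[arm == 1], means[arm == 0]).pvalue
    return p_ols < 0.05, p_cl < 0.05

rejections = np.array([simulate_once() for _ in range(n_sims)])
print("Type I error, OLS ignoring clustering:", rejections[:, 0].mean())
print("Type I error, cluster-level t-test:   ", rejections[:, 1].mean())
```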
Collapse
Affiliation(s)
- Brennan C Kahan
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK.
| | - Gordon Forbes
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK
| | - Yunus Ali
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK
| | - Vipul Jairath
- Department of Medicine, Western University and London Health Sciences Network, London, ON, Canada.,Division of Epidemiology and Biostatistics, Western University, London, ON, Canada
| | - Stephen Bremner
- Division of Primary Care and Public Health, Brighton and Sussex Medical School, Brighton, UK
| | - Michael O Harhay
- Division of Epidemiology, Department of Biostatistics and Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Richard Hooper
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK
| | - Neil Wright
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK
| | - Sandra M Eldridge
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK
| | - Clémence Leyrat
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, E1 2AB, London, UK.,INSERM CIC 1415, CHRU de Tours, Tours, France.,London School of Hygiene and Tropical Medicine, London, UK
| |
Collapse
|
81
|
Moerbeek M, van Schie S. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level. BMC Med Res Methodol 2016; 16:79. [PMID: 27401771 PMCID: PMC4939594 DOI: 10.1186/s12874-016-0182-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2016] [Accepted: 06/25/2016] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions will result in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. METHODS The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. RESULTS The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. CONCLUSIONS The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
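What the abstract calls the adjusted model is simply the linear mixed model with the binary cluster-level covariate included alongside treatment; the contrast with the unadjusted model can be sketched as follows (simulated data, assumed column names).

```python
# Unadjusted vs covariate-adjusted linear mixed model with a binary cluster-level covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
k, m = 12, 40
treat = rng.permutation(np.repeat([0, 1], k // 2))
covariate = rng.binomial(1, 0.5, k)                  # binary cluster-level covariate
rows = []
for c in range(k):
    u = rng.normal(0, 0.3)
    y = 0.2 * treat[c] + 0.6 * covariate[c] + u + rng.normal(0, 1, m)
    rows.append(pd.DataFrame({"y": y, "treat": treat[c], "x": covariate[c], "cluster": c}))
df = pd.concat(rows, ignore_index=True)

unadjusted = smf.mixedlm("y ~ treat", df, groups="cluster").fit()
adjusted = smf.mixedlm("y ~ treat + x", df, groups="cluster").fit()
print("Unadjusted treatment estimate:", round(unadjusted.params["treat"], 2))
print("Adjusted treatment estimate:  ", round(adjusted.params["treat"], 2))
```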
Collapse
Affiliation(s)
- Mirjam Moerbeek
- Department of Methodology and Statistics, Utrecht University, P.O. Box 80140, 3508 TC, Utrecht, The Netherlands.
| | - Sander van Schie
- Department of Methodology and Statistics, Utrecht University, P.O. Box 80140, 3508 TC, Utrecht, The Netherlands
| |
Collapse
|
82
|
Malmberg-Heimonen I, Natland S, Tøge AG, Hansen HC. The Effects of Skill Training on Social Workers' Professional Competences in Norway: Results of a Cluster-Randomised Study. BRITISH JOURNAL OF SOCIAL WORK 2016; 46:1354-1371. [PMID: 27559232 PMCID: PMC4985729 DOI: 10.1093/bjsw/bcv073] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Using a cluster-randomised design, this study analyses the effects of a government-administered skill training programme for social workers in Norway. The training programme aims to improve social workers' professional competences by enhancing and systematising follow-up work directed towards longer-term unemployed clients in the following areas: encountering the user, system-oriented efforts and administrative work. The main tools and techniques of the programme are based on motivational interviewing and appreciative inquiry. The data comprise responses to baseline and eighteen-month follow-up questionnaires administered to all social workers (n = 99) in eighteen participating Labour and Welfare offices randomised into experimental and control groups. The findings indicate that the skill training programme positively affected the social workers' evaluations of their professional competences and quality of work supervision received. The acquisition and mastering of combinations of specific tools and techniques, a comprehensive supervision structure and the opportunity to adapt the learned skills to local conditions were important in explaining the results.
Collapse
Affiliation(s)
- Ira Malmberg-Heimonen
- Oslo and Akershus University College of Applied Sciences, Social Welfare Research Centre, Stensberggate 29, Post Box 4, St. Olavs Plass, N-0130 Oslo, Norway
| | - Sidsel Natland
- Oslo and Akershus University College of Applied Sciences, Social Welfare Research Centre, Stensberggate 29, Post Box 4, St. Olavs Plass, N-0130 Oslo, Norway
| | - Anne Grete Tøge
- Oslo and Akershus University College of Applied Sciences, Social Welfare Research Centre, Stensberggate 29, Post Box 4, St. Olavs Plass, N-0130 Oslo, Norway
| | - Helle Cathrine Hansen
- Oslo and Akershus University College of Applied Sciences, Social Welfare Research Centre, Stensberggate 29, Post Box 4, St. Olavs Plass, N-0130 Oslo, Norway
| |
Collapse
|
83
|
Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. Appropriate statistical methods were infrequently used in cluster-randomized crossover trials. J Clin Epidemiol 2016; 74:40-50. [DOI: 10.1016/j.jclinepi.2015.11.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2015] [Revised: 10/22/2015] [Accepted: 11/20/2015] [Indexed: 10/22/2022]
|
84
|
Huang S, Fiero MH, Bell ML. Generalized estimating equations in cluster randomized trials with a small number of clusters: Review of practice and simulation study. Clin Trials 2016; 13:445-9. [DOI: 10.1177/1740774516643498] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Background/aims: Generalized estimating equations are a common modeling approach used in cluster randomized trials to account for within-cluster correlation. It is well known that the sandwich variance estimator is biased when the number of clusters is small (≤40), resulting in an inflated type I error rate. Various bias correction methods have been proposed in the statistical literature, but how adequately they are utilized in current practice for cluster randomized trials is not clear. The aim of this study is to evaluate the use of generalized estimating equation bias correction methods in recently published cluster randomized trials and demonstrate the necessity of such methods when the number of clusters is small. Methods: Review of cluster randomized trials published between August 2013 and July 2014 that used generalized estimating equations for their primary analyses. Two independent reviewers collected data from each study using a standardized, pre-piloted data extraction template. A two-arm cluster randomized trial was simulated under various scenarios to show the potential effect of a small number of clusters on the type I error rate when estimating the treatment effect. The nominal level was set at 0.05 for the simulation study. Results: Of the 51 included trials, 28 (54.9%) analyzed 40 or fewer clusters, with a minimum of four total clusters. Of these 28 trials, only one trial used a bias correction method for generalized estimating equations. The simulation study showed that with four clusters, the type I error rate ranged between 0.43 and 0.47. Even though the type I error rate moved closer to the nominal level as the number of clusters increased, it still ranged between 0.06 and 0.07 with 40 clusters. Conclusions: Our results showed that the statistical issues arising from a small number of clusters in generalized estimating equations are currently inadequately handled in cluster randomized trials. The potential for type I error inflation can be very high when the sandwich estimator is used without bias correction.
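For readers wondering what a corrected analysis looks like in code: statsmodels' GEE implementation offers a bias-reduced covariance option (believed to correspond to the Mancl and DeRouen correction; availability and naming may depend on the statsmodels version), which can be contrasted with the default sandwich covariance. The data, column names and effect size below are simulated assumptions.

```python
# Sketch: GEE with the default sandwich covariance vs a bias-reduced covariance
# when only 10 clusters are available.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
k, m = 10, 30
treat = rng.permutation(np.repeat([0, 1], k // 2))
rows = []
for c in range(k):
    u = rng.normal(0, 0.5)                                    # cluster random effect
    prob = 1 / (1 + np.exp(-(-0.5 + 0.3 * treat[c] + u)))
    y = rng.binomial(1, prob, m)
    rows.append(pd.DataFrame({"y": y, "treat": treat[c], "cluster": c}))
df = pd.concat(rows, ignore_index=True)

model = smf.gee("y ~ treat", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
robust = model.fit()                                # default sandwich covariance
corrected = model.fit(cov_type="bias_reduced")      # small-sample corrected, if supported
print("Sandwich SE for treat:    ", round(robust.bse["treat"], 3))
print("Bias-reduced SE for treat:", round(corrected.bse["treat"], 3))
```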
Collapse
Affiliation(s)
- Shuang Huang
- Departments of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ, USA
| | - Mallorie H Fiero
- Departments of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ, USA
| | - Melanie L Bell
- Departments of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ, USA
| |
Collapse
|
85
|
DiazOrdaz K, Kenward MG, Gomes M, Grieve R. Multiple imputation methods for bivariate outcomes in cluster randomised trials. Stat Med 2016; 35:3482-96. [PMID: 26990655 PMCID: PMC4981911 DOI: 10.1002/sim.6935] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2014] [Revised: 02/15/2016] [Accepted: 02/18/2016] [Indexed: 01/03/2023]
Abstract
Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single‐level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost‐effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing‐at‐random clustered data scenarios were simulated following a full‐factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed‐effects multiple imputation and too low following single‐level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
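One of the strategies compared, multiple imputation with a fixed effect for each cluster, can be sketched by hand for a single continuous outcome: impute from a regression on cluster indicators (which absorb the cluster-level treatment), analyse each completed data set, and pool with Rubin's rules. This is a simplified, improper sketch (the imputation-model parameters are not redrawn between imputations) with simulated data and assumed column names; the paper itself studies bivariate outcomes and multilevel multiple imputation, which are not reproduced here.

```python
# Simplified fixed-effects multiple imputation for a continuous outcome, pooled
# with Rubin's rules. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
k, m, M = 16, 25, 20                                   # clusters, cluster size, imputations
treat = rng.permutation(np.repeat([0, 1], k // 2))
rows = []
for c in range(k):
    u = rng.normal(0, 0.4)
    y = 0.3 * treat[c] + u + rng.normal(0, 1, m)
    rows.append(pd.DataFrame({"y": y, "treat": treat[c], "cluster": c}))
df = pd.concat(rows, ignore_index=True)
df.loc[rng.random(len(df)) < 0.25, "y"] = np.nan       # ~25% missing at random

obs, mis = df["y"].notna(), df["y"].isna()
# Imputation model with a fixed effect per cluster (treatment is absorbed by these).
imp_model = smf.ols("y ~ C(cluster)", data=df[obs]).fit()
sigma = np.sqrt(imp_model.mse_resid)

estimates, variances = [], []
for _ in range(M):
    d = df.copy()
    d.loc[mis, "y"] = imp_model.predict(d.loc[mis]) + rng.normal(0, sigma, mis.sum())
    fit = smf.mixedlm("y ~ treat", d, groups="cluster").fit()
    estimates.append(fit.params["treat"])
    variances.append(fit.bse["treat"] ** 2)

# Rubin's rules: total variance = within + (1 + 1/M) * between.
Q, U, B = np.mean(estimates), np.mean(variances), np.var(estimates, ddof=1)
T = U + (1 + 1 / M) * B
print(f"Pooled treatment effect: {Q:.3f} (SE {np.sqrt(T):.3f})")
```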
Collapse
Affiliation(s)
- K DiazOrdaz
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, Keppel Street, London, W1C 7HT, U.K
| | - M G Kenward
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, Keppel Street, London, W1C 7HT, U.K
| | - M Gomes
- Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, U.K
| | - R Grieve
- Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, U.K
| |
Collapse
|
86
|
Taljaard M, Teerenstra S, Ivers NM, Fergusson DA. Substantial risks associated with few clusters in cluster randomized and stepped wedge designs. Clin Trials 2016; 13:459-63. [DOI: 10.1177/1740774516634316] [Citation(s) in RCA: 59] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Given the growing attention to quality improvement, comparative effectiveness research, and pragmatic trials embedded within learning health systems, the use of the cluster randomization design is bound to increase. The number of clusters available for randomization is often limited in such trials. Designs that incorporate pre-intervention measurements (e.g. cluster cross-over, repeated parallel arm, and stepped wedge designs) can substantially reduce the required numbers of clusters by decreasing between-cluster sources of variation. However, there are substantial risks associated with few clusters, including increased probability of chance imbalances and type I and type II error, limited perceived or actual generalizability, and fewer options for statistical analysis. Furthermore, current sample size methods for the stepped wedge design make a strong underlying assumption with respect to the correlation structure—in particular, that the intracluster and inter-period correlations are equal. This is in contrast with methods for the cluster cross-over design that explicitly allow for a smaller inter-period correlation. Failing to similarly allow for the inter-period correlation in the design of a stepped wedge trial may yield perilously low sample sizes. Further methodological and empirical work is required to inform sample size methods and guidance for the stepped wedge trial and to provide minimum thresholds for this design.
Collapse
Affiliation(s)
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Steven Teerenstra
- Section Biostatistics, Department for Health Evidence, Radboud Institute for Health Science, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Noah M Ivers
- Women’s College Research Institute, Women’s College Hospital, Toronto, ON, Canada
- Department of Family and Community Medicine, University of Toronto Toronto, ON, Canada
| | - Dean A Fergusson
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, ON, Canada
- Departments of Medicine and Surgery, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
87
|
Siebenhofer A, Erckenbrecht S, Pregartner G, Berghold A, Muth C. How often are interventions in cluster-randomised controlled trials of complex interventions in general practices effective and reasons for potential shortcomings? Protocol and results of a feasibility project for a systematic review. BMJ Open 2016; 6:e009414. [PMID: 26892789 PMCID: PMC4762123 DOI: 10.1136/bmjopen-2015-009414] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Revised: 09/11/2015] [Accepted: 10/23/2015] [Indexed: 11/29/2022] Open
Abstract
INTRODUCTION Most studies conducted at general practices investigate complex interventions and increasingly use cluster-randomised controlled trial (c-RCT) designs to do so. Our primary objective is to evaluate how frequently complex interventions are shown to be more, equally or less effective than routine care in c-RCTs with a superiority design. The secondary aim is to discover whether the quality of a c-RCT determines the likelihood of the complex intervention being effective. METHODS AND ANALYSIS All c-RCTs of any design that have a patient-relevant primary outcome and a duration of at least 1 year will be included. The search will be performed in three electronic databases (MEDLINE, EMBASE and the Cochrane Database of Systematic Reviews (CDSR)). The screening process, data collection, quality assessment and statistical data analyses (if suitably similar and of adequate quality) will be performed in accordance with the requirements of the Cochrane Handbook for Systematic Reviews of Interventions. A feasibility project was carried out that was restricted to a search in MEDLINE and the CCTR for c-RCTs published in 1 of the 8 journals that are most relevant to general practice. The process from trial selection to data collection, assessment and results presentation was piloted. Of the 512 abstracts identified during the feasibility search, 21 studies examined complex interventions in a general practice setting. Extrapolating the preliminary search to include all relevant c-RCTs in three databases, about 5000 abstracts and 150 primary studies are expected to be identified in the main study. Fourteen of the 21 studies included in the feasibility project (67%) did not show a positive effect on a primary patient-relevant end point. ETHICS AND DISSEMINATION Ethical approval is not being sought for this review. Findings will be disseminated via peer-reviewed journals that frequently publish articles on the results of c-RCTs and through presentations at international conferences. TRIAL REGISTRATION NUMBER PROSPERO CRD201400923.
Collapse
Affiliation(s)
- Andrea Siebenhofer
- Institute of General Practice, Goethe University, Frankfurt am Main, Germany
- Institute of General Practice and Evidence-based Health Services Research, Medical University of Graz, Graz, Austria
| | - Stefanie Erckenbrecht
- AQUA-Institute for Applied Quality Improvement and Research in Health Care, Göttingen, Germany
| | - Gudrun Pregartner
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
| | - Andrea Berghold
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
| | - Christiane Muth
- Institute of General Practice, Goethe University, Frankfurt am Main, Germany
| |
Collapse
|
88
|
Martin J, Taljaard M, Girling A, Hemming K. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials. BMJ Open 2016; 6:e010166. [PMID: 26846897 PMCID: PMC4746455 DOI: 10.1136/bmjopen-2015-010166] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 11/09/2015] [Accepted: 12/03/2015] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. METHODS We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. RESULTS We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5-6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. DISCUSSION The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs.
Collapse
Affiliation(s)
- James Martin
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Ontario, Canada
| | - Alan Girling
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| | - Karla Hemming
- School of Health and Population Sciences, University of Birmingham, Birmingham, UK
| |
Collapse
|
89
|
Leyrat C, Caille A, Foucher Y, Giraudeau B. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic. BMC Med Res Methodol 2016; 16:9. [PMID: 26801083 PMCID: PMC4724161 DOI: 10.1186/s12874-015-0100-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2015] [Accepted: 12/08/2015] [Indexed: 01/19/2023] Open
Abstract
Background Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, diagnosing such an imbalance is essential so that the statistical analysis can be adjusted if required. Methods We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied the method to the data from 2 published CRTs. Results The proposed method performed well for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small relative to the total number of baseline covariates (≥40% of covariates unbalanced). We also provide a strategy for preselecting the covariates to include in the PS model to enhance imbalance detection. Conclusion The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
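The c-statistic referred to here is the area under the ROC curve of a propensity score model that predicts trial arm from baseline covariates: values near 0.5 suggest balance, while larger values suggest systematic imbalance. The Python sketch below illustrates the idea on simulated data; the sample size, the covariate shift and the interpretation threshold are assumptions for illustration, not part of the authors' tool.

```python
# A minimal sketch (illustrative, not the authors' implementation) of using the
# c-statistic of a propensity score model to flag baseline imbalance in a CRT.
# The simulated data and the interpretation notes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_per_arm, n_covariates = 500, 10

# Simulate baseline covariates; shift the first two in the intervention arm
# to create a mild imbalance.
x_control = rng.normal(0.0, 1.0, size=(n_per_arm, n_covariates))
x_interv = rng.normal(0.0, 1.0, size=(n_per_arm, n_covariates))
x_interv[:, :2] += 0.3

X = np.vstack([x_control, x_interv])
arm = np.repeat([0, 1], n_per_arm)   # 0 = control, 1 = intervention

# Propensity score model: probability of being in the intervention arm
# given the baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(X, arm)
ps = ps_model.predict_proba(X)[:, 1]

# The c-statistic is the area under the ROC curve of the propensity score.
c_stat = roc_auc_score(arm, ps)
print(f"c-statistic = {c_stat:.3f}")  # ~0.5 suggests balance; larger suggests imbalance
```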
Collapse
Affiliation(s)
- Clémence Leyrat
- INSERM U1153, Paris, France; INSERM CIC 1415, Tours, France; CHRU de Tours, Tours, France; Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, United Kingdom.
| | - Agnès Caille
- INSERM U1153, Paris, France; INSERM CIC 1415, Tours, France; CHRU de Tours, Tours, France; Université François-Rabelais, PRES Centre-Val de Loire Université, Tours, France
| | - Yohann Foucher
- SPHERE (EA 4275): Biostatistics, Clinical Research and Subjective Measures in Health Sciences, Université de Nantes, Nantes, France
| | - Bruno Giraudeau
- INSERM U1153, Paris, France; INSERM CIC 1415, Tours, France; CHRU de Tours, Tours, France; Université François-Rabelais, PRES Centre-Val de Loire Université, Tours, France
| |
Collapse
|
90
|
Tokolahi E, Hocking C, Kersten P, Vandal AC. Quality and Reporting of Cluster Randomized Controlled Trials Evaluating Occupational Therapy Interventions: A Systematic Review. OTJR: OCCUPATION, PARTICIPATION AND HEALTH 2015; 36:14-24. [PMID: 27504689 PMCID: PMC4766971 DOI: 10.1177/1539449215618625] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Growing use of cluster randomized controlled trials (RCTs) in health care research requires careful attention to study design, with implications for the development of an evidence base for practice. The objective of this study is to investigate the characteristics, quality, and reporting of cluster RCTs evaluating occupational therapy interventions to inform future research design. An extensive search for cluster RCTs evaluating occupational therapy was conducted in several databases. Fourteen studies met our inclusion criteria; four were protocols. Eleven (79%) justified the use of a cluster RCT and accounted for clustering in the sample size calculation and analysis. All full studies reported the number of clusters randomized, and five (50%) reported intracluster correlation coefficients; protocols had higher compliance. Risk of bias was most evident in the lack of blinding of participants. Statistician involvement was associated with improved trial quality and reporting. The quality of cluster RCTs of occupational therapy interventions is comparable with that of trials from other areas of health research and needs improvement.
Collapse
Affiliation(s)
| | | | | | - Alain C Vandal
- Auckland University of Technology, New Zealand; Health Intelligence & Informatics, Ko Awatea, Auckland, New Zealand
| |
Collapse
|
91
|
Henderson AH, Upile T, Pilavakis Y, Patel NN. Reporting guidelines and journal quality in otolaryngology. Clin Otolaryngol 2015; 41:461-6. [PMID: 26412303 DOI: 10.1111/coa.12546] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/15/2015] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Journals increasingly use reporting guidelines to standardise research papers, partly to improve quality. Although defining journal quality is difficult, various calculated metrics are used. This study investigates guideline adoption by otolaryngology journals and whether a relationship exists between adoption and journal quality. DESIGN, SETTING, PARTICIPANTS Retrospective review of the MEDLINE database for English-language, Index Medicus journals of interest to otolaryngologists (October 2013). MAIN OUTCOME MEASURES The resulting journals were examined for the number of guidelines endorsed, which was then tabulated against surrogate measures of journal quality (impact factor, Eigenfactor, SCImago, and source-normalised rank). The primary outcome measure was the number of recognised reporting guidelines endorsed per journal, which was then correlated against journal quality scores. For comparison, a further small-sample correlation was performed with 6 randomly selected and 6 high-profile clinical non-otolaryngology journals. RESULTS 37 otolaryngology journals were identified. The number of guidelines used and the quality scores were not normally distributed. Mean guideline usage was 1.0 for otolaryngology journals, 1.5 for the randomly selected journals, and 5.5 for the high-profile journals. Only 18/37 (49%) otolaryngology journals endorsed any guidelines, compared with 11/12 non-otolaryngology journals. Within otolaryngology, the Eigenfactor correlated positively with guideline use (r = 0.4, n = 44, p < 0.01); otherwise, no correlation was found between guideline endorsement and journal quality. CONCLUSIONS Reporting guideline endorsement within otolaryngology journals is low. Although it might be expected that the use of reporting guidelines would improve quality, this is not reflected in the derived quality scores in otolaryngology. This may reflect low levels of use/enforcement, quality indicators that are inherently flawed, or generalised guidelines that are not always appropriate or valued by editors.
Collapse
Affiliation(s)
- A H Henderson
- Department of ENT, The Great Western Hospital, Swindon, UK.
| | - T Upile
- Department of ENT, University Hospitals Southampton NHS Trust, Southampton, UK
| | - Y Pilavakis
- Department of ENT, University Hospitals Southampton NHS Trust, Southampton, UK
| | - N N Patel
- Department of ENT, University Hospitals Southampton NHS Trust, Southampton, UK
| |
Collapse
|
92
|
Beard E, Lewis JJ, Copas A, Davey C, Osrin D, Baio G, Thompson JA, Fielding KL, Omar RZ, Ononge S, Hargreaves J, Prost A. Stepped wedge randomised controlled trials: systematic review of studies published between 2010 and 2014. Trials 2015; 16:353. [PMID: 26278881 PMCID: PMC4538902 DOI: 10.1186/s13063-015-0839-2] [Citation(s) in RCA: 103] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2015] [Accepted: 07/01/2015] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND In a stepped wedge cluster randomised trial, clusters receive the intervention at different time points, and the order in which they receive it is randomised. Previous systematic reviews of stepped wedge trials documented a steady rise in their use between 1987 and 2010, which was attributed to the design's perceived logistical and analytical advantages. However, the trials included in these systematic reviews were often poorly reported and did not adequately describe the analysis and/or methodology used. Since 2010, a number of additional stepped wedge trials have been published. This article aims to update the previous systematic reviews and to consider what interventions were tested and the rationale given for using a stepped wedge design. METHODS We searched PubMed, PsycINFO, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Web of Science, the Cochrane Library and the Current Controlled Trials Register for articles published between January 2010 and May 2014. We considered stepped wedge randomised controlled trials in all fields of research. We independently extracted data from retrieved articles and reviewed them. Interventions were then coded using the functions specified by the Behaviour Change Wheel and for behaviour change techniques using a validated taxonomy. RESULTS Our review identified 37 stepped wedge trials, reported in 10 articles presenting trial results, one conference abstract, 21 protocol or study design articles and five trial registrations. These were mostly conducted in developed countries (n = 30) and within healthcare organisations (n = 28). A total of 33 of the interventions were educationally based, with the most commonly used behaviour change techniques being 'instruction on how to perform a behaviour' (n = 32) and 'persuasive source' (n = 25). Authors gave a wide range of reasons for using the stepped wedge design, including ethical, logistical, financial and methodological considerations. The adequacy of reporting varied across studies: many did not provide sufficient detail regarding the methodology or the calculation of the required sample size. CONCLUSIONS The popularity of stepped wedge trials has increased since 2010, predominantly in high-income countries. However, there is a need for further guidance on their reporting and analysis.
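The design described here can be made concrete with a rollout schedule: each cluster is randomly allocated to the step at which it switches from control to intervention, so every cluster eventually receives the intervention. The short Python sketch below generates such a randomised schedule; the cluster and step counts are arbitrary assumptions for illustration.

```python
# A small sketch (illustrative only) of randomising the rollout order in a
# stepped-wedge design: clusters are randomly allocated to the step at which
# they switch from control to intervention. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_clusters, n_steps = 9, 3            # 3 clusters cross over at each step

# Randomly permute the clusters, then assign them to steps in equal blocks.
order = rng.permutation(n_clusters)
step_of_cluster = {int(c): s + 1 for s, block in
                   enumerate(np.array_split(order, n_steps)) for c in block}

# Exposure schedule: rows = clusters, columns = periods (period 0 = baseline).
schedule = np.zeros((n_clusters, n_steps + 1), dtype=int)
for cluster, step in step_of_cluster.items():
    schedule[cluster, step:] = 1
print(schedule)
```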
Collapse
Affiliation(s)
- Emma Beard
- Department of Clinical, Educational and Health Psychology, University College London, 1-19 Torrington Place, London, WC1E 7HB, UK.
- Department of Epidemiology and Public Health, University College London, 1-19 Torrington Place, London, WC1E 7HB, UK.
| | - James J Lewis
- MRC Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
- Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
| | - Andrew Copas
- MRC Clinical Trials Unit at University College London, 175 Tottenham Court Road, London, W1T 7NU, UK.
| | - Calum Davey
- Department of Social and Environmental Health Research, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
| | - David Osrin
- Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK.
| | - Gianluca Baio
- Department of Statistical Science, University College London, 1-19 Torrington Place, London, WC1E 7HB, UK.
| | - Jennifer A Thompson
- MRC Clinical Trials Unit at University College London, 175 Tottenham Court Road, London, W1T 7NU, UK.
- Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
| | - Katherine L Fielding
- MRC Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
| | - Rumana Z Omar
- Department of Statistical Science, University College London, 1-19 Torrington Place, London, WC1E 7HB, UK.
| | - Sam Ononge
- Department of Obstetrics and Gynaecology, Makerere University College of Health Sciences, P.O. Box 7072, Kampala, Uganda.
| | - James Hargreaves
- Department of Social and Environmental Health Research, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.
| | - Audrey Prost
- Institute for Global Health, University College London, 30 Guilford Street, London, WC1N 1EH, UK.
| |
Collapse
|
93
|
Rutterford C, Taljaard M, Dixon S, Copas A, Eldridge S. Reporting and methodological quality of sample size calculations in cluster randomized trials could be improved: a review. J Clin Epidemiol 2015; 68:716-23. [DOI: 10.1016/j.jclinepi.2014.10.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2014] [Revised: 09/25/2014] [Accepted: 10/17/2014] [Indexed: 12/18/2022]
|
94
|
Wright N, Ivers N, Eldridge S, Taljaard M, Bremner S. A review of the use of covariates in cluster randomized trials uncovers marked discrepancies between guidance and practice. J Clin Epidemiol 2015; 68:603-9. [PMID: 25648791 PMCID: PMC4425474 DOI: 10.1016/j.jclinepi.2014.12.006] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2014] [Revised: 12/12/2014] [Accepted: 12/23/2014] [Indexed: 11/24/2022]
Abstract
OBJECTIVES Reviews of the handling of covariates in trials have explicitly excluded cluster randomized trials (CRTs). In this study, we review the use of covariates in randomization, the reporting of covariates, and adjusted analyses in CRTs. STUDY DESIGN AND SETTING We reviewed a random sample of 300 CRTs published between 2000 and 2008 across 150 English-language journals. RESULTS Fifty-eight percent of trials used covariates in randomization. Only 69 (23%) included tables of cluster- and individual-level covariates. Fifty-eight percent reported significance tests of baseline balance. Of 207 trials that reported baseline measures of the primary outcome, 155 (75%) subsequently adjusted for these in analyses. Of 174 trials that used covariates in randomization, 30 (17%) included an analysis adjusting for all those covariates. Of 219 trial reports that included an adjusted analysis of the primary outcome, only 71 (32%) reported that covariates were chosen a priori. CONCLUSION There are some marked discrepancies between practice and guidance on the use of covariates in the design, analysis, and reporting of CRTs. It is essential that researchers follow guidelines on the use and reporting of covariates in CRTs, to promote the validity of trial conclusions and the quality of trial reports.
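Guidance generally recommends adjusting for pre-specified covariates while accounting for clustering, typically with a mixed model that includes a random intercept for cluster. The Python sketch below is a minimal illustration on simulated data (the effect sizes, ICC and cluster counts are assumptions), not a reanalysis of any trial in the review.

```python
# A minimal sketch (simulated data; effect sizes, ICC and cluster counts are
# assumptions) of a covariate-adjusted CRT analysis: a linear mixed model with
# a random intercept for cluster, adjusting for the baseline measure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clusters, m = 30, 25                                   # 30 clusters of 25 participants

cluster = np.repeat(np.arange(n_clusters), m)
arm = np.repeat(rng.permutation([0, 1] * (n_clusters // 2)), m)   # 1:1 cluster allocation
cluster_effect = np.repeat(rng.normal(0.0, 0.3, n_clusters), m)   # induces the clustering
baseline = rng.normal(0.0, 1.0, n_clusters * m)

# Assumed true intervention effect of 0.25; outcome tracks its baseline measure.
outcome = 0.25 * arm + 0.5 * baseline + cluster_effect + rng.normal(0.0, 1.0, n_clusters * m)
df = pd.DataFrame({"cluster": cluster, "arm": arm, "baseline": baseline, "outcome": outcome})

# Random-intercept model: accounts for clustering while adjusting for the covariate.
fit = smf.mixedlm("outcome ~ arm + baseline", data=df, groups=df["cluster"]).fit()
print(fit.summary())
```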
Collapse
Affiliation(s)
- Neil Wright
- Centre for Primary Care and Public Health, Blizard Institute, Queen Mary University of London, Yvonne Carter Building, 58 Turner Street, London, E1 2AB, United Kingdom.
| | - Noah Ivers
- Family Practice Health Centre and Institute for Health Systems Solutions and Virtual Care, Women's College Hospital, 76 Grenville Street, Toronto, ON M5S1B2, Canada; Department of Family and Community Medicine, University of Toronto, 500 University Avenue, 5th Floor, Toronto, ON M5G1V7, Canada
| | - Sandra Eldridge
- Centre for Primary Care and Public Health, Blizard Institute, Queen Mary University of London, Yvonne Carter Building, 58 Turner Street, London, E1 2AB, United Kingdom
| | - Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa Hospital, Civic Campus, 1053 Carling Avenue, Civic Box 693, Ottawa, Ontario K1Y 4E9, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Ontario, Canada
| | - Stephen Bremner
- Centre for Primary Care and Public Health, Blizard Institute, Queen Mary University of London, Yvonne Carter Building, 58 Turner Street, London, E1 2AB, United Kingdom
| |
Collapse
|
95
|
Gao F, Earnest A, Matchar DB, Campbell MJ, Machin D. Sample size calculations for the design of cluster randomized trials: A summary of methodology. Contemp Clin Trials 2015; 42:41-50. [DOI: 10.1016/j.cct.2015.02.011] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2014] [Revised: 02/26/2015] [Accepted: 02/28/2015] [Indexed: 01/21/2023]
|
96
|
Abstract
Cluster randomized trials randomize clusters of people rather than individuals, and they are becoming increasingly common. A number of innovations have been developed recently, particularly in the calculation of the required size of a cluster trial, the handling of missing data, designs to minimize recruitment bias, the ethics of cluster randomized trials and the stepped wedge design. This article will highlight and illustrate these developments. It will also discuss issues with regard to the reporting of cluster randomized trials.
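The simplest of the sample size calculations mentioned here inflates an individually randomised sample size by the design effect 1 + (m − 1) × ICC, where m is the average cluster size. The short Python sketch below works through that textbook formula with assumed inputs; it is illustrative only.

```python
# A minimal sketch (standard textbook design effect; the inputs are assumptions)
# of how an individually randomised sample size is inflated for a parallel CRT.
import math

def crt_sample_size(n_individual, cluster_size, icc):
    """Clusters and participants needed per arm after applying 1 + (m - 1) * ICC."""
    design_effect = 1 + (cluster_size - 1) * icc
    n_total = n_individual * design_effect
    n_clusters = math.ceil(n_total / cluster_size)     # round up to whole clusters
    return n_clusters, n_clusters * cluster_size

# Example: 200 per arm under individual randomisation, clusters of 30, ICC = 0.05
# -> design effect 2.45, so 17 clusters (510 participants) per arm.
print(crt_sample_size(n_individual=200, cluster_size=30, icc=0.05))
```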
Collapse
|
97
|
Carnes M, Devine PG, Baier Manwell L, Byars-Winston A, Fine E, Ford CE, Forscher P, Isaac C, Kaatz A, Magua W, Palta M, Sheridan J. The effect of an intervention to break the gender bias habit for faculty at one institution: a cluster randomized, controlled trial. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2015; 90:221-30. [PMID: 25374039 PMCID: PMC4310758 DOI: 10.1097/acm.0000000000000552] [Citation(s) in RCA: 280] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
PURPOSE Despite sincere commitment to egalitarian, meritocratic principles, subtle gender bias persists, constraining women's opportunities for academic advancement. The authors implemented a pair-matched, single-blind, cluster randomized, controlled study of a gender-bias-habit-changing intervention at a large public university. METHOD Participants were faculty in 92 departments or divisions at the University of Wisconsin-Madison. Between September 2010 and March 2012, experimental departments were offered a gender-bias-habit-changing intervention as a 2.5-hour workshop. Surveys measured gender bias awareness; motivation, self-efficacy, and outcome expectations to reduce bias; and gender equity action. A timed word categorization task measured implicit gender/leadership bias. Faculty completed a work-life survey before and after all experimental departments received the intervention. Control departments were offered workshops after data were collected. RESULTS Linear mixed-effects models showed significantly greater changes post intervention for faculty in experimental versus control departments on several outcome measures, including self-efficacy to engage in gender-equity-promoting behaviors (P = .013). When ≥ 25% of a department's faculty attended the workshop (26 of 46 departments), significant increases in self-reported action to promote gender equity occurred at three months (P = .007). Post intervention, faculty in experimental departments expressed greater perceptions of fit (P = .024), valuing of their research (P = .019), and comfort in raising personal and professional conflicts (P = .025). CONCLUSIONS An intervention that facilitates intentional behavioral change can help faculty break the gender bias habit and change department climate in ways that should support the career advancement of women in academic medicine, science, and engineering.
Collapse
Affiliation(s)
- Molly Carnes
- Dr. Carnes is director, Center for Women's Health Research, professor, Departments of Medicine, Psychiatry, and Industrial & Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, and part-time physician, William S. Middleton Memorial Veterans Hospital, Madison, Wisconsin. Dr. Devine is professor and chair, Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin. Ms. Baier Manwell is a research administrator, Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin, and national training coordinator for women's health services, Veterans Health Administration Central Office, Washington, DC. Dr. Byars-Winston is associate professor, Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Fine is a researcher, Women in Science and Engineering Leadership Institute, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Ford is professor, Departments of English and Sociology, University of Wisconsin-Madison, Madison, Wisconsin. Mr. Forscher is a graduate student, Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Isaac is assistant professor, Mercer University, Atlanta, Georgia. Dr. Kaatz is assistant scientist, Center for Women's Health Research, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Magua is a postdoctoral fellow, Center for Women's Health Research, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Palta is professor, Departments of Biostatistics and Population Health Science, University of Wisconsin-Madison, Madison, Wisconsin. Dr. Sheridan is executive and research director, Women in Science and Engineering Leadership Institute, University of Wisconsin-Madison, Madison, Wisconsin
| | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|
98
|
Diaz-Ordaz K, Froud R, Sheehan B, Eldridge S. A systematic review of cluster randomised trials in residential facilities for older people suggests how to improve quality. BMC Med Res Methodol 2013; 13:127. [PMID: 24148859 PMCID: PMC4015673 DOI: 10.1186/1471-2288-13-127] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2013] [Accepted: 10/10/2013] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Previous reviews of cluster randomised trials have been critical of the quality of the trials reviewed, but none has explored determinants of the quality of these trials in a specific field over an extended period of time. Recent work suggests that correct conduct and reporting of these trials may require more than published guidelines. In this review, our aim was to assess the quality of cluster randomised trials conducted in residential facilities for older people, and to determine whether (1) statistician involvement in the trial and (2) strength of journal endorsement of the Consolidated Standards of Reporting Trials (CONSORT) statement influence quality. METHODS We systematically identified trials randomising residential facilities for older people, or parts thereof, without language restrictions, up to the end of 2010, using the National Library of Medicine (Medline) via PubMed and hand-searching. We based quality assessment criteria largely on the extended CONSORT statement for cluster randomised trials. We assessed statistician involvement based on statistician co-authorship, and strength of journal endorsement of the CONSORT statement from journal websites. RESULTS 73 trials met our inclusion criteria. Of these, 20 (27%) reported accounting for clustering in sample size calculations and 54 (74%) in the analyses. In 29 trials (40%), the methods used to identify and recruit participants were judged by us to have potentially caused bias, or the reporting was too unclear to reach a conclusion. Some elements of quality improved over time, but this appeared unrelated to the publication of the extended CONSORT statement for these trials. Trials with statistician/epidemiologist co-authors were more likely to account for clustering in sample size calculations (unadjusted odds ratio 5.4, 95% confidence interval 1.1 to 26.0) and analyses (unadjusted OR 3.2, 1.2 to 8.5). Journal endorsement of the CONSORT statement was not associated with trial quality. CONCLUSIONS Despite international attempts to improve methods in cluster randomised trials, important quality limitations remain amongst these trials in residential facilities. Statistician involvement on trial teams may be more effective in promoting quality than further journal endorsement of the CONSORT statement. Funding bodies and journals should promote statistician involvement and co-authorship in addition to adherence to CONSORT guidelines.
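For context, unadjusted odds ratios with 95% confidence intervals like those quoted above (for example, 5.4; 1.1 to 26.0) are computed from a 2 × 2 cross-tabulation with a Wald interval on the log odds ratio. The Python sketch below works through that calculation with hypothetical counts; the review's underlying table is not reproduced here.

```python
# A minimal sketch (hypothetical counts, not the review's data) of the unadjusted
# odds ratio and 95% Wald confidence interval behind statements such as
# "trials with statistician co-authors were more likely to account for clustering".
import math

# Rows: statistician co-author yes/no; columns: accounted for clustering yes/no.
a, b = 16, 24    # co-author: accounted / did not account   (hypothetical)
c, d = 4, 29     # no co-author: accounted / did not account (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f} to {hi:.1f}")
```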
Collapse
Affiliation(s)
- Karla Diaz-Ordaz
- Centre for Primary Care and Public Health, Queen Mary University of London, London, E1 2AB, UK.
| | | | | | | |
Collapse
|
99
|
Shergis JL, Zhang AL, Zhou W, Xue CC. Quality and risk of bias in Panax ginseng randomized controlled trials: a review. THE AMERICAN JOURNAL OF CHINESE MEDICINE 2013; 41:231-52. [PMID: 23548116 DOI: 10.1142/s0192415x13500171] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Panax ginseng is one of the most frequently used herbs in the world. Numerous trials have evaluated its clinical benefits. However, the quality of these studies has not been comprehensively and systematically assessed. We reviewed randomized controlled trials (RCTs) of Panax ginseng to evaluate their quality and risk of bias. We searched four English-language databases, without publication date restriction. Two reviewers extracted details about the studies' methodological quality, guided by the Consolidated Standards of Reporting Trials (CONSORT) checklist and its extension for herbal interventions. Risk of bias was determined using the Cochrane Risk of Bias tool. Of 475 potentially relevant studies, 58 met our inclusion criteria. In these 58 studies, 48.3% of the suggested CONSORT checklist items and 35.9% of the extended herbal items were reported. The quality of RCTs improved after the CONSORT checklist was published. Until 1995 (before CONSORT; n = 4), 32.8% of the items were reported; from 1996 to 2006 (CONSORT published and revised; n = 30), 46.1% were reported; and from 2007 onwards (n = 24), 53.5% were reported (p = 0.005). After the CONSORT extension for herbal interventions was published in 2006, RCT quality also improved, although not significantly: until 2005 (n = 34), 35.2% of the extended herbal items were reported; from 2006 onwards (n = 24), 37.3% were reported (p = 0.64). Risk of bias was most often rated as "unclear". Overall, the quality of Panax ginseng RCT methodology has improved since the CONSORT checklist was introduced. However, more can be done to improve the methodological quality of, and reporting in, RCTs.
Collapse
Affiliation(s)
- Johannah L Shergis
- Traditional and Complementary Medicine Research Program, School of Health Sciences and Health Innovations Research Institute (HIRi), RMIT University, Bundoora, VIC 3083, Australia
| | | | | | | |
Collapse
|
100
|
Bastuji-Garin S, Sbidian E, Gaudy-Marqueste C, Ferrat E, Roujeau JC, Richard MA, Canoui-Poitrine F. Impact of STROBE statement publication on quality of observational study reporting: interrupted time series versus before-after analysis. PLoS One 2013; 8:e64733. [PMID: 23990867 PMCID: PMC3753332 DOI: 10.1371/journal.pone.0064733] [Citation(s) in RCA: 63] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2012] [Accepted: 04/17/2013] [Indexed: 01/24/2023] Open
Abstract
BACKGROUND In uncontrolled before-after studies, CONSORT was shown to improve the reporting of randomised trials. However, before-after studies ignore underlying secular trends and may overestimate the impact of interventions. Our aim was to assess the impact of the 2007 publication of the STROBE statement on the quality of observational study reporting, using both uncontrolled before-after analyses and interrupted time series. METHODS For this quasi-experimental study, we selected original articles reporting cohort, case-control, and cross-sectional studies published between 2004 and 2010 in the four dermatological journals with the highest 5-year impact factors (≥4). We compared the proportion of STROBE items adequately reported in each article (the STROBE score) across three periods: two pre-STROBE periods (2004-2005 and 2006-2007) and one post-STROBE period (2008-2010). Segmented regression analysis of the interrupted time series was also performed. RESULTS Of the 456 included articles, 187 (41%) reported cohort studies, 166 (36.4%) cross-sectional studies, and 103 (22.6%) case-control studies. The median STROBE score was 57% (range, 18%-98%). Before-after analysis showed significant STROBE score increases between the two pre-STROBE periods and between the earliest pre-STROBE period and the post-STROBE period (median score 48% in 2004-05 versus 58% in 2008-10, p<0.001) but not between the immediate pre-STROBE period and the post-STROBE period (median score 58% in 2006-07 versus 58% in 2008-10, p = 0.42). In the pre-STROBE period, the six-monthly mean STROBE score increased significantly, by 1.19% per six-month period (95% CI for the absolute increase, 0.26% to 2.11%; p = 0.016). In the segmented analysis, no significant change in the STROBE score trend occurred after publication of the STROBE statement (-0.40%; 95% CI, -2.20 to 1.41; p = 0.64). INTERPRETATION The quality of reports increased over time but was not affected by STROBE. Our findings raise concerns about the relevance of uncontrolled before-after analyses for estimating the impact of guidelines.
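Segmented regression of an interrupted time series estimates an immediate level change and a change in slope at the interruption on top of the underlying secular trend, which is what distinguishes it from a simple before-after comparison. The Python sketch below illustrates the model on simulated six-monthly reporting scores; the data, period count and effect sizes are assumptions, not the study's data.

```python
# A minimal sketch (simulated data, illustrative only) of segmented regression for
# an interrupted time series: a level-change term and a slope-change term are added
# at the point the guideline is published, on top of the underlying secular trend.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
periods = np.arange(1, 15)                 # 14 six-month periods
interruption = 8                           # guideline published before period 8

df = pd.DataFrame({
    "time": periods,
    "post": (periods >= interruption).astype(int),
    "time_since": np.clip(periods - interruption + 1, 0, None),
})
# Simulated reporting score: a pre-existing upward trend and no true guideline effect.
df["score"] = 50 + 1.2 * df["time"] + rng.normal(0, 1.5, len(df))

fit = smf.ols("score ~ time + post + time_since", data=df).fit()
print(fit.params)   # 'post' = immediate level change, 'time_since' = slope change
```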
Collapse
Affiliation(s)
- Sylvie Bastuji-Garin
- Université Paris Est Créteil (UPEC), LIC EA4393 (Laboratoire d'Investigation Clinique), Créteil, France
- AP-HP, Hôpital Henri-Mondor, Department of Clinical Research and Public Health, Créteil, France
- AP-HP, Hôpital Henri-Mondor, Unité de Recherche Clinique (URC), Créteil, France
| | - Emilie Sbidian
- Université Paris Est Créteil (UPEC), LIC EA4393 (Laboratoire d'Investigation Clinique), Créteil, France
- AP-HP, Hôpital Henri-Mondor, Department of Dermatology, Créteil, France
| | | | - Emilie Ferrat
- Université Paris Est Créteil (UPEC), LIC EA4393 (Laboratoire d'Investigation Clinique), Créteil, France
- Université Paris Est Créteil (UPEC), Faculté de Medecine, Department of General Practice, Créteil, France
| | | | | | - Florence Canoui-Poitrine
- Université Paris Est Créteil (UPEC), LIC EA4393 (Laboratoire d'Investigation Clinique), Créteil, France
- AP-HP, Hôpital Henri-Mondor, Department of Clinical Research and Public Health, Créteil, France
| | | |
Collapse
|