1
Sperger J, Kosorok MR, Linnan L, Kneipp SM. Multilevel Intervention Stepped Wedge Designs (MLI-SWDs). Prev Sci 2024; 25:371-383. [PMID: 38748315 PMCID: PMC11239753 DOI: 10.1007/s11121-024-01657-y]
Abstract
Multilevel interventions (MLIs) hold promise for reducing health inequities by intervening at multiple types of social determinants of health, consistent with the socioecological model of health. Despite their potential, methodological challenges related to study design, compounded by a lack of tools for sample size calculation, inhibit their development. We help address this gap by proposing the Multilevel Intervention Stepped Wedge Design (MLI-SWD), a hybrid experimental design that combines cluster-level (CL) randomization using a stepped wedge design (SWD) with independent individual-level (IL) randomization. The MLI-SWD is suitable for MLIs where the IL intervention has a low risk of interference between individuals in the same cluster, and it enables estimation of the component IL and CL treatment effects, their interaction, and the combined intervention effect. The MLI-SWD accommodates cross-sectional and cohort designs as well as both incomplete (clusters are not observed in every study period) and complete observation patterns. We adapt recent work using generalized estimating equations for SWD sample size calculation to the multilevel setting and provide an R package for power and sample size calculation. Furthermore, motivated by our experiences with the ongoing NC Works 4 Health study, we consider how to apply the MLI-SWD when individuals join clusters over the course of the study. This situation arises when unemployment MLIs include IL interventions that are delivered while the individual is unemployed. This extension requires carefully considering whether the study interventions will satisfy additional causal assumptions but could permit randomization in new settings.
Affiliation(s)
- John Sperger
- Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill, Chapel Hill, USA.
- Michael R Kosorok
- Department of Biostatistics, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill, Chapel Hill, USA
- Laura Linnan
- Department of Health Behavior, Gillings School of Global Public Health, The University of North Carolina at Chapel Hill, Chapel Hill, USA
- Shawn M Kneipp
- School of Nursing, The University of North Carolina at Chapel Hill, Chapel Hill, USA
2
Liu J, Li F. Optimal designs using generalized estimating equations in cluster randomized crossover and stepped wedge trials. Stat Methods Med Res 2024:9622802241247717. [PMID: 38813761 DOI: 10.1177/09622802241247717]
Abstract
Cluster randomized crossover and stepped wedge cluster randomized trials are two types of longitudinal cluster randomized trials that leverage both within- and between-cluster comparisons to estimate the treatment effect and are increasingly used in healthcare delivery and implementation science research. While variance expressions for the estimated treatment effect have previously been developed from the method of generalized estimating equations for analyzing cluster randomized crossover trials and stepped wedge cluster randomized trials, little guidance has been provided on optimal designs to ensure maximum efficiency. Here, an optimal design refers to the combination of optimal cluster-period size and optimal number of clusters that provides the smallest variance of the treatment effect estimator, or maximum efficiency, under a fixed total budget. In this work, we develop optimal designs for multiple-period cluster randomized crossover trials and stepped wedge cluster randomized trials with continuous outcomes, including both closed-cohort and repeated cross-sectional sampling schemes. Local optimal design algorithms are proposed when the correlation parameters in the working correlation structure are known. MaxiMin optimal design algorithms are proposed when the exact values are unavailable but investigators can specify a range of correlation values. Closed-form formulae for the local optimal design and the MaxiMin optimal design are derived for multiple-period cluster randomized crossover trials, where the cluster-period size and number of clusters are allowed to be non-integer (decimal). The decimal estimates from the closed-form formulae can then be used to assess the performance of the integer estimates from the local optimal design and MaxiMin optimal design algorithms. One unique contribution of this work, compared to previous optimal design research, is that we adopt constrained optimization techniques to obtain integer estimates under the MaxiMin optimal design. To assist practical implementation, we also develop four SAS macros to find local optimal designs and MaxiMin optimal designs.
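The kind of budget-constrained search formalised here can be sketched in a few lines. This is a generic illustration only: the variance proxy below is the classic design-effect form for a parallel design, standing in for the exact GEE-based variance expressions the authors derive, and the function name and cost values are hypothetical.

```python
def optimal_design(budget, c_cluster, c_subject, rho, max_m=200):
    """Grid-search the integer pair (number of clusters n, cluster-period
    size m) that minimises a variance proxy subject to the budget
    constraint n * (c_cluster + c_subject * m) <= budget.
    The proxy (1 + (m - 1) * rho) / (n * m) is the classic design-effect
    form; the paper derives the exact expressions for CRXO/SW designs."""
    best = None
    for m in range(1, max_m + 1):
        n = int(budget // (c_cluster + c_subject * m))  # max affordable clusters
        if n < 2:
            continue  # need at least two clusters
        var = (1 + (m - 1) * rho) / (n * m)
        if best is None or var < best[0]:
            best = (var, n, m)
    return best

# illustrative costs: 500 per cluster, 10 per subject, total budget 10000
best_var, n_opt, m_opt = optimal_design(budget=10000, c_cluster=500,
                                        c_subject=10, rho=0.05)
```

The same mechanics (enumerate feasible integer designs, evaluate the variance, keep the minimum) underlie the constrained-optimization approach, with the design-specific variance substituted for the proxy.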
Affiliation(s)
- Jingxia Liu
- Division of Public Health Sciences, Department of Surgery and Division of Biostatistics, Washington University School of Medicine, St. Louis, MO, USA
- Fan Li
- Department of Biostatistics, Yale University, New Haven, CT, USA
3
Hooper R, Quintin O, Kasza J. Efficient designs for three-sequence stepped wedge trials with continuous recruitment. Clin Trials 2024:17407745241251780. [PMID: 38773924 DOI: 10.1177/17407745241251780]
Abstract
BACKGROUND/AIMS The standard approach to designing stepped wedge trials that recruit participants in a continuous stream is to divide time into periods of equal length. But the choice of design in such cases is infinitely more flexible: each cluster could cross from the control to the intervention condition at any point on the continuous time-scale. We consider the case of a stepped wedge design with clusters randomised to just three sequences (designs with small numbers of sequences may be preferred for their simplicity and practicality) and investigate the choice of design that minimises the variance of the treatment effect estimator under different assumptions about the intra-cluster correlation. METHODS We make some simplifying assumptions in order to calculate the variance: in particular that we recruit the same number of participants, m, from each cluster over the course of the trial, and that participants present at regularly spaced intervals. We consider an intra-cluster correlation that decays exponentially with the separation in time between the presentation of two individuals from the same cluster, from a value of ρ for two individuals who present at the same time to a value of ρτ for individuals presenting at the start and end of the trial recruitment interval. We restrict attention to three-sequence designs with centrosymmetry - the property that if we reverse time and swap the intervention and control conditions then the design looks the same. We obtain an expression for the variance of the treatment effect estimator adjusted for effects of time, using methods for generalised least squares estimation, and we evaluate this expression numerically for different designs and different parameter values. RESULTS There is a two-dimensional space of possible three-sequence, centrosymmetric stepped wedge designs with continuous recruitment. The variance of the treatment effect estimator for given ρ and τ can be plotted as a contour map over this space. The shape of this variance surface depends on τ and on the parameter mρ/(1 - ρ), but typically indicates a broad, flat region of close-to-optimal designs. The 'standard' design with equally spaced periods and 1:1:1 allocation rarely performs well, however. CONCLUSIONS In many different settings, a relatively simple design can be found (e.g. one based on simple fractions) that offers close-to-optimal efficiency in that setting. There may also be designs that are robustly efficient over a wide range of settings. Contour maps of the kind we illustrate can help guide this choice. If efficiency is offered as one of the justifications for using a stepped wedge design, then it is worth designing with optimal efficiency in mind.
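The decaying intra-cluster correlation described here is straightforward to construct explicitly. A minimal sketch (an illustration under the stated assumptions, not the authors' code), with the recruitment interval scaled to [0, 1] so that the correlation falls from ρ for simultaneous presentations to ρτ for presentations at opposite ends of the interval:

```python
import numpy as np

def decaying_corr_matrix(times, rho, tau):
    """Within-cluster correlation that decays exponentially with the time
    separation between presentations: corr = rho * tau**|t_i - t_j| on a
    recruitment interval scaled to [0, 1], so same-time pairs have
    correlation rho and pairs at the interval endpoints have rho * tau."""
    t = np.asarray(times, dtype=float)
    sep = np.abs(t[:, None] - t[None, :])    # pairwise time separations
    R = rho * tau ** sep
    np.fill_diagonal(R, 1.0)                 # each outcome correlates 1 with itself
    return R

# m participants presenting at regularly spaced intervals over the trial
m, rho, tau = 5, 0.05, 0.5
times = np.linspace(0.0, 1.0, m)
R = decaying_corr_matrix(times, rho, tau)
```

A matrix like `R` is what enters the generalised least squares variance expression that the authors evaluate numerically over the design space.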
Affiliation(s)
- Richard Hooper
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Olivier Quintin
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
4
Nevins P, Ryan M, Davis-Plourde K, Ouyang Y, Macedo JAP, Meng C, Tong G, Wang X, Ortiz-Reyes L, Caille A, Li F, Taljaard M. Adherence to key recommendations for design and analysis of stepped-wedge cluster randomized trials: A review of trials published 2016-2022. Clin Trials 2024; 21:199-210. [PMID: 37990575 PMCID: PMC11003836 DOI: 10.1177/17407745231208397]
Abstract
BACKGROUND/AIMS The stepped-wedge cluster randomized trial (SW-CRT), in which clusters are randomized to a time at which they will transition to the intervention condition - rather than a trial arm - is a relatively new design. SW-CRTs have additional design and analytical considerations compared to conventional parallel arm trials. To inform future methodological development, including guidance for trialists and the selection of parameters for statistical simulation studies, we conducted a review of recently published SW-CRTs. Specific objectives were to describe (1) the types of designs used in practice, (2) adherence to key requirements for statistical analysis, and (3) practices around covariate adjustment. We also examined changes in adherence over time and by journal impact factor. METHODS We used electronic searches to identify primary reports of SW-CRTs published 2016-2022. Two reviewers extracted information from each trial report and its protocol, if available, and resolved disagreements through discussion. RESULTS We identified 160 eligible trials, randomizing a median (Q1-Q3) of 11 (8-18) clusters to 5 (4-7) sequences. The majority (122, 76%) were cross-sectional (almost all with continuous recruitment), 23 (14%) were closed cohorts and 15 (9%) open cohorts. Many trials had complex design features such as multiple or multivariate primary outcomes (50, 31%) or time-dependent repeated measures (27, 22%). The most common type of primary outcome was binary (51%); continuous outcomes were less common (26%). The most frequently used method of analysis was a generalized linear mixed model (112, 70%); generalized estimating equations were used less frequently (12, 8%). Among 142 trials with fewer than 40 clusters, only 9 (6%) reported using methods appropriate for a small number of clusters. Statistical analyses clearly adjusted for time effects in 119 (74%), for within-cluster correlations in 132 (83%), and for distinct between-period correlations in 13 (8%). Covariates were included in the primary analysis of the primary outcome in 82 (51%) and were most often individual-level covariates; however, clear and complete pre-specification of covariates was uncommon. Adherence to some key methodological requirements (adjusting for time effects, accounting for within-period correlation) was higher among trials published in higher versus lower impact factor journals. Substantial improvements over time were not observed, although a slight improvement was observed in the proportion accounting for a distinct between-period correlation. CONCLUSIONS Future methods development should prioritize methods for SW-CRTs with binary or time-to-event outcomes, small numbers of clusters, continuous recruitment designs, multivariate outcomes, or time-dependent repeated measures. Trialists, journal editors, and peer reviewers should be aware that SW-CRTs have additional methodological requirements over parallel arm designs, including the need to account for period effects as well as complex intracluster correlations.
Affiliation(s)
- Pascale Nevins
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Mary Ryan
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Kendra Davis-Plourde
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, CT, USA
- Yongdong Ouyang
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Can Meng
- Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, CT, USA
- Guangyu Tong
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale University, New Haven, CT, USA
- Xueqi Wang
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Section of Geriatrics, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Luis Ortiz-Reyes
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Agnès Caille
- Université de Tours, Université de Nantes, INSERM, SPHERE U1246, Tours, France
- INSERM CIC 1415, CHRU de Tours, Tours, France
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale University, New Haven, CT, USA
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
5
Hemming K, Copas A, Forbes A, Kasza J. What type of cluster randomized trial for which setting? J Epidemiol Popul Health 2024; 72:202195. [PMID: 38477476 DOI: 10.1016/j.jeph.2024.202195]
Abstract
The cluster randomized trial allows a randomized evaluation when it is either not possible to randomize the individual or randomizing individuals would put the trial at high risk of contamination across treatment arms. There are many variations of the cluster randomized design, including the parallel design with or without baseline measures, the cluster randomized cross-over design, the stepped-wedge cluster randomized design, and more recently developed variants such as the batched stepped-wedge design and the staircase design. Once it has been clearly established that there is a need for cluster randomization, an important question is which form the cluster randomized design should take. If a design in which time is split into multiple trial periods is to be adopted (e.g. as in a stepped-wedge design), researchers must decide whether the same participants should be measured in multiple trial periods (cohort sampling) or whether different participants should be measured in each period (continual recruitment or cross-sectional sampling). Here we outline the different possible options and weigh up the pros and cons of the different design choices, which revolve around statistical efficiency, study logistics and the assumptions required.
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Andrew Copas
- MRC Clinical Trials Unit at University College London, London, UK
- Andrew Forbes
- School of Public Health and Preventive Medicine, Monash University, Victoria, Australia
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Victoria, Australia
6
Kasza J, Bowden R, Ouyang Y, Taljaard M, Forbes AB. Does it decay? Obtaining decaying correlation parameter values from previously analysed cluster randomised trials. Stat Methods Med Res 2023; 32:2123-2134. [PMID: 37589088 PMCID: PMC10683336 DOI: 10.1177/09622802231194753]
Abstract
A frequently applied assumption in the analysis of data from cluster randomised trials is that the outcomes from all participants within a cluster are equally correlated. That is, the intracluster correlation, which describes the degree of dependence between outcomes from participants in the same cluster, is the same for each pair of participants in a cluster. However, recent work has discussed the importance of allowing for this correlation to decay as the time between the measurement of participants in a cluster increases. Incorrect omission of such a decay can lead to under-powered studies, and confidence intervals for estimated treatment effects can be too narrow or too wide, depending on the characteristics of the design. When planning studies, researchers often rely on previously reported analyses of trials to inform their choice of intracluster correlation. However, most reported analyses of clustered data do not incorporate a correlation decay. Thus, often all that is available are estimates of intracluster correlations obtained under the potentially incorrect assumption of no decay. In this article, we show that it is possible to use intracluster correlation values obtained from models that incorrectly omit a decay to inform plausible choices of decaying correlations. Our focus is on intracluster correlation estimates for continuous outcomes obtained by fitting linear mixed models with exchangeable or block-exchangeable correlation structures. We describe how plausible values for decaying correlations may be obtained given these estimated intracluster correlations. An online app is presented that allows users to obtain plausible values of the decay, which can be used at the trial planning stage to assess the sensitivity of sample size and power calculations to decaying correlation structures.
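One crude way to see how a no-decay ICC estimate can inform a decaying structure is to treat the exchangeable estimate as an average of the true decaying pairwise correlations and back-solve for the same-time correlation. The averaging assumption is a simplification made for this sketch only; it is not the model-based mapping the authors derive and implement in their online app, and the function name is hypothetical.

```python
import numpy as np

def implied_base_icc(icc_exch, decay, n_periods):
    """Heuristic back-calculation: if the true correlation between
    outcomes measured in periods j and k is rho0 * decay**|j-k| but an
    exchangeable (no-decay) model was fitted, treat the reported ICC as
    roughly the average of those pairwise correlations over all period
    pairs and solve for the same-time correlation rho0."""
    lags = np.abs(np.subtract.outer(np.arange(n_periods), np.arange(n_periods)))
    return icc_exch / np.mean(decay ** lags)

# an exchangeable ICC of 0.05 from a 5-period trial, under an assumed
# decay of 0.8 per period, implies a larger same-period correlation
rho0 = implied_base_icc(0.05, decay=0.8, n_periods=5)
```

The qualitative message matches the abstract: a non-decay estimate understates the same-time correlation whenever the correlation truly decays, so sensitivity analyses at the planning stage should consider a range of (base correlation, decay) pairs.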
Affiliation(s)
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
- Rhys Bowden
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
- Yongdong Ouyang
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
- Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
7
Ouyang Y, Li F, Preisser JS, Taljaard M. Sample size calculators for planning stepped-wedge cluster randomized trials: a review and comparison. Int J Epidemiol 2022; 51:2000-2013. [PMID: 35679584 PMCID: PMC9749719 DOI: 10.1093/ije/dyac123]
Abstract
Recent years have seen a surge of interest in stepped-wedge cluster randomized trials (SW-CRTs). SW-CRTs include several design variations and methodology is rapidly developing. Accordingly, a variety of power and sample size calculation software for SW-CRTs has been developed. However, each calculator may support only a selected set of design features and may not be appropriate for all scenarios. Currently, there is no resource to assist researchers in selecting the most appropriate calculator for planning their trials. In this paper, we review and classify 18 existing calculators that can be implemented in major platforms, such as R, SAS, Stata, Microsoft Excel, PASS and nQuery. After reviewing the main sample size considerations for SW-CRTs, we summarize the features supported by the available calculators, including the types of designs, outcomes, correlation structures and treatment effects; whether incomplete designs, cluster-size variation or secular trends are accommodated; and the analytical approach used. We then discuss in more detail four main calculators and identify their strengths and limitations. We illustrate how to use these four calculators to compute power for two real SW-CRTs with a continuous and binary outcome and compare the results. We show that the choice of calculator can make a substantial difference in the calculated power and explain these differences. Finally, we make recommendations for implementing sample size or power calculations using the available calculators. An R Shiny app is available for users to select the calculator that meets their requirements (https://douyang.shinyapps.io/swcrtcalculator/).
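As a sense of what such calculators compute, here is a minimal power calculation for a standard stepped-wedge design under the Hussey-Hughes linear mixed model with exchangeable within-cluster correlation, via generalised least squares on cluster-period means. The function name, defaults, and example values are illustrative and do not correspond to any particular calculator reviewed.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def sw_power(clusters_per_seq, m, icc, delta, sigma2=1.0, z_crit=1.96):
    """Approximate power for a standard stepped-wedge design under the
    Hussey-Hughes model (exchangeable within-cluster correlation), by
    generalised least squares on cluster-period means. clusters_per_seq
    gives the clusters randomised to each of S sequences; the design has
    T = S + 1 periods and sequence s (0-indexed) crosses over at period
    s + 1. sigma2 is the within-cluster (residual) variance; z_crit is
    the two-sided critical value (1.96 for a 5% significance level)."""
    S = len(clusters_per_seq)
    T = S + 1
    sigma_b2 = icc * sigma2 / (1.0 - icc)      # between-cluster variance
    # covariance of one cluster's T cluster-period means
    V = (sigma2 / m) * np.eye(T) + sigma_b2 * np.ones((T, T))
    Vinv = np.linalg.inv(V)
    info = np.zeros((T + 1, T + 1))            # T period effects + treatment
    for s, n_c in enumerate(clusters_per_seq):
        Z = np.zeros((T, T + 1))
        Z[:, :T] = np.eye(T)                   # period fixed effects
        Z[s + 1:, T] = 1.0                     # on intervention from period s+1
        info += n_c * (Z.T @ Vinv @ Z)
    var_theta = np.linalg.inv(info)[T, T]      # variance of treatment effect
    return norm_cdf(abs(delta) / np.sqrt(var_theta) - z_crit)

# e.g. 4 sequences of 3 clusters each, 20 participants per cluster-period
p = sw_power([3, 3, 3, 3], m=20, icc=0.05, delta=0.3)
```

Differences between calculators arise precisely in what this sketch fixes by assumption: the correlation structure, the treatment-effect model, and how time effects, incomplete designs, and cluster-size variation are handled.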
Affiliation(s)
- Yongdong Ouyang
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
- John S Preisser
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
8
Cadilhac DA, Marion V, Andrew NE, Breen SJ, Grabsch B, Purvis T, Morrison JL, Lannin NA, Grimley RS, Middleton S, Kilkenny MF. A Stepped-Wedge Cluster-Randomized Trial to Improve Adherence to Evidence-Based Practices for Acute Stroke Management. Jt Comm J Qual Patient Saf 2022; 48:653-664. [PMID: 36307360 DOI: 10.1016/j.jcjq.2022.09.003]
Abstract
BACKGROUND There is limited evidence regarding the optimal design and composition of multifaceted quality improvement programs to improve acute stroke care. The researchers aimed to test the effectiveness of a co-designed multifaceted intervention (STELAR: Shared Team Efforts Leading to Adherence Results) directed at hospital clinicians for improving acute stroke care tailored to the local context using feedback of national registry indicator data. METHODS STELAR was a stepped-wedge cluster trial (partial randomization) using routinely collected Australian Stroke Clinical Registry data from Victorian hospitals segmented in two-month blocks. Each hospital (cluster) contributed control data from May 2017 and data for the intervention phase from July 2017 until September 2018. The intervention was multifaceted, delivered predominantly in two educational outreach workshops by experienced, external improvement facilitators, consisting of (1) feedback of registry data to identify practice gaps and (2) interprofessional education, barrier assessment, and documentation of an agreed action plan initiated by local clinical leaders appointed as change champions for prioritized clinical indicators. The researchers provided additional outreach support by e-mail/telephone for two months. Multilevel, multivariable regression models were used to assess change in a composite outcome of indicators selected for action plans (primary outcome) and individual indicators (secondary outcome). Patient survival and disability 90-180 days after stroke were also compared. RESULTS Nine hospitals (clusters) participated, and 144 clinicians attended 18 intervention workshops. The control phase included 1,001 patients (median age 76.7 years; 47.4% female, 64.7% ischemic stroke), and the intervention phase 2,146 patients (median age 74.9 years; 44.2% female, 73.8% ischemic stroke). Compared to the control phase, the median score for the composite outcome for the intervention phase was 17% greater for the indicators included in the hospitals' action plans (range 3% to 30%, p = 0.016) and 6% greater overall for the 10 indicators (range 3% to 10%, p < 0.001). Compared to the control phase, patients in the intervention phase more often received stroke unit care (odds ratio [OR] 1.39, 95% confidence interval [CI] 1.05-1.84), were discharged on antithrombotic medications (OR 1.87, 95% CI 1.50-2.33), and received a discharge care plan (OR 1.27, 95% CI 1.05-1.53). Patient outcomes were unchanged. CONCLUSION External quality improvement facilitation using workshops and remote support, aligned with routine monitoring via registries, can improve acute stroke care.
9
Li F, Yu H, Rathouz PJ, Turner EL, Preisser JS. Marginal modeling of cluster-period means and intraclass correlations in stepped wedge designs with binary outcomes. Biostatistics 2022; 23:772-788. [PMID: 33527999 PMCID: PMC9291643 DOI: 10.1093/biostatistics/kxaa056]
Abstract
Stepped wedge cluster randomized trials (SW-CRTs) with binary outcomes are increasingly used in prevention and implementation studies. Marginal models represent a flexible tool for analyzing SW-CRTs with population-averaged interpretations, but the joint estimation of the mean and intraclass correlation coefficients (ICCs) can be computationally intensive due to large cluster-period sizes. Motivated by the need for marginal inference in SW-CRTs, we propose a simple and efficient estimating equations approach to analyze cluster-period means. We show that the quasi-score for the marginal mean defined from individual-level observations can be reformulated as the quasi-score for the same marginal mean defined from the cluster-period means. An additional mapping of the individual-level ICCs into correlations for the cluster-period means further provides a rigorous justification for the cluster-period approach. The proposed approach addresses a long-recognized computational burden associated with estimating equations defined based on individual-level observations, and enables fast point and interval estimation of the intervention effect and correlations. We further propose matrix-adjusted estimating equations to improve the finite-sample inference for ICCs. By providing a valid approach to estimate ICCs within the class of generalized linear models for correlated binary outcomes, this article operationalizes key recommendations from the CONSORT extension to SW-CRTs, including the reporting of ICCs.
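The computational gain from working with cluster-period means can be seen with a toy sketch (a generic illustration, not the authors' estimating-equations implementation): individual-level binary outcomes are collapsed to one summary per cluster-period, shrinking the objects the estimating equations must operate on.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulated individual-level binary outcomes in an SW-CRT layout
n_clusters, n_periods, m = 6, 4, 50          # m individuals per cluster-period
cluster = np.repeat(np.arange(n_clusters), n_periods * m)
period = np.tile(np.repeat(np.arange(n_periods), m), n_clusters)
y = rng.binomial(1, 0.3, size=cluster.size)

# collapse to cluster-period means: estimating equations built on these
# n_clusters x n_periods summaries involve far smaller working matrices
# than ones built on all n_clusters * n_periods * m individual outcomes
means = np.zeros((n_clusters, n_periods))
for c in range(n_clusters):
    for t in range(n_periods):
        means[c, t] = y[(cluster == c) & (period == t)].mean()
```

The paper's contribution is showing that this reduction loses nothing for marginal inference: the quasi-score based on the means matches the individual-level quasi-score, given an appropriate mapping of the individual-level ICCs to correlations among the means.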
Affiliation(s)
- Fan Li
- To whom correspondence should be addressed.
- Hengshi Yu
- Department of Biostatistics, University of Michigan, 1415 Washington Heights, Ann Arbor, MI 48109, USA
- Paul J Rathouz
- Department of Population Health, The University of Texas at Austin, Dell Medical School, 1601 Trinity St, Bldg. B, Austin, TX 78712, USA
- Elizabeth L Turner
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, 2424 Erwin Rd, Durham, NC 27710, USA
- John S Preisser
- Department of Biostatistics, University of North Carolina at Chapel Hill, 135 Dauer Drive, Chapel Hill, NC 27514, USA
10
Jacob RR, Parks RG, Allen P, Mazzucca S, Yan Y, Kang S, Dekker D, Brownson RC. How to "Start Small and Just Keep Moving Forward": Mixed Methods Results From a Stepped-Wedge Trial to Support Evidence-Based Processes in Local Health Departments. Front Public Health 2022; 10:853791. [PMID: 35570955 PMCID: PMC9096224 DOI: 10.3389/fpubh.2022.853791]
Abstract
Background Local health departments (LHDs) in the United States are charged with preventing disease and promoting health in their respective communities. Understanding and addressing the supports LHDs need to foster a climate and culture conducive to evidence-based decision making (EBDM) processes can enhance delivery of effective practices and services. Methods We employed a stepped-wedge trial design to test staggered delivery of implementation supports in 12 LHDs (Missouri, USA) to expand capacity for EBDM processes. The intervention was an in-person training in EBDM and continued support by the research team over 24 months (March 2018-February 2020). We used a mixed-methods approach to evaluate: (1) individuals' EBDM skills, (2) organizational supports for EBDM, and (3) administered evidence-based interventions. LHD staff completed a quantitative survey at 4 time points measuring their EBDM skills, organizational supports, and evidence-based interventions. We selected 4 LHDs with high contact and engagement during the intervention period to interview staff (n = 17) about facilitators and barriers to EBDM. We used mixed-effects linear regression to examine quantitative survey outcomes. Interviews were transcribed verbatim and coded through a dual independent process. Results Overall, 519 LHD staff were eligible and invited to complete quantitative surveys during control periods and 593 during intervention (365 unique individuals). A total of 434 completed during control and 492 during intervention (83.6 and 83.0% response, respectively). In both trial modes, half the participants had at least a master's degree (49.7-51.7%) and most were female (82.1-83.8%). No significant intervention effects were found in EBDM skills or in implementing evidence-based interventions. Two organizational support scores decreased in intervention vs. control periods: awareness (-0.14, 95% CI -0.26 to -0.01, p < 0.05) and climate cultivation (-0.14, 95% CI -0.27 to -0.02, p < 0.05), but both improved over time among all participants. Interviewees noted staff turnover and limited time, resources, and momentum as challenges to continuing EBDM work. Setting expectations, programmatic reviews, and pre-existing practices were seen as facilitators. Conclusions Challenges (e.g., turnover, resources) may disrupt LHDs' abilities to fully embed organizational processes that support EBDM. This study and the related literature provide insight into how best to support LHDs in building capacity to use and sustain evidence-based practices.
Affiliation(s)
- Rebekah R. Jacob
- Prevention Research Center, Brown School, Washington University in St. Louis, St. Louis, MO, United States
- Renee G. Parks
- Prevention Research Center, Brown School, Washington University in St. Louis, St. Louis, MO, United States
- Peg Allen
- Prevention Research Center, Brown School, Washington University in St. Louis, St. Louis, MO, United States
- Stephanie Mazzucca
- Prevention Research Center, Brown School, Washington University in St. Louis, St. Louis, MO, United States
- Yan Yan
- Division of Public Health Sciences, Department of Surgery, Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, United States
- Sarah Kang
- Fredrick S. Pardee RAND Graduate School, RAND Corporation, Santa Monica, CA, United States
- Debra Dekker
- National Association of County and City Health Officials, Washington, DC, United States
- Ross C. Brownson
- Prevention Research Center, Brown School, Washington University in St. Louis, St. Louis, MO, United States
- Division of Public Health Sciences, Department of Surgery, Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, United States
11
Hvidhjelm J, Brandt-Christensen M, Delcomyn C, Møllerhøj J, Siersma V, Bak J. Effects of Implementing the Short-Term Assessment of Risk and Treatability for Mechanical Restraint in a Forensic Male Population: A Stepped-Wedge, Cluster-Randomized Design. Front Psychiatry 2022; 13:822295. [PMID: 35280154 PMCID: PMC8907583 DOI: 10.3389/fpsyt.2022.822295]
Abstract
The assessment and formulation of the risk of violence and other unwanted behaviors at forensic psychiatric facilities have been attempted for decades. Structured professional judgment tools, such as the Short-Term Assessment of Risk and Treatability (START), are among the recent attempts to meet this challenge. This study examined the effect of implementing START in clinical practice on the most serious adverse events in the target group of severely mentally ill forensic psychiatric inpatients, using mechanical restraint as the outcome. The study is a pragmatic, stepped-wedge, cluster-randomized controlled trial conducted over 5 years across eight forensic psychiatric units. Fifty of 156 patients who had a basic aggression score of more than 0 were included. We found that the rate of mechanical restraint use was 82% lower within the START period than outside it [relative risk (RR) = 0.18]. Patients evaluated within the START period also had a 36% lower risk (RR = 0.64) of higher Brøset Violence Checklist scores than patients evaluated outside the START period. Previous studies on START have primarily focused on validation, the predictive capability of the assessment, and implementation; we identified only one study that aimed to quantify the benefits and outcomes of START in a forensic setting. This study showed a significant reduction in the probability of inpatients in a forensic psychiatric facility being mechanically restrained during periods in which START was used for risk assessment.
Affiliation(s)
- Jacob Hvidhjelm
- Clinical Mental Health and Nursing Research Unit, Mental Health Center Sct Hans, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
- Mette Brandt-Christensen
- Mental Health Centre Sct Hans, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
- Christian Delcomyn
- Mental Health Centre Sct Hans, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
- Jette Møllerhøj
- Head of Centre, Competence Centre for Forensic Psychiatry, Mental Health Centre Sct Hans, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
- Volkert Siersma
- The Research Unit for General Practice and Section of General Practice, Department of Public Health, University of Copenhagen, Copenhagen, Denmark
- Jesper Bak
- Clinical Mental Health and Nursing Research Unit, Mental Health Center Sct Hans, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
12
Korevaar E, Kasza J, Taljaard M, Hemming K, Haines T, Turner EL, Thompson JA, Hughes JP, Forbes AB. Intra-cluster correlations from the CLustered OUtcome Dataset bank to inform the design of longitudinal cluster trials. Clin Trials 2021; 18:529-540. [PMID: 34088230 DOI: 10.1177/17407745211020852]
Abstract
BACKGROUND Sample size calculations for longitudinal cluster randomised trials, such as crossover and stepped-wedge trials, require estimates of the assumed correlation structure. This includes both within-period intra-cluster correlations, which importantly differ from conventional intra-cluster correlations by their dependence on period, and also cluster autocorrelation coefficients to model correlation decay. There are limited resources to inform these estimates. In this article, we provide a repository of correlation estimates from a bank of real-world clustered datasets. These are provided under several assumed correlation structures, namely exchangeable, block-exchangeable and discrete-time decay correlation structures. METHODS Longitudinal studies with clustered outcomes were collected to form the CLustered OUtcome Dataset bank. Forty-four available continuous outcomes from 29 datasets were obtained and analysed using each correlation structure. Patterns of within-period intra-cluster correlation coefficient and cluster autocorrelation coefficients were explored by study characteristics. RESULTS The median within-period intra-cluster correlation coefficient for the discrete-time decay model was 0.05 (interquartile range: 0.02-0.09) with a median cluster autocorrelation of 0.73 (interquartile range: 0.19-0.91). The within-period intra-cluster correlation coefficients were similar for the exchangeable, block-exchangeable and discrete-time decay correlation structures. Within-period intra-cluster correlation coefficients and cluster autocorrelations were found to vary with the number of participants per cluster-period, the period-length, type of cluster (primary care, secondary care, community or school) and country income status (high-income country or low- and middle-income country). 
The within-period intra-cluster correlation coefficients tended to decrease with increasing period-length and slightly decrease with increasing cluster-period sizes, while the cluster autocorrelations tended to move closer to 1 with increasing cluster-period size. Using the CLustered OUtcome Dataset bank, an RShiny app has been developed for determining plausible values of correlation coefficients for use in sample size calculations. DISCUSSION This study provides a repository of intra-cluster correlations and cluster autocorrelations for longitudinal cluster trials. This can help inform sample size calculations for future longitudinal cluster randomised trials.
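As an illustration of the three correlation structures named in this abstract (our own sketch, not code from the article or its RShiny app; the function name and arguments are hypothetical), the following numpy snippet builds the implied within-cluster correlation matrix for one cluster observed over T periods with m participants per period:

```python
import numpy as np

def cluster_correlation(T, m, icc, cac, structure="block-exchangeable"):
    """Correlation matrix for one cluster: T periods, m individuals per period.

    icc: within-period intra-cluster correlation coefficient
    cac: cluster autocorrelation coefficient (between-period decay)
    This is an illustrative sketch of the standard definitions, not the
    article's code.
    """
    n = T * m
    period = np.repeat(np.arange(T), m)              # period index of each individual
    same_period = period[:, None] == period[None, :]
    lag = np.abs(period[:, None] - period[None, :])  # periods apart
    if structure == "exchangeable":
        R = np.full((n, n), float(icc))              # one correlation throughout
    elif structure == "block-exchangeable":
        R = np.where(same_period, icc, icc * cac)    # single step down between periods
    elif structure == "discrete-time-decay":
        R = icc * cac ** lag                         # geometric decay with lag
    else:
        raise ValueError(structure)
    np.fill_diagonal(R, 1.0)
    return R
```

For example, `cluster_correlation(4, 25, 0.05, 0.73)` would use the median within-period intra-cluster correlation and cluster autocorrelation reported above.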
Affiliation(s)
- Elizabeth Korevaar
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, ON, Canada
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Terry Haines
- School of Primary and Allied Health Care, Monash University, Melbourne, VIC, Australia
- Elizabeth L Turner
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
- Duke Global Health Institute, Durham, NC, USA
- Jennifer A Thompson
- Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
- James P Hughes
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
13
Hooper R, Copas AJ. Optimal design of cluster randomised trials with continuous recruitment and prospective baseline period. Clin Trials 2021; 18:147-157. [PMID: 33685241 PMCID: PMC8010895 DOI: 10.1177/1740774520976564]
Abstract
BACKGROUND Cluster randomised trials, like individually randomised trials, may benefit from a baseline period of data collection. We consider trials in which clusters prospectively recruit or identify participants as a continuous process over a given calendar period, and ask whether and for how long investigators should collect baseline data as part of the trial, in order to maximise precision. METHODS We show how to calculate and plot the variance of the treatment effect estimator for different lengths of baseline period in a range of scenarios, and offer general advice. RESULTS In some circumstances it is optimal not to include a baseline, while in others there is an optimal duration for the baseline. All other things being equal, the circumstances where it is preferable not to include a baseline period are those with a smaller recruitment rate, smaller intracluster correlation, greater decay in the intracluster correlation over time, or wider transition period between recruitment under control and intervention conditions. CONCLUSION The variance of the treatment effect estimator can be calculated numerically, and plotted against the duration of baseline to inform design. It would be of interest to extend these investigations to cluster randomised trial designs with more than two randomised sequences of control and intervention condition, including stepped wedge designs.
Affiliation(s)
- Richard Hooper
- Centre for Clinical Trials & Methodology, Institute of Population Health Sciences, Queen Mary University of London, London, UK
- Andrew J Copas
- MRC Clinical Trials Unit at University College London, London, UK
14
Li F, Hughes JP, Hemming K, Taljaard M, Melnick ER, Heagerty PJ. Mixed-effects models for the design and analysis of stepped wedge cluster randomized trials: An overview. Stat Methods Med Res 2021; 30:612-639. [PMID: 32631142 PMCID: PMC7785651 DOI: 10.1177/0962280220932962]
Abstract
The stepped wedge cluster randomized design has received increasing attention in pragmatic clinical trials and implementation science research. The key feature of the design is the unidirectional crossover of clusters from the control to intervention conditions on a staggered schedule, which induces confounding of the intervention effect by time. The stepped wedge design first appeared in the Gambia hepatitis study in the 1980s. However, the statistical model used for the design and analysis was not formally introduced until 2007 in an article by Hussey and Hughes. Since then, a variety of mixed-effects model extensions have been proposed for the design and analysis of these trials. In this article, we explore these extensions under a unified perspective. We provide a general model representation and regard various model extensions as alternative ways to characterize the secular trend, intervention effect, as well as sources of heterogeneity. We review the key model ingredients and clarify their implications for the design and analysis. The article serves as an entry point to the evolving statistical literature on stepped wedge designs.
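As a concrete illustration of the Hussey and Hughes model mentioned above (a minimal sketch of ours, not code from the article; function name and parameter values are assumptions), the variance of the GLS treatment-effect estimator can be computed directly from cluster-period means with an exchangeable within-cluster covariance:

```python
import numpy as np

def hh_variance(design, tau2, sigma2_e, m):
    """Variance of the GLS treatment-effect estimator under a
    Hussey & Hughes-style model, working with cluster-period means.

    design: I x T array of 0/1 intervention indicators
    tau2: cluster random-effect variance
    sigma2_e: individual-level residual variance
    m: individuals per cluster-period
    Illustrative sketch only.
    """
    I, T = design.shape
    s2 = sigma2_e / m                       # variance of a cluster-period mean
    # covariance of one cluster's T period means: exchangeable structure
    Vc = np.full((T, T), tau2) + s2 * np.eye(T)
    Vinv = np.linalg.inv(Vc)
    # fixed effects: intercept, T-1 period effects, treatment effect
    p = T + 1
    XtVX = np.zeros((p, p))
    for i in range(I):
        Xi = np.zeros((T, p))
        Xi[:, 0] = 1.0                                 # intercept
        Xi[np.arange(1, T), np.arange(1, T)] = 1.0     # period dummies
        Xi[:, T] = design[i]                           # treatment indicator
        XtVX += Xi.T @ Vinv @ Xi
    return np.linalg.inv(XtVX)[T, T]
```

For a standard complete stepped wedge with I clusters and T = I + 1 periods, the design matrix is a lower staircase: `design = (np.arange(T)[None, :] > np.arange(I)[:, None]).astype(float)`.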
Affiliation(s)
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Preventive Science, Yale University, New Haven, CT, USA
- James P Hughes
- Department of Biostatistics, School of Public Health, University of Washington, Seattle, WA, USA
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Edward R. Melnick
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT, USA
- Patrick J Heagerty
- Department of Biostatistics, School of Public Health, University of Washington, Seattle, WA, USA
15
Kasza J, Bowden R, Forbes AB. Information content of stepped wedge designs with unequal cluster-period sizes in linear mixed models: Informing incomplete designs. Stat Med 2021; 40:1736-1751. [PMID: 33438255 DOI: 10.1002/sim.8867]
Abstract
In practice, stepped wedge trials frequently include clusters of differing sizes. However, investigations into the theoretical aspects of stepped wedge designs have, until recently, typically assumed equal numbers of subjects in each cluster and in each period. The information content of the cluster-period cells, clusters, and periods of stepped wedge designs has previously been investigated under the assumption of equal cluster-period sizes; that work showed that incomplete stepped wedge designs may be efficient alternatives to the full stepped wedge. How this changes when cluster-period sizes are not equal is unknown, and we investigate it here. Working within the linear mixed model framework, we show that the information contributed by design components (clusters, sequences, and periods) does depend on the sizes of each cluster-period. Using a particular trial that assessed the impact of an individual education intervention on log-length of stay in rehabilitation units, we demonstrate how strongly the efficiency of incomplete designs depends on which cells are excluded: smaller incomplete designs may be more powerful than alternative incomplete designs that include a greater total number of participants. This also serves to demonstrate how the pattern of information content can be used to inform a set of incomplete designs to be considered as alternatives to the complete stepped wedge design. Our theoretical results for the information content can be extended to a broad class of longitudinal (ie, multiple-period) cluster randomized trial designs.
Affiliation(s)
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Rhys Bowden
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
16
Zhang P, Shoben A, Jackson R, Fernandez S. Variance formulae for multiphase stepped wedge cluster randomized trial. Stat Med 2020; 39:4147-4168. [PMID: 32808315 DOI: 10.1002/sim.8716]
Abstract
In a multiphase stepped wedge cluster randomized trial (MSW-CRT), more than one intervention will be initiated on each sequence in a fixed order. Hence, with the MSW-CRT design, the effect of the first intervention can be evaluated when compared to control, as well as the added-on effects of the subsequent interventions. Studies that use MSW-CRT have been proposed, but properties of this design have not been explicitly studied. We derive closed-form variance formulae to test the interventions' effects, which can be readily used for sample size and power calculation. Additionally, we provide relationships between variances to test the interventions' effects and design parameters. Under special conditions, some important properties include: (i) the variances to test different interventions' effects (ie, the first intervention effect and the second intervention effect) may be the same; (ii) as the cluster-period mean autocorrelation increases, the variance to test an intervention effect may first increase and then decrease; (iii) as the number of periods between the initiations of two interventions (ie, lag) increases, the variance to test an intervention effect may remain unchanged. We illustrate the relationships between power and design parameters using the variance formulae. From a few illustrative examples, we observe that the statistical test that uses data only relevant to a specific intervention has inferior power (relative power loss <15%) compared to the test using all the study data. Also, power is reduced when both the total number of periods and lag are decreased simultaneously (relative power loss <20%).
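The closed-form variance formulae themselves are in the article; as a generic numerical stand-in (our own hypothetical sketch, not the authors' formulae), the variances for both intervention effects can be read off the GLS information matrix of a cluster-period-means model with two staggered treatment indicators:

```python
import numpy as np

def msw_variances(design1, design2, tau2, s2):
    """GLS variances for two staggered interventions (illustrative sketch).

    design1, design2: I x T 0/1 indicators for interventions 1 and 2
    tau2: cluster random-effect variance
    s2: variance of a cluster-period mean
    """
    I, T = design1.shape
    Vinv = np.linalg.inv(np.full((T, T), tau2) + s2 * np.eye(T))
    p = T + 2                    # intercept, T-1 period effects, 2 treatment effects
    XtVX = np.zeros((p, p))
    for i in range(I):
        Xi = np.zeros((T, p))
        Xi[:, 0] = 1.0                                 # intercept
        Xi[np.arange(1, T), np.arange(1, T)] = 1.0     # period dummies
        Xi[:, T] = design1[i]                          # first intervention
        Xi[:, T + 1] = design2[i]                      # second (added-on) intervention
        XtVX += Xi.T @ Vinv @ Xi
    cov = np.linalg.inv(XtVX)
    return cov[T, T], cov[T + 1, T + 1]
```

With, say, I = 4 clusters, T = 7 periods, and a lag of 2 periods between the two roll-outs, `design1[i, t] = 1` for `t > i` and `design2[i, t] = 1` for `t > i + 2`; the second coefficient then captures the added-on effect of the second intervention.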
Affiliation(s)
- Pengyue Zhang
- Department of Biomedical Informatics, Ohio State University, Columbus, Ohio, USA
- Abigail Shoben
- Division of Biostatistics, College of Public Health, Ohio State University, Columbus, Ohio, USA
- Rebecca Jackson
- Departments of Physical Medicine and Rehabilitation, Internal Medicine/Endocrinology, and Diabetes and Metabolism, Ohio State University, Columbus, Ohio, USA
- Soledad Fernandez
- Department of Biomedical Informatics, Ohio State University, Columbus, Ohio, USA
17
Hooper R, Kasza J, Forbes A. The hunt for efficient, incomplete designs for stepped wedge trials with continuous recruitment and continuous outcome measures. BMC Med Res Methodol 2020; 20:279. [PMID: 33203361 PMCID: PMC7672921 DOI: 10.1186/s12874-020-01155-z]
Abstract
BACKGROUND We consider the design of stepped wedge trials with continuous recruitment and continuous outcome measures. Suppose we recruit from a fixed number of clusters where eligible participants present continuously, and suppose we have fine control over when each cluster crosses to the intervention. Suppose also that we want to minimise the number of participants, leading us to consider "incomplete" designs (i.e. without full recruitment). How can we schedule recruitment and cross-over at different clusters to recruit efficiently while achieving good precision? METHODS The large number of possible designs can make exhaustive searches impractical. Instead we consider an algorithm using iterative improvements to hunt for an efficient design. At each iteration (starting from a complete design) a single participant - the one with the smallest impact on precision - is removed, and small changes preserving total sample size are made until no further improvement in precision can be found. RESULTS Striking patterns emerge. Solutions typically focus recruitment and cross-over on the leading diagonal of the cluster-by-time diagram, but in some scenarios clusters form distinct phases resembling before-and-after designs. CONCLUSIONS There is much to be learned about optimal design for incomplete stepped wedge trials. Algorithmic searches could offer a practical approach to trial design in complex settings generally.
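The authors' algorithm removes one participant at a time under continuous recruitment; as a coarser, hypothetical analogue (our own sketch, not the authors' algorithm), the snippet below greedily empties whole cluster-period cells of a Hussey-and-Hughes-style cluster-period-means model, always dropping the cell whose removal costs the least precision:

```python
import numpy as np

def gls_var(design, weights, tau2, sigma2_e):
    """GLS variance of the treatment effect; weights[i, t] = cell size
    (0 means the cell is not observed). Illustrative sketch only."""
    I, T = design.shape
    p = T + 1                       # intercept, T-1 period effects, treatment
    XtVX = np.zeros((p, p))
    for i in range(I):
        t_idx = np.where(weights[i] > 0)[0]
        if t_idx.size == 0:
            continue                # cluster contributes nothing
        Vc = (np.full((t_idx.size, t_idx.size), tau2)
              + np.diag(sigma2_e / weights[i, t_idx]))
        Xi = np.zeros((t_idx.size, p))
        Xi[:, 0] = 1.0
        for r, t in enumerate(t_idx):
            if t > 0:
                Xi[r, t] = 1.0      # period dummy
            Xi[r, T] = design[i, t] # treatment indicator
        XtVX += Xi.T @ np.linalg.inv(Vc) @ Xi
    return np.linalg.inv(XtVX)[T, T]

def greedy_thin(design, m, tau2, sigma2_e, n_remove):
    """Repeatedly drop the cluster-period cell whose removal increases
    the variance least: a coarse analogue of removing one participant
    at a time."""
    w = np.full(design.shape, float(m))
    best_v = None
    for _ in range(n_remove):
        best, best_v = None, np.inf
        for i in range(design.shape[0]):
            for t in range(design.shape[1]):
                if w[i, t] == 0:
                    continue
                w[i, t] = 0         # try removing this cell...
                try:
                    v = gls_var(design, w, tau2, sigma2_e)
                except np.linalg.LinAlgError:
                    v = np.inf      # removal breaks identifiability
                w[i, t] = m         # ...then put it back
                if v < best_v:
                    best_v, best = v, (i, t)
        w[best] = 0
    return w, best_v
```

Since removing data can only inflate the GLS variance, the returned variance quantifies the precision sacrificed by the incomplete design relative to the complete one.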
Affiliation(s)
- Richard Hooper
- Queen Mary University of London, London, UK
- Institute of Population Health Sciences, Yvonne Carter Building, 58 Turner Street, Whitechapel, London, E1 2AB, UK
18
Hemming K, Taljaard M, Weijer C, Forbes AB. Use of multiple period, cluster randomised, crossover trial designs for comparative effectiveness research. BMJ 2020; 371:m3800. [PMID: 33148538 DOI: 10.1136/bmj.m3800]
Affiliation(s)
- Karla Hemming
- Institute of Applied Health Research, University of Birmingham, Birmingham B15 2TT, UK
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Charles Weijer
- Departments of Medicine, Epidemiology, and Biostatistics, and Philosophy, Western University, London, ON, Canada
- Andrew B Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
19
Hastings SN, Stechuchak KM, Choate A, Mahanna EP, Van Houtven C, Allen KD, Wang V, Sperber N, Zullig L, Bosworth HB, Coffman CJ. Implementation of a stepped wedge cluster randomized trial to evaluate a hospital mobility program. Trials 2020; 21:863. [PMID: 33076997 PMCID: PMC7574435 DOI: 10.1186/s13063-020-04764-7]
Abstract
Background Stepped wedge cluster randomized trials (SW-CRT) are increasingly used to evaluate new clinical programs, yet there is limited guidance on practical aspects of applying this design. We report our early experiences conducting a SW-CRT to examine an inpatient mobility program (STRIDE) in the Veterans Health Administration (VHA). We provide recommendations for future research using this design to evaluate clinical programs. Methods Based on data from study records and reflections from the investigator team, we describe and assess the design and initial stages of a SW-CRT, from site recruitment to program launch in 8 VHA hospitals. Results Site recruitment consisted of thirty 1-h conference calls with representatives from 22 individual VAs who expressed interest in implementing STRIDE. Of these, 8 hospitals were enrolled and randomly assigned in two stratified blocks (4 hospitals per block) to a STRIDE launch date. Block 1 randomization occurred in July 2017 with first STRIDE launch in December 2017; block 2 randomization occurred in April 2018 with first STRIDE launch in January 2019. The primary study outcome of discharge destination will be assessed using routinely collected data in the electronic health record (EHR). Within randomized blocks, two hospitals per sequence launched STRIDE approximately every 3 months with primary outcome assessment paused during the 3-month time period of program launch. All sites received 6–8 implementation support calls, according to a pre-specified schedule, from the time of recruitment to program launch, and all 8 sites successfully launched within their assigned 3-month window. Seven of the eight sites initially started with a limited roll out (for example on one ward) or modified version of STRIDE (for example, using existing staff to conduct walks until new positions were filled). 
Conclusions Future studies should incorporate sufficient time for site recruitment and carefully consider the following to inform design of SW-CRTs to evaluate rollout of a new clinical program: (1) whether a blocked randomization fits study needs, (2) the amount of time and implementation support sites will need to start their programs, and (3) whether clinical programs are likely to include a “ramp-up” period. Successful execution of SW-CRT designs requires both adherence to rigorous design principles and careful consideration of the logistical requirements for timing of program rollout. Trial registration ClinicalTrials.gov NCT03300336. Prospectively registered on 3 October 2017.
Affiliation(s)
- Susan N Hastings
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Geriatrics Research, Education and Clinical Center, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine, Duke University School of Medicine, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Center for Aging, Duke University School of Medicine, Durham, NC, USA
- Karen M Stechuchak
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Ashley Choate
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Elizabeth P Mahanna
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Courtney Van Houtven
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Kelli D Allen
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine and Thurston Arthritis Research Center, University of North Carolina, Chapel Hill, NC, USA
- Virginia Wang
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine, Duke University School of Medicine, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Nina Sperber
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine, Duke University School of Medicine, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Leah Zullig
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine, Duke University School of Medicine, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Hayden B Bosworth
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Medicine, Duke University School of Medicine, Durham, NC, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Cynthia J Coffman
- Center of Innovation to Accelerate Discovery and Practice Transformation, Durham VA Health Care System, Durham, NC, USA
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
20
Adrion C, Weiss B, Paul N, Berger E, Busse R, Marschall U, Caumanns J, Rosseau S, Mansmann U, Spies C. Enhanced Recovery after Intensive Care (ERIC): study protocol for a German stepped wedge cluster randomised controlled trial to evaluate the effectiveness of a critical care telehealth program on process quality and functional outcomes. BMJ Open 2020; 10:e036096. [PMID: 32978185 PMCID: PMC7520839 DOI: 10.1136/bmjopen-2019-036096]
Abstract
INTRODUCTION Survival after critical illness has noticeably improved over the last decades due to advances in critical care medicine. In addition, an increasing number of elderly patients with chronic diseases are being treated in the intensive care unit (ICU). More than half of the survivors of critical illness suffer from medium-term or long-term cognitive, psychological and/or physical impairments after ICU discharge, which is recognised as post-intensive care syndrome (PICS). There are evidence-based and consensus-based quality indicators (QIs) in intensive care medicine which, if adhered to, have a positive influence on patients' long-term outcomes. METHODS AND ANALYSIS The protocol of a multicentre, pragmatic, stepped wedge cluster randomised controlled, quality improvement trial is presented. During 3 predefined steps, 12 academic hospitals in Berlin and Brandenburg, Germany, are randomised to make a one-way crossover from the control to the intervention condition. After a multifactorial training programme on QIs and clinical outcomes for site personnel, ICUs will receive an adapted, interprofessional protocol for a complex telehealth intervention comprising daily telemedical rounds in the ICU. The targeted sample size is 1431 patients. The primary objective of this trial is to evaluate the effectiveness of the intervention on adherence to eight QIs measured daily during the patient's ICU stay, compared with standard of care. Furthermore, the impact on long-term recovery, including PICS-related, patient-centred outcomes such as health-related quality of life, mental health, clinical assessments of cognition and physical function, all-cause mortality and cost-effectiveness 3 and 6 months after ICU discharge, will be evaluated. ETHICS AND DISSEMINATION This protocol was approved by the ethics committee of the Charité-Universitätsmedizin, Berlin, Germany (EA1/006/18).
The results will be published in a peer-reviewed scientific journal and presented at international conferences. Study findings will also be disseminated via the website (www.eric-projekt.net). TRIAL REGISTRATION NUMBER ClinicalTrials.gov Registry (NCT03671447).
Affiliation(s)
- Christine Adrion
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-University, Munich, Germany
- Bjoern Weiss
- Department of Anesthesiology and Operative Intensive Care Medicine, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Nicolas Paul
- Department of Anesthesiology and Operative Intensive Care Medicine, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Elke Berger
- Department of Health Care Management, Technical University of Berlin, Berlin, Germany
- Reinhard Busse
- Department of Health Care Management, Technical University of Berlin, Berlin, Germany
- Jörg Caumanns
- Fraunhofer Institute for Open Communication Systems, Berlin, Germany
- Simone Rosseau
- Weaning and Ventilation Centre, Ernst von Bergmann Klinikum, Bad Belzig, Germany
- Ulrich Mansmann
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-University, Munich, Germany
- Claudia Spies
- Department of Anesthesiology and Operative Intensive Care Medicine, Charité-Universitätsmedizin Berlin, Berlin, Germany
21
Hooper R, Eldridge SM. Cutting edge or blunt instrument: how to decide if a stepped wedge design is right for you. BMJ Qual Saf 2020; 30:245-250. [PMID: 32546592 PMCID: PMC7907557 DOI: 10.1136/bmjqs-2020-011620] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 05/26/2020] [Indexed: 12/20/2022]
Affiliation(s)
- Richard Hooper
- Institute of Population Health Sciences, Queen Mary University of London, London, UK
- Sandra M Eldridge
- Institute of Population Health Sciences, Queen Mary University of London, London, UK
22
Bobb JF, Qiu H, Matthews AG, McCormack J, Bradley KA. Addressing identification bias in the design and analysis of cluster-randomized pragmatic trials: a case study. Trials 2020; 21:289. [PMID: 32293514 PMCID: PMC7092580 DOI: 10.1186/s13063-020-4148-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 02/06/2020] [Indexed: 02/03/2023] Open
Abstract
Background Pragmatic trials provide the opportunity to study the effectiveness of health interventions to improve care in real-world settings. However, use of open-cohort designs with patients becoming eligible after randomization and reliance on electronic health records (EHRs) to identify participants may lead to a form of selection bias referred to as identification bias. This bias can occur when individuals identified as a result of the treatment group assignment are included in analyses. Methods To demonstrate the importance of identification bias and how it can be addressed, we consider a motivating case study, the PRimary care Opioid Use Disorders treatment (PROUD) Trial. PROUD is an ongoing pragmatic, cluster-randomized implementation trial in six health systems to evaluate a program for increasing medication treatment of opioid use disorders (OUDs). A main study objective is to evaluate whether the PROUD intervention decreases acute care utilization among patients with OUD (effectiveness aim). Identification bias is a particular concern, because OUD is underdiagnosed in the EHR at baseline, and because the intervention is expected to increase OUD diagnosis among current patients and attract new patients with OUD to the intervention site. We propose a framework for addressing this source of bias in the statistical design and analysis. Results The statistical design sought to balance the competing goals of fully capturing intervention effects and mitigating identification bias, while maximizing power. For the primary analysis of the effectiveness aim, identification bias was avoided by defining the study sample using pre-randomization data (pre-trial modeling demonstrated that the optimal approach was to use individuals with a prior OUD diagnosis). To expand generalizability of study findings, secondary analyses were planned that also included patients newly diagnosed post-randomization, with analytic methods to account for identification bias. 
Conclusion As more studies leverage existing data sources such as EHRs to make clinical trials more affordable and generalizable, and adopt novel open-cohort study designs, identification bias is likely to become increasingly common. This case study highlights how the bias can be addressed in the statistical study design and analysis. Trial registration ClinicalTrials.gov, NCT03407638. Registered on 23 January 2018.
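The primary-analysis strategy described in this abstract — defining the study sample only from pre-randomization data so that treatment-dependent identification cannot select patients into the analysis — can be sketched in a few lines. This is an illustrative toy, not the PROUD trial's actual code; the record fields and cutoff date are hypothetical.

```python
from datetime import date

# Hypothetical randomization date and patient records with OUD diagnosis dates.
RANDOMIZATION_DATE = date(2018, 1, 23)

patients = [
    {"id": 1, "oud_dx_date": date(2017, 6, 1)},   # diagnosed pre-randomization
    {"id": 2, "oud_dx_date": date(2018, 5, 10)},  # diagnosed post-randomization
    {"id": 3, "oud_dx_date": None},               # never diagnosed
]

def primary_cohort(records, cutoff):
    """Patients whose OUD diagnosis predates randomization.
    Membership cannot depend on treatment assignment, avoiding
    identification bias."""
    return [p["id"] for p in records
            if p["oud_dx_date"] is not None and p["oud_dx_date"] < cutoff]

def secondary_cohort(records):
    """All diagnosed patients, including those identified
    post-randomization; analyses of this cohort need methods that
    account for identification bias."""
    return [p["id"] for p in records if p["oud_dx_date"] is not None]

print(primary_cohort(patients, RANDOMIZATION_DATE))  # [1]
print(secondary_cohort(patients))                    # [1, 2]
```

The trade-off the abstract describes is visible here: the primary cohort is smaller (less generalizable, but unbiased), while the secondary cohort captures intervention-driven diagnoses at the cost of requiring bias-adjusting analytic methods.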
Affiliation(s)
- Jennifer F Bobb
- Biostatistics Unit, Kaiser Permanente Washington Health Research Institute, 1730 Minor Ave, Seattle, WA, 98101, USA; Department of Biostatistics, University of Washington, 1705 NE Pacific St, Seattle, WA, 98195, USA
- Hongxiang Qiu
- Department of Biostatistics, University of Washington, 1705 NE Pacific St, Seattle, WA, 98195, USA
- Katharine A Bradley
- Biostatistics Unit, Kaiser Permanente Washington Health Research Institute, 1730 Minor Ave, Seattle, WA, 98101, USA; Department of Health Services, University of Washington, 1959 NE Pacific St, Seattle, WA, 98195, USA; Department of Medicine, University of Washington, 1959 NE Pacific St, Seattle, WA, 98195, USA
23
Kasza J, Hooper R, Copas A, Forbes AB. Sample size and power calculations for open cohort longitudinal cluster randomized trials. Stat Med 2020; 39:1871-1883. [PMID: 32133688 PMCID: PMC7217159 DOI: 10.1002/sim.8519] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 01/15/2020] [Accepted: 02/17/2020] [Indexed: 01/24/2023]
Abstract
When calculating sample size or power for stepped wedge or other types of longitudinal cluster randomized trials, it is critical that the planned sampling structure be accurately specified. One common assumption is that participants will provide measurements in each trial period, that is, a closed cohort; another is that each participant provides only one measurement during the course of the trial. However, some studies have an "open cohort" sampling structure, where participants may provide measurements in variable numbers of periods. To date, sample size calculations for longitudinal cluster randomized trials have not accommodated open cohorts. Feldman and McKinlay (1994) provided some guidance, stating that the participant-level autocorrelation could be varied to account for the degree of overlap in different periods of the study, but did not indicate precisely how to do so. We present sample size and power formulas that allow for open cohorts and discuss the impact of the degree of "openness" on sample size and power. We consider designs where the number of participants in each cluster is maintained throughout the trial, but individual participants may provide differing numbers of measurements. Our results unify the closed cohort and repeated cross-sectional results of Hooper et al (2016), and indicate precisely how the participant autocorrelation of Feldman and McKinlay should be varied to account for an open cohort sampling structure. We discuss different types of open cohort sampling schemes and how the open cohort sampling structure affects power in the presence of decaying within-cluster correlations and autoregressive participant-level errors.
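For orientation, the closed-population, cross-sectional special case that this work generalizes can be computed directly from the classic Hussey and Hughes (2007) variance formula. The sketch below is that baseline calculation, not the open-cohort formulas of the paper itself; it assumes a complete stepped wedge design with equal numbers of clusters crossing over at each step.

```python
import math
from statistics import NormalDist

def hussey_hughes_power(clusters, periods, n_per_period, effect,
                        sigma2_e, tau2, alpha=0.05):
    """Power for a cross-sectional stepped wedge design using the
    Hussey & Hughes (2007) variance of the treatment effect estimator.
    sigma2_e: individual-level residual variance; tau2: cluster random
    effect variance."""
    steps = periods - 1
    assert clusters % steps == 0, "assumes a complete, balanced design"
    per_step = clusters // steps
    # X[i][t] = 1 if cluster i is on intervention in period t
    # (period 0 is an all-control baseline; per_step clusters cross over per step)
    X = [[1 if t > i // per_step else 0 for t in range(periods)]
         for i in range(clusters)]
    U = sum(sum(row) for row in X)                      # total treated cluster-periods
    W = sum(sum(X[i][t] for i in range(clusters)) ** 2  # sum of squared column sums
            for t in range(periods))
    V = sum(sum(row) ** 2 for row in X)                 # sum of squared row sums
    sigma2 = sigma2_e / n_per_period                    # variance of a cluster-period mean
    var = (clusters * sigma2 * (sigma2 + periods * tau2)) / (
        (clusters * U - W) * sigma2
        + (U ** 2 + clusters * periods * U - periods * W - clusters * V) * tau2)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(effect) / math.sqrt(var) - z)

# Example: 12 clusters, 4 periods (3 steps), 25 observations per
# cluster-period, standardized effect 0.3, ICC of 0.02 / 1.02.
power = hussey_hughes_power(12, 4, 25, 0.3, sigma2_e=1.0, tau2=0.02)
```

The open-cohort results of this paper generalize exactly this kind of calculation by letting the participant-level autocorrelation reflect partial overlap of participants between periods.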
Affiliation(s)
- Jessica Kasza
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Richard Hooper
- Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Andrew Copas
- MRC Clinical Trials Unit, University College London, London, UK
- Andrew B. Forbes
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia