1. Manolov R, Tanious R. Type I error rates and power of two randomization test procedures for the changing criterion design. Behav Res Methods 2024;56:3637-3657. PMID: 38082114; PMCID: PMC11133045; DOI: 10.3758/s13428-023-02303-1
Abstract
Single-case experimental design (SCED) data can be analyzed following different approaches. One of the earliest proposed options is randomization tests, which benefit from the inclusion of randomization in the design: a desirable methodological feature. Randomization tests have become more feasible with the availability of computational resources, and such tests have been proposed for all major types of SCEDs: multiple-baseline, reversal/withdrawal, alternating treatments, and changing criterion designs. The focus of the current text is on the last of these, given that it has not been the subject of any previous simulation study. Specifically, we estimate type I error rates and statistical power for two different randomization procedures applicable to changing criterion designs: the phase change moment randomization and the blocked alternating criterion randomization. We include different series lengths, numbers of phases, levels of autocorrelation, and amounts of random variability. The results suggest that type I error rates are generally controlled and that sufficient power can be achieved with as few as 28-30 measurements for independent data, although more measurements are needed in the case of positive autocorrelation. The presence of a reversal to a previous criterion level is beneficial. R code is provided for carrying out randomization tests following the two randomization procedures.
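The phase-change-moment logic can be sketched in a few lines: the intervention start is assumed to have been randomly selected among all admissible moments, and the p-value is the proportion of admissible moments whose test statistic is at least as extreme as the observed one. The sketch below is illustrative Python, not the authors' R code; the statistic (absolute difference in phase means) and the minimum phase length are simplifying assumptions.

```python
import statistics

def phase_change_randomization_test(data, actual_cp, min_phase_len=3):
    """Randomization test for an AB series whose phase-change moment
    was randomly selected among all admissible points.

    data      : list of measurements
    actual_cp : index at which the intervention phase actually started
    Returns the one-sided randomization p-value.
    """
    def stat(cp):
        # absolute mean difference between the two phases split at cp
        return abs(statistics.mean(data[cp:]) - statistics.mean(data[:cp]))

    # every admissible change point, given the minimum phase length
    candidates = range(min_phase_len, len(data) - min_phase_len + 1)
    observed = stat(actual_cp)
    as_extreme = sum(1 for cp in candidates if stat(cp) >= observed)
    return as_extreme / len(candidates)
```

With only five admissible change points, the smallest attainable p-value is .20; this is one way to see why series of roughly 28-30 measurements (yielding many more admissible points) are needed for adequate power.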
Affiliation(s)
- Rumen Manolov, Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain
- René Tanious, Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
2. Moore TR, Lee S, Freeman R, Mahmoundi M, Dimian A, Riegelman A, Simacek JJ. A Meta-Analysis of Treatment for Self-Injurious Behavior in Children and Adolescents With Intellectual and Developmental Disabilities. Behav Modif 2024;48:216-256. PMID: 38197303; DOI: 10.1177/01454455231218742
Abstract
Self-injurious behavior (SIB) among children and youth with developmental disabilities has not diminished in prevalence despite the availability of effective interventions, and the impact on quality of life for people and their families is devastating. The current meta-analysis reviews SIB intervention research between 2011 and 2021 using single-case experimental designs with children and youth up to 21 years old and provides a quantitative synthesis of data from high-quality studies including moderator analyses to determine effects of participant and study characteristics on intervention outcomes. Encouraging findings include a high level of effectiveness across studies in the decrease of SIB (Tau-U = -0.90) and increase of positive behavior (Tau-U = 0.73), as well as an increase in studies (relative to prior reviews) reporting intervention fidelity, generalization, maintenance, and social validity. However, our findings shed limited light on potential moderating variables in the development of interventions for children and youth who exhibit SIB. Of the potential moderators of intervention effects, only implementer (researcher/therapist vs. parent/caregiver) and setting (clinic vs. home) were significantly associated with improved outcomes. We discuss the need for more robust involvement of natural communities of implementers in SIB intervention research to better equip them to effectively and sustainably meet the needs of people they care for. We also discuss the importance of creating systems enabling broad access for children with SIB to effective interventions in service of reducing burden for people, families, and society over time.
Affiliation(s)
- Timothy R Moore, Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, USA
- Seunghee Lee, Institute on Community Integration, University of Minnesota, Minneapolis, USA
- Rachel Freeman, Institute on Community Integration, University of Minnesota, Minneapolis, USA
- Maryam Mahmoundi, Institute on Community Integration, University of Minnesota, Minneapolis, USA
- Adele Dimian, Institute on Community Integration, University of Minnesota, Minneapolis, USA
- Amy Riegelman, Social Sciences Libraries, University of Minnesota, Minneapolis, USA
- Jessica J Simacek, Institute on Community Integration, University of Minnesota, Minneapolis, USA
3. Manolov R, Vannest KJ. A Visual Aid and Objective Rule Encompassing the Data Features of Visual Analysis. Behav Modif 2023;47:1345-1376. PMID: 31165621; DOI: 10.1177/0145445519854323
Abstract
Visual analysis of single-case research is commonly described as a gold standard, but it is often unreliable. Thus, an objective tool for applying visual analysis is necessary, as an alternative to the Conservative Dual Criterion, which presents some drawbacks. The proposed free web-based tool enables assessing change in trend and level between two adjacent phases, while taking data variability into account. The application of the tool results in (a) a dichotomous decision regarding the presence or absence of an immediate effect, a progressive or delayed effect, or an overall effect and (b) a quantification of overlap. The proposal is evaluated by applying it to both real and simulated data, obtaining favorable results. The visual aid and the objective rules are expected to make visual analysis more consistent, but they are not intended as a substitute for the analysts' judgment, as a formal test of statistical significance, or as a tool for assessing social validity.
4. Baek E, Luo W, Lam KH. Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling. Behav Modif 2023;47:1546-1573. PMID: 36647266; DOI: 10.1177/01454455221144034
Abstract
Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental designs (SCED). In this paper, we provide a step-by-step guideline for using the MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model, then gradually extended to represent more realistic situations of SCED data, such as modeling a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The presented approach is then illustrated using real SCED data. Practical recommendations using the MLM approach are also provided for applied researchers based on the current methodological literature. Available free and commercial software programs to meta-analyze SCED data are also introduced, along with several hands-on software codes for applied researchers to implement their own studies. Potential advantages and limitations of using the MLM approach to meta-analyzing SCED are discussed.
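The basic three-level structure referenced above can be written out explicitly. Here $y_{ijk}$ is measurement $i$ for case $j$ in study $k$, and $D_{ijk}$ is a phase dummy (0 in baseline, 1 in treatment); this is a generic textbook parameterization, not necessarily the exact one used in the paper:

```latex
% Level 1: measurements within a case
y_{ijk} = \beta_{0jk} + \beta_{1jk} D_{ijk} + e_{ijk}, \qquad e_{ijk} \sim N(0, \sigma^2_e)
% Level 2: cases within a study
\beta_{0jk} = \theta_{00k} + u_{0jk}, \qquad \beta_{1jk} = \theta_{10k} + u_{1jk}
% Level 3: studies
\theta_{00k} = \gamma_{000} + v_{00k}, \qquad \theta_{10k} = \gamma_{100} + v_{10k}
```

In this notation $\gamma_{100}$ is the average treatment effect across studies, the $u_{1jk}$ capture between-case heterogeneity of that effect, and the $v_{10k}$ capture between-study heterogeneity; the extensions described in the abstract (time variables, moderators, multiple outcomes, heterogeneous within-case variance) elaborate on this skeleton.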
Affiliation(s)
- Wen Luo, Texas A&M University, College Station, TX, USA
5. Manolov R, Onghena P. Defining and assessing immediacy in single-case experimental designs. J Exp Anal Behav 2022;118:462-492. PMID: 36106573; PMCID: PMC9825864; DOI: 10.1002/jeab.799
Abstract
Immediacy is one of six data aspects (alongside level, trend, variability, overlap, and consistency) that have to be accounted for when visually analyzing single-case data. Given that it has received considerably less attention than the other data aspects, the current text offers a review of the proposed conceptual definitions of immediacy (i.e., what it refers to) and of the suggested operational definitions (i.e., how exactly it is assessed and/or quantified). Because a variety of conceptual and operational definitions is identified, we propose a sensitivity analysis using a randomization test for assessing immediate effects in single-case experimental designs, identifying when changes were most clear. In such a sensitivity analysis, the immediate effects are tested for multiple possible intervention points and for different possible operational definitions. Robust immediate effects can be detected if the results for the different operational definitions converge.
Affiliation(s)
- Rumen Manolov, Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona
- Patrick Onghena, Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven – University of Leuven, Leuven, Belgium
6. Rast P, Martin SR, Liu S, Williams DR. A new frontier for studying within-person variability: Bayesian multivariate generalized autoregressive conditional heteroskedasticity models. Psychol Methods 2022;27:856-873. PMID: 33001672; PMCID: PMC8483560; DOI: 10.1037/met0000357
Abstract
Research on individual variation has received increased attention. The bulk of the models discussed in psychological research so far focus mainly on the temporal development of the mean structure. We expand the view to within-person residual variability and present a new model parameterization derived from classic multivariate GARCH (MGARCH) models used to predict and forecast volatility in financial time series. We propose a new pdBEKK and a modified dynamic conditional correlation (DCC) model that accommodate external time-varying predictors for the within-person variance. The main goal of this work is to evaluate the potential usefulness of MGARCH models for research on within-person variability. MGARCH models partition the within-person variance into at least three components: an overall constant and unconditional baseline variance, a process that introduces variance conditional on previous innovations (random shocks), and a process that governs the carry-over effects of previous conditional variance, similar to an AR model. These models allow for variance spillover effects from one time series to another. We illustrate the pdBEKK- and the DCC-MGARCH on two individuals who rated their daily positive and negative affect over 100 consecutive days. The full models comprised a multivariate ARMA(1,1) model for the means and included physical activity as a moderator of the overall baseline variance. Overall, the pdBEKK seems to yield a more straightforward psychological interpretation, but the DCC is generally easier to estimate and can accommodate more simultaneous time series. Both models require rather large numbers of data points to detect nonzero parameters. We provide an R package, bmgarch, that facilitates the estimation of these types of models.
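The conditional-variance recursion underlying these models is easiest to see in the univariate GARCH(1,1) case, where the three components named above appear as omega (constant baseline), alpha * e²(t-1) (reaction to the previous innovation), and beta * h(t-1) (carry-over of previous conditional variance). Below is a minimal Python sketch of that recursion; the multivariate pdBEKK/DCC forms add cross-series spillover terms, for which the authors' bmgarch R package is the reference implementation.

```python
def garch11_variances(innovations, omega, alpha, beta):
    """Conditional variances h_t = omega + alpha * e_{t-1}**2 + beta * h_{t-1}.

    Starts from the unconditional variance omega / (1 - alpha - beta),
    which requires alpha + beta < 1 (a stationarity condition).
    """
    if alpha + beta >= 1:
        raise ValueError("alpha + beta must be < 1 for stationarity")
    h = [omega / (1 - alpha - beta)]   # unconditional baseline variance
    for e in innovations[:-1]:         # h_t depends on the previous shock
        h.append(omega + alpha * e ** 2 + beta * h[-1])
    return h
```

For instance, with omega = 0.2, alpha = 0.1, beta = 0.8, the baseline variance is 2.0; a large shock at t-1 pushes the next conditional variance away from that baseline, and the beta term makes the elevation decay gradually.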
7. Estimation and statistical inferences of variance components in the analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2021;54:1559-1579. PMID: 34508288; DOI: 10.3758/s13428-021-01691-6
Abstract
Multilevel models (MLMs) can be used to examine treatment heterogeneity in single-case experimental designs (SCEDs). With small sample sizes, common issues for estimating between-case variance components in MLMs include nonpositive definite matrices, biased estimates, misspecification of covariance structures, and invalid Wald tests for variance components with bounded distributions. To address these issues, unconstrained optimization, a model selection procedure based on the parametric bootstrap, and a restricted likelihood ratio test (RLRT)-based procedure are introduced. Using simulation studies, we compared the performance of two types of optimization methods (constrained vs. unconstrained) when the covariance structures are correctly specified or misspecified. We also examined the performance of a model selection procedure for obtaining the optimal covariance structure. The results showed that unconstrained optimization can avoid nonpositive definite issues to a great extent without compromising model convergence. Misspecification of covariance structures causes biased estimates, especially with small between-case variance components; however, the model selection procedure was found to attenuate the magnitude of bias. A practical guideline was generated for empirical researchers in SCEDs, stating the conditions under which trustworthy point and interval estimates can be obtained for between-case variance components in MLMs, as well as the conditions under which the RLRT-based procedure can produce acceptable empirical type I error rates and power.
8. Barnard-Brak L, Watkins L, Richman DM. Autocorrelation and estimates of treatment effect size for single-case experimental design data. Behavioral Interventions 2021. DOI: 10.1002/bin.1783
Affiliation(s)
- Lucy Barnard-Brak, Department of Special Education and Multiple Abilities, The University of Alabama, Tuscaloosa, Alabama, USA
- Laci Watkins, Department of Special Education and Multiple Abilities, The University of Alabama, Tuscaloosa, Alabama, USA
9. Accurate models vs. accurate estimates: A simulation study of Bayesian single-case experimental designs. Behav Res Methods 2021;53:1782-1798. PMID: 33575987; PMCID: PMC8367899; DOI: 10.3758/s13428-020-01522-0
Abstract
Although statistical practices to evaluate intervention effects in single-case experimental designs (SCEDs) have gained prominence in recent times, models are yet to incorporate and investigate all their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is whether, in SCED data that show trend, there is indeterminacy between estimating slope and autocorrelation, because both contribute to trend and the data have a limited number of observations. Using Monte Carlo simulation, we compared the performance of four Bayesian change-point models: (a) intercepts only (IO), (b) slopes but no autocorrelations (SI), (c) autocorrelations but no slopes (NS), and (d) both autocorrelations and slopes (SA). Weakly informative priors were used to remain agnostic about the parameters. Coverage rates showed that for the SA model, either the slope effect size or the autocorrelation credible interval almost always erroneously contained 0, and the type II errors were prohibitively large. Considering the 0-coverage and coverage rates of the slope effect size, intercept effect size, mean relative bias, and second-phase intercept relative bias, the SI model outperformed all other models. Therefore, it is recommended that researchers favor the SI model over the other three models. Research studies that develop slope effect sizes for SCEDs should consider the performance of the statistic by taking into account coverage and 0-coverage rates; these helped uncover patterns that were not realized in other simulation studies. We underline the need for investigating the use of informative priors in SCEDs.
10. Baek E, Ferron JM. Bayesian Analysis for Multiple-baseline Studies Where the Variance Differs across Cases in OpenBUGS. Dev Neurorehabil 2021;24:130-143. PMID: 33393404; DOI: 10.1080/17518423.2020.1858455
Abstract
Objective: There is growing interest in the potential benefits of applying Bayesian estimation to multilevel models of SCED data. Methodological studies have shown that Bayesian estimation resolves convergence issues, can be adequate for small samples, and can improve the accuracy of variance components. Despite these potential benefits, the lack of accessible software code makes it difficult for applied researchers to implement Bayesian estimation in their studies. The purpose of this article is to illustrate a feasible way to implement Bayesian estimation using OpenBUGS software to analyze a complex SCED model in which within-participant variability and autocorrelation may differ across cases. Method: Using data extracted from a published study, step-by-step guidance in analyzing the data with OpenBUGS is provided, including (1) model specification, (2) prior distributions, (3) data entry, (4) model estimation, (5) convergence criteria, and (6) posterior inferences and interpretations. Result: Full code for the analysis is provided.
11. Investigating immediacy in multiple-phase-change single-case experimental designs using a Bayesian unknown change-points model. Behav Res Methods 2021;52:1714-1728. PMID: 32103466; DOI: 10.3758/s13428-020-01345-z
Abstract
Immediacy is one of the necessary criteria to show strong evidence of treatment effect in single-case experimental designs (SCEDs). However, with the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change points between the baseline and treatment phases as unknown. We extend Natesan and Hedges' work to multiple-phase-change (e.g. ABAB) designs using a variational Bayesian (VB) unknown change-point model. VB was used instead of Markov chain Monte Carlo methods (MCMC), because MCMC cannot be used effectively to determine multiple change points. Combined and individual probabilities of correctly estimating the change points were used as indicators of the algorithm's accuracy. Unlike MCMC in the Natesan and Hedges (2017) study, the VB method was able to recover the change points with high accuracy even for short time series and in only a fraction of the time for all time-series lengths. We illustrate the algorithm with 13 real data sets. Additionally, we discuss the advantages of the unknown change-point approach, and the Bayesian and variational Bayesian estimation for SCEDs.
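The core idea here, treating the change point as an unknown parameter with a posterior distribution, can be illustrated with a toy discrete model: a single mean shift with known residual SD and a uniform prior over admissible change points. This is a deliberately simplified Python sketch of the principle, not the paper's variational Bayesian model (which handles multiple change points and short autocorrelated series).

```python
import math
import statistics

def change_point_posterior(data, sigma=1.0, min_len=2):
    """Posterior over a single change point under a normal mean-shift
    model with known sigma and a uniform prior on admissible points."""
    n = len(data)
    log_lik = {}
    for cp in range(min_len, n - min_len + 1):
        first, second = data[:cp], data[cp:]
        m1, m2 = statistics.mean(first), statistics.mean(second)
        # log-likelihood of the split, up to an additive constant
        ll = -sum((x - m1) ** 2 for x in first) / (2 * sigma ** 2)
        ll -= sum((x - m2) ** 2 for x in second) / (2 * sigma ** 2)
        log_lik[cp] = ll
    top = max(log_lik.values())                      # numerical stability
    w = {cp: math.exp(ll - top) for cp, ll in log_lik.items()}
    z = sum(w.values())
    return {cp: wi / z for cp, wi in w.items()}
```

A posterior sharply concentrated at the design's intended change point is evidence of an immediate effect; a diffuse or shifted posterior suggests a delayed or gradual one.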
12. Natesan Batley P, Contractor AA, Caldas SV. Bayesian Time-Series Models in Single Case Experimental Designs: A Tutorial for Trauma Researchers. J Trauma Stress 2020;33:1144-1153. PMID: 33205545; PMCID: PMC8246830; DOI: 10.1002/jts.22614
Abstract
Single-case experimental designs (SCEDs) involve obtaining repeated measures from one or a few participants before, during, and, sometimes, after treatment implementation. Because they are cost-, time-, and resource-efficient and can provide robust causal evidence for more large-scale research, SCEDs are gaining popularity in trauma treatment research. However, sophisticated techniques to analyze SCED data remain underutilized. Herein, we discuss the utility of SCED data for trauma research, provide recommendations for addressing challenges specific to SCED approaches, and introduce a tutorial for two Bayesian models-the Bayesian interrupted time-series (BITS) model and the Bayesian unknown change-point (BUCP) model-that can be used to analyze the typically small sample, autocorrelated, SCED data. Software codes are provided for the ease of guiding readers in estimating these models. Analyses of a dataset from a published article as well as a trauma-specific simulated dataset are used to illustrate the models and demonstrate the interpretation of the results. We further discuss the implications of using such small-sample data-analytic techniques for SCEDs specific to trauma research.
13. The role of simulation in imagery rescripting for post-traumatic stress disorder: a single case series. Behav Cogn Psychother 2020;49:257-271. PMID: 33153512; DOI: 10.1017/s1352465820000806
Abstract
Background: Imagery rescripting (ImRs) is an experiential therapy technique used to change the content and meaning of intrusive imagery in post-traumatic stress disorder (PTSD) by imagining alternative endings to traumatic events. There is growing evidence that ImRs is an effective treatment for PTSD; however, little is known about how it brings about change.
Aims: This study aimed to explore the role of mental simulation as a candidate mechanism of action in ImRs and, specifically, whether well-simulated imagery rescripts are associated with greater change in symptom severity during ImRs.
Method: Using a single-case experimental design, seven participants receiving cognitive therapy for PTSD were assessed before, during and after sessions of imagery rescripting for one intrusive image. Participants completed continuous symptom severity measures. Sessions were recorded, then coded for goodness of simulation (GOS) as well as additional factors (e.g. rescript believability, vividness).
Results: Participants were divided into high- and low-responders and coding was compared across groups. Correlational analyses were supported by descriptive analysis of individual sessions. High-responders' rescripts tended to be rated as well-simulated compared with those of low-responders. Specific factors (e.g. intensity of thoughts/emotions related to original and new imagery elements, level of cognitive and emotional shift, and belief in the resultant rescript) were also associated with reductions in symptom severity.
Conclusions: There was tentative evidence that well-simulated rescripted images tended to be associated with greater reductions in symptom severity of the target image. Clinical implications and avenues for further research are discussed.
14.
Abstract
Single-case experimental design (SCED) research plays an important role in establishing and confirming evidence-based practices. Due to multiple measures of a target behavior in such studies, missing information is common in their data. The expectation-maximization (EM) algorithm has been successfully applied to deal with missing data in between-subjects designs, but only in a handful of SCED studies. The present study extends the findings from Smith, Borckardt, and Nash (2012) and Velicer and Colby (2005b, Study 2) by systematically examining the performance of EM in a baseline-intervention (or AB) design under various missing rates, autocorrelations, intervention phase lengths, and magnitudes of effects, as well as two fitted models. Three indicators of an intervention effect (baseline slope, level shift, and slope change) were estimated. The estimates' relative bias, root-mean squared error, and relative bias of the estimated standard error were used to assess EM's performance. The findings revealed that autocorrelation impacted the estimates' qualities most profoundly. Autocorrelation interacted with missing rate in impacting the relative bias of the estimates, impacted the root-mean squared error nonlinearly, and interacted with the fitted model in impacting the relative bias of the estimated standard errors. A simpler model without autocorrelation can be used to estimate baseline slope and slope change in time-series data. EM is recommended as a principled method to handle missing data in SCED studies. Two decision trees are presented to assist researchers and practitioners in applying EM. Emerging research directions are identified for treating missing data in SCED studies.
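The E-M alternation for this setting can be sketched with a toy two-phase mean model: the E-step fills each missing observation with its current phase mean, and the M-step re-estimates the phase means from the completed series, iterating to convergence. This is a simplified Python illustration of the principle only; the study itself estimates baseline slope, level shift, and slope change, and considers autocorrelation.

```python
def em_phase_means(data, cp, tol=1e-9, max_iter=200):
    """EM-style imputation for an AB series with missing values (None).

    data : list of floats, with None marking missing observations
    cp   : index where the intervention phase starts
    Returns (completed_series, baseline_mean, intervention_mean).
    """
    filled = [0.0 if x is None else x for x in data]   # crude initialization
    for _ in range(max_iter):
        m_a = sum(filled[:cp]) / cp                    # M-step: phase means
        m_b = sum(filled[cp:]) / (len(data) - cp)
        new = [(m_a if i < cp else m_b) if x is None else x
               for i, x in enumerate(data)]            # E-step: impute
        if max(abs(a - b) for a, b in zip(new, filled)) < tol:
            return new, m_a, m_b
        filled = new
    return filled, m_a, m_b
```

In this toy model the imputed values simply converge to their phase means; the attraction of EM in the SCED context is that the same alternation applies under richer models (slopes, autocorrelation) where the E-step is no longer a plain mean.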
15. Baek E, Ferron JM. Modeling heterogeneity of the level-1 error covariance matrix in multilevel models for single-case data. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 2020. DOI: 10.5964/meth.2817
Abstract
Previous research applying multilevel models to single-case data has made the critical assumption that the level-1 error covariance matrix is constant across all participants. However, this matrix may differ across participants, and ignoring those differences can affect estimation and inference. Despite the importance of this issue, the effects of modeling between-case variation in the level-1 error structure had not yet been systematically studied. The purpose of this simulation study was to identify the consequences of modeling and not modeling between-case variation in the level-1 error covariance matrices in single-case studies, using Bayesian estimation. The results showed that variance estimation was more sensitive than fixed-effect estimation to the method used to model the level-1 error structure, with fixed effects being impacted only in the most extreme heterogeneity conditions. Implications for applied single-case researchers and methodologists are discussed.
16. Krone T, Boessen R, Bijlsma S, van Stokkum R, Clabbers NDS, Pasman WJ. The possibilities of the use of N-of-1 and do-it-yourself trials in nutritional research. PLoS One 2020;15:e0232680. PMID: 32374745; PMCID: PMC7202616; DOI: 10.1371/journal.pone.0232680
Abstract
Background: N-of-1 designs are gaining popularity in nutritional research because of improving technological possibilities, practical applicability, and the promise of increased accuracy and sensitivity, especially in the field of personalized nutrition. This shift calls for a search for applicable statistical methods.
Objective: To demonstrate the differences among three popular statistical methods in analyzing treatment effects on data obtained in N-of-1 designs.
Method: We compare individual-participant-data meta-analysis and frequentist and Bayesian linear mixed-effect models using a simulation experiment. Furthermore, we demonstrate the merits of the Bayesian model with an informative prior by analyzing data from an empirical study on weight loss.
Results: The linear mixed-effect models are to be preferred over the meta-analysis method, since the individual effects are estimated more accurately, as evidenced by the lower errors, especially at lower sample sizes. Differences between the Bayesian and frequentist mixed models were found to be small, indicating that they lead to the same results when no informative prior is included.
Conclusion: For empirical data, the Bayesian mixed model allows the inclusion of prior knowledge and offers potential for population-based and personalized inference.
17. Natesan P. Fitting Bayesian Models for Single-Case Experimental Designs. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 2019. DOI: 10.1027/1614-2241/a000180
Abstract
Single-case experimental designs (SCEDs) are interrupted time-series designs that have recently gained recognition as being able to provide a strong basis for establishing intervention effects. Typically, SCED data are short time series and autocorrelated, which renders maximum likelihood estimation and parametric analyses, respectively, inadequate. Although Bayesian methods overcome these challenges, most practitioners do not use Bayesian estimation because of (a) its steep learning curve, (b) lack of Bayesian training, and (c) lack of knowledge of Bayesian software solutions. This study demonstrates two Bayesian interrupted time-series models using the freeware programs R and JAGS. Practitioners can modify this code and run it on their own data by changing the values where indicated. Providing practitioners with such tools to facilitate their analyses is one way to improve methodological rigor in applied research.
Affiliation(s)
- Prathiba Natesan, Department of Educational Psychology, University of North Texas, Denton, TX, USA
18. Bolton DAE, Buick AR, Carroll TJ, Carson RG. Interlimb transfer and generalisation of learning in the context of persistent failure to accomplish a visuomotor task. Exp Brain Res 2019;237:1077-1092. PMID: 30758515; PMCID: PMC6430762; DOI: 10.1007/s00221-019-05484-4
Abstract
Transfer, in which capability acquired in one situation influences performance in another, is considered, along with retention, as demonstrative of effectual learning. In this regard, interlimb transfer of functional capacity has commanded particular attention as a means of gauging the generalisation of acquired capability. Both theoretical treatments and prior empirical studies suggest that successful accomplishment of a physical training regime is required to bring about generalised changes that extend to the untrained limb. In the present study, we pose the following question: does interlimb transfer occur if and only if the training movements are executed? We report findings from JG, an individual recruited to a larger-scale trial who presented with (unilateral) deficits of motor control. We examined whether changes in the performance of the untrained right limb arose following practice undertaken by the impaired left limb, wherein the majority of JG's attempts to execute the training task were unsuccessful. Comparison was made with a group of "control" participants drawn from the main trial who did not practice the task. For JG, substantial gains in the performance of the untrained limb (registered 3 days, 10 days, and 1 year following training) indicated that effective learning had occurred. Learning was, however, expressed principally when the unimpaired (i.e., untrained) limb was used to perform the task. When the impaired limb was used, marked deficiencies in movement execution remained prominent throughout.
Collapse
Affiliation(s)
- D A E Bolton
- Department of Kinesiology and Health Sciences, Utah State University, Logan, USA
- School of Psychology, Queen's University Belfast, Belfast, UK
| | - A R Buick
- School of Psychology, Queen's University Belfast, Belfast, UK
| | - T J Carroll
- School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Australia
| | - R G Carson
- School of Psychology, Queen's University Belfast, Belfast, UK.
- School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Australia.
- Trinity College Institute of Neuroscience, School of Psychology, Trinity College Dublin, Dublin 2, Ireland.
| |
Collapse
|
19
|
Extrapolating baseline trend in single-case data: Problems and tentative solutions. Behav Res Methods 2018; 51:2847-2869. [DOI: 10.3758/s13428-018-1165-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
20
|
Moeyaert M, Manolov R, Rodabaugh E. Meta-Analysis of Single-Case Research via Multilevel Models: Fundamental Concepts and Methodological Considerations. Behav Modif 2018; 44:265-295. [PMID: 30360633 DOI: 10.1177/0145445518806867] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Multilevel modeling is an approach that can be used to summarize single-case experimental design (SCED) data. Multilevel models were developed to analyze hierarchically structured data, with units at a lower level nested within higher-level units. SCEDs use time series data collected from multiple cases (or subjects) within a study, allowing researchers to investigate intervention effectiveness at the individual level and also to investigate how these individual intervention effects change over time. There is increasing interest in the field regarding how SCEDs can be used to establish an evidence base for interventions by synthesizing data from a series of intervention studies. Although using multilevel models to meta-analyze SCED studies is promising, their application is often hampered by the technical complexity involved. First, this article provides an accessible description and overview of the potential of multilevel meta-analysis to combine SCED data. Second, a summary of the methodological evidence on the performance of multilevel models for meta-analysis is provided, which is useful given that such evidence is currently scattered over multiple technical articles in the literature. Third, the actual steps to perform a multilevel meta-analysis are outlined in a brief practical guide. Fourth, a suggestion for integrating the quantitative results with a visual representation is provided.
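The simplest pooling step that a multilevel meta-analysis generalises (by adding between-case variance components) is an inverse-variance weighted average of case-level effect estimates. A minimal sketch with hypothetical values follows; the cited article works with full multilevel models and R-based tooling, so this is only an illustration of the basic weighting idea, not the authors' method:

```python
def pooled_effect(effects, variances):
    """Fixed-effect, inverse-variance weighted average of case-level
    effect estimates: cases measured more precisely (smaller sampling
    variance) receive proportionally larger weights."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical case-level effect estimates and sampling variances
# from three single-case studies.
effects = [0.8, 1.2, 1.0]
variances = [0.25, 0.5, 0.25]
print(round(pooled_effect(effects, variances), 2))  # prints 0.96
```

A full multilevel meta-analysis additionally estimates how much the true effects vary across cases and studies, rather than treating them as fixed.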
Collapse
|
21
|
Swan DM, Pustejovsky JE. A Gradual Effects Model for Single-Case Designs. MULTIVARIATE BEHAVIORAL RESEARCH 2018; 53:574-593. [PMID: 29757002 DOI: 10.1080/00273171.2018.1466681] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Single-case designs are a class of repeated measures experiments used to evaluate the effects of interventions for small or specialized populations, such as individuals with low-incidence disabilities. There has been growing interest in systematic reviews and syntheses of evidence from single-case designs, but there remains a need to further develop appropriate statistical models and effect sizes for data from the designs. We propose a novel model for single-case data that exhibit nonlinear time trends created by an intervention that produces gradual effects, which build up and dissipate over time. The model expresses a structural relationship between a pattern of treatment assignment and an outcome variable, making it appropriate for both treatment reversal and multiple baseline designs. It is formulated as a generalized linear model so that it can be applied to outcomes measured as frequency counts or proportions, both of which are commonly used in single-case research, while providing readily interpretable effect size estimates such as log response ratios or log odds ratios. We demonstrate the gradual effects model by applying it to data from a single-case study and examine the performance of proposed estimation methods in a Monte Carlo simulation of frequency count data.
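One of the interpretable effect sizes mentioned above, the log response ratio, compares phase means on the log scale. A minimal sketch with hypothetical frequency counts follows; it omits the small-sample bias corrections and the gradual-effects weighting of the full model described in the article:

```python
import math

def log_response_ratio(baseline, treatment):
    """Log response ratio (LRR): the log of the ratio of phase means,
    so 0 indicates no change and positive values indicate an increase
    from baseline to treatment."""
    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(treatment) / len(treatment)
    return math.log(mean_b / mean_a)

# Hypothetical frequency counts per session in an A and a B phase.
baseline = [4, 5, 3, 6, 4]
treatment = [9, 11, 10, 12, 10]
print(round(log_response_ratio(baseline, treatment), 3))
```

Because the ratio is taken on the log scale, effects from different cases can be averaged and then back-transformed to a percentage change.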
Collapse
|
22
|
Manolov R, Jamieson M, Evans JJ, Sierra V. Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs. Behav Modif 2015; 39:691-720. [DOI: 10.1177/0145445515593512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One of the procedures, included due to the importance of providing objective criteria to visual analysts, is a visual aid fitting and projecting a split-middle trend while taking into account data variability. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide regarding the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The two procedures concur to a greater extent with the values of the computed indices and with each other, and to a lesser extent with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided for promoting the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.
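The nonoverlap quantification referred to above can be illustrated with one common index, Nonoverlap of All Pairs (NAP); the article itself provides R code, so the following Python sketch with hypothetical data is only an illustration of the general idea:

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs (NAP): the proportion of all
    baseline-treatment pairs in which the treatment value exceeds
    the baseline value, counting ties as half an overlap."""
    score = 0.0
    for a in baseline:
        for b in treatment:
            if b > a:
                score += 1.0
            elif b == a:
                score += 0.5
    return score / (len(baseline) * len(treatment))

# Hypothetical measurements; higher values are assumed to be better.
baseline = [2, 3, 5, 3]
treatment = [6, 7, 5, 8]
print(nap(baseline, treatment))  # prints 0.96875
```

A NAP of 1.0 means complete separation of the phases; 0.5 means chance-level overlap.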
Collapse
Affiliation(s)
- Rumen Manolov
- University of Barcelona, Spain
- Ramon Llull University, Barcelona, Spain
Collapse
|
23
|
Ross SG, Begeny JC. Single-case effect size calculation: Comparing regression and non-parametric approaches across previously published reading intervention data sets. J Sch Psychol 2014; 52:419-31. [DOI: 10.1016/j.jsp.2014.06.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2012] [Revised: 06/12/2014] [Accepted: 06/13/2014] [Indexed: 11/28/2022]
|
24
|
|
25
|
Abstract
A wide variety of effect sizes (ESs) has been used in the single-case design literature. Several researchers have “stress tested” these ESs by subjecting them to various degrees of problem data (e.g., autocorrelation, slope), thereby establishing the conditions under which different ESs can be considered valid. However, few researchers have considered how prevalent and severe these problems are in extant data and, as a result, how concerned applied researchers should be. The current study extracted and aggregated indicators of violations of normality and independence across four domains of educational study. Significant violations were found in total and across fields, including low levels of autocorrelation and moderate levels of absolute trend. These violations affect the selection and interpretation of ESs at the individual study level and for meta-analysis. Implications and recommendations are discussed.
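The independence violations tallied above are usually quantified as lag-1 autocorrelation, the correlation of each observation with its successor. A minimal sketch of a common small-sample estimator follows, using hypothetical data; the study itself aggregated such indicators from published data sets rather than computing them this way:

```python
def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: covariance of adjacent observations
    around the overall mean, divided by the overall variance."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t + 1] - mean)
              for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

# A hypothetical short series with a rise-and-fall pattern, which
# produces positive lag-1 autocorrelation.
series = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2]
print(round(lag1_autocorrelation(series), 3))
```

Positive values indicate that adjacent measurements resemble each other more than chance would suggest, which inflates the type I error rates of many ESs and significance tests.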
Collapse
|