1
Liu Y, Hau KT, Liu H. Linear Mixed-Effects Models for Dependent Data: Power and Accuracy in Parameter Estimation. Multivariate Behavioral Research 2024:1-17. [PMID: 38779786] [DOI: 10.1080/00273171.2024.2350236]
Abstract
Linear mixed-effects models have been increasingly used to analyze dependent data in psychological research. Despite their many advantages over ANOVA, critical issues remain in their analysis. As the number of random effects and the model complexity increase, estimation becomes computationally demanding and convergence becomes challenging, so applied users need guidance in choosing appropriate methods to estimate random effects. The present Monte Carlo simulation study investigated the impact of model misspecification under restricted maximum likelihood (REML) and Bayesian estimation. We also compared the performance of the Akaike information criterion (AIC) and the deviance information criterion (DIC) in model selection. Results showed that models neglecting existing random effects had inflated Type I errors, unacceptable coverage, and inaccurate R-squared measures of fixed and random effects variation. Models with redundant random effects had convergence problems, lower statistical power, and, under Bayesian estimation, inaccurate R-squared measures. The convergence problem was more severe for REML, whereas reduced power and inaccurate R-squared measures were more severe for Bayesian estimation. Notably, DIC was better than AIC at identifying the true models (especially models including a person random intercept only), improving convergence rates, and providing more accurate effect size estimates, although AIC had higher power than DIC with 10 items and the most complicated true model.
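As a toy illustration of the information-criterion comparison described in this abstract, the following Python sketch computes AIC for two hypothetical fitted mixed-effects models and picks the smaller one. The log-likelihoods, parameter counts, and model names are made-up values for illustration, not results from the study:

```python
def aic(loglik, n_params):
    """Akaike information criterion: AIC = 2k - 2*log L (lower is better)."""
    return 2 * n_params - 2 * loglik

# Hypothetical fitted models: (log-likelihood, number of estimated parameters).
# A person-random-intercept-only model vs. one that adds a random slope.
models = {
    "person_intercept_only": (-512.3, 3),
    "intercept_plus_slope": (-508.1, 5),
}

# Model selection: keep the candidate with the smallest AIC.
best = min(models, key=lambda name: aic(*models[name]))
```

With these illustrative numbers, the extra two parameters of the random-slope model are "worth" the gain in log-likelihood, so it is selected; the same comparison logic applies to DIC, with the effective number of parameters replacing k.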
Affiliation(s)
- Yue Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University
- Kit-Tai Hau
- Faculty of Education, The Chinese University of Hong Kong
- Hongyun Liu
- School of Psychology, Beijing Normal University
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University
2
Manolov R, Tanious R. Type I error rates and power of two randomization test procedures for the changing criterion design. Behav Res Methods 2024; 56:3637-3657. [PMID: 38082114] [PMCID: PMC11133045] [DOI: 10.3758/s13428-023-02303-1]
Abstract
Single-case experimental design (SCED) data can be analyzed following different approaches. One of the earliest proposed options is randomization tests, which benefit from the inclusion of randomization in the design: a desirable methodological feature. Randomization tests have become more feasible with the availability of computational resources, and such tests have been proposed for all major types of SCEDs: multiple-baseline, reversal/withdrawal, alternating treatments, and changing criterion designs. The focus of the current text is on the last of these, given that it has not been the subject of any previous simulation study. Specifically, we estimate Type I error rates and statistical power for two different randomization procedures applicable to changing criterion designs: the phase change moment randomization and the blocked alternating criterion randomization. We include different series lengths, numbers of phases, levels of autocorrelation, and degrees of random variability. The results suggest that Type I error rates are generally controlled and that sufficient power can be achieved with as few as 28-30 measurements for independent data, although more measurements are needed in the case of positive autocorrelation. The presence of a reversal to a previous criterion level is beneficial. R code is provided for carrying out randomization tests following the two randomization procedures.
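The logic of a phase change moment randomization test can be sketched in a few lines of Python. This is a generic AB-comparison illustration under an assumed set of admissible change points, not the authors' R implementation or their changing-criterion procedures:

```python
def phase_change_randomization_test(scores, admissible_points, observed_point):
    """Randomization test for an AB phase change: compute the test statistic
    (absolute difference in phase means) for every admissible phase change
    moment; the p-value is the proportion of admissible moments whose
    statistic is at least as extreme as the one actually observed."""
    def abs_mean_diff(cp):
        a, b = scores[:cp], scores[cp:]
        return abs(sum(b) / len(b) - sum(a) / len(a))

    observed = abs_mean_diff(observed_point)
    stats = [abs_mean_diff(cp) for cp in admissible_points]
    return sum(s >= observed for s in stats) / len(stats)

# Toy series with a clear level change after the 4th measurement;
# the randomly selected change point was position 4.
p = phase_change_randomization_test([1, 1, 1, 1, 5, 5, 5, 5],
                                    admissible_points=[2, 3, 4, 5, 6],
                                    observed_point=4)
```

Because only the actually used change point yields the maximal statistic here, the p-value is 1/5; with more admissible change points (longer series), smaller p-values become attainable, which is why series length drives power.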
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035, Barcelona, Spain.
- René Tanious
- Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, the Netherlands
3
Blozis SA, Craft M. Alternative covariance structures in mixed-effects models: Addressing intra- and inter-individual heterogeneity. Behav Res Methods 2024; 56:2013-2032. [PMID: 37231325] [PMCID: PMC11327215] [DOI: 10.3758/s13428-023-02133-1]
Abstract
Mixed-effects models for repeated measures and longitudinal data include random coefficients that are unique to the individual. They thus permit subject-specific growth trajectories, as well as direct study of how the coefficients of a growth function vary as a function of covariates. Applications of these models often assume homogeneity of the within-subject residual variance, which characterizes within-person variation after accounting for systematic change, and of the variances of the random coefficients, which quantify individual differences in aspects of change. Alternative covariance structures can be considered, however. These include allowing for serial correlations between the within-subject residuals, to account for dependencies that remain after fitting a particular growth model, or specifying the within-subject residual variance as a function of covariates or of a random subject effect, to address between-subject heterogeneity due to unmeasured influences. Further, the variances of the random coefficients can be modeled as functions of covariates, relaxing the assumption that these variances are constant across subjects and allowing the study of determinants of these sources of variation. In this paper, we consider combinations of these structures that permit flexibility in how mixed-effects models are specified to understand within- and between-subject variation in repeated measures and longitudinal data. Data from three learning studies are analyzed using these different specifications of mixed-effects models.
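One common way to express serial correlation between within-subject residuals, one of the alternative structures this abstract mentions, is a first-order autoregressive (AR(1)) pattern, where the correlation decays with the time lag. A minimal sketch of building such a covariance matrix (function and parameter names are illustrative):

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """AR(1) within-subject covariance: cov(e_t, e_s) = sigma2 * rho**|t - s|.
    Diagonal entries equal the residual variance sigma2; off-diagonal
    entries shrink geometrically as the time lag |t - s| grows."""
    t = np.arange(n_times)
    return sigma2 * rho ** np.abs(np.subtract.outer(t, t))

C = ar1_cov(4, sigma2=2.0, rho=0.5)
```

Heterogeneous within-subject variance would correspond to letting `sigma2` depend on covariates or a random subject effect rather than being a single constant, as the abstract describes.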
Affiliation(s)
- Shelley A Blozis
- Department of Psychology, University of California, Davis, Davis, California, USA.
- Madeline Craft
- Department of Psychology, University of California, Davis, Davis, California, USA
4
Manolov R. Does the choice of a linear trend-assessment technique matter in the context of single-case data? Behav Res Methods 2023; 55:4200-4221. [PMID: 36622560] [DOI: 10.3758/s13428-022-02013-0]
Abstract
Trend is one of the data aspects assessed in the context of single-case experimental designs. This assessment can be performed both visually and quantitatively. Given that trend, just like other relevant data features such as level, immediacy, or overlap, does not have a single operative definition, a comparison among the existing alternatives is necessary. Previous studies have illustrated differences between trend-line fitting techniques using real data. In the current study, I carry out a simulation to study the degree to which different trend-line fitting techniques lead to different degrees of bias, mean square error, and statistical power for a variety of quantifications that entail trend lines. The simulation involves generating both continuous and count data for several phase lengths, degrees of autocorrelation, and effect sizes (change in level and change in slope). The results suggest that, in general, ordinary least squares estimation performs well in terms of relative bias and mean square error. In particular, quantifying slope change is associated with better statistical results than quantifying an average difference between conditions on the basis of a projected baseline trend. In contrast, the performance of the split-middle (bisplit) technique is less than optimal.
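Two of the trend-line fitting techniques compared in this study, ordinary least squares and the split-middle (bisplit) line, can be sketched as follows. This is a simplified illustration, not the study's code: the bisplit variant here connects the (median time point, median score) of the two half-series, one common formulation of the technique:

```python
import statistics

def ols_trend(y):
    """Ordinary least squares slope and intercept of y regressed on time 0..n-1."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

def bisplit_trend(y):
    """Split-middle trend: connect the (median time, median score) points
    of the first and second halves of the series."""
    n = len(y)
    x1 = statistics.median(range(n // 2))
    x2 = statistics.median(range(n - n // 2, n))
    y1 = statistics.median(y[:n // 2])
    y2 = statistics.median(y[n - n // 2:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1
```

On perfectly linear data both techniques recover the same line; their behavior diverges once noise, outliers, or autocorrelation are present, which is what the simulation compares.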
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035, Barcelona, Spain.
5
Baek E, Luo W, Lam KH. Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling. Behav Modif 2023; 47:1546-1573. [PMID: 36647266] [DOI: 10.1177/01454455221144034]
Abstract
Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental design (SCED) data. In this paper, we provide a step-by-step guide to using MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model, then gradually extended to represent more realistic features of SCED data, such as a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The presented approach is then illustrated using real SCED data. Practical recommendations for using the MLM approach are also provided for applied researchers based on the current methodological literature. Available free and commercial software programs for meta-analyzing SCED data are introduced, along with hands-on software code that applied researchers can adapt for their own studies. Potential advantages and limitations of using the MLM approach to meta-analyze SCEDs are discussed.
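The basic three-level structure described in this abstract (observations nested in cases nested in studies) can be made concrete with a small simulation sketch. All parameter names and values are illustrative assumptions, not taken from the paper:

```python
import random

def simulate_three_level(n_studies, n_cases, n_obs, effect,
                         sd_study=0.5, sd_case=0.5, sd_obs=1.0, seed=1):
    """Simulate SCED meta-analysis data under a basic three-level model:
    the treatment effect varies across studies (level 3) and across cases
    within studies (level 2), with observation-level noise (level 1)."""
    rng = random.Random(seed)
    rows = []
    for s in range(n_studies):
        study_dev = rng.gauss(0, sd_study)       # level-3 deviation
        for c in range(n_cases):
            case_dev = rng.gauss(0, sd_case)     # level-2 deviation
            for t in range(n_obs):
                phase = int(t >= n_obs // 2)     # 0 = baseline, 1 = treatment
                y = (effect + study_dev + case_dev) * phase + rng.gauss(0, sd_obs)
                rows.append({"study": s, "case": c, "time": t,
                             "phase": phase, "y": y})
    return rows

data = simulate_three_level(n_studies=5, n_cases=4, n_obs=10, effect=2.0)
```

Fitting a three-level MLM to such data would recover the average effect (`effect`) along with the between-study and between-case variance components (`sd_study**2`, `sd_case**2`).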
Affiliation(s)
- Wen Luo
- Texas A&M University, College Station, TX, USA
6
Baek E, Luo W. Modeling multiple dependent variables in meta-analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2023; 55:3760-3771. [PMID: 36253594] [DOI: 10.3758/s13428-022-01987-1]
Abstract
Although meta-analyses of single-case experimental designs (SCEDs) often include multiple types of dependent variables (DVs), multiple DVs are rarely modeled in the analysis. Baek et al. (Journal of Experimental Education, 90(4), 934-961, 2022) identified several statistical issues that arise when researchers fail to model multiple DVs in meta-analyses of SCED data. However, the degree to which not modeling multiple DVs impacts the results of SCED meta-analyses has not been fully examined. In this simulation study, we systematically investigated the impact of not modeling multiple DVs when meta-analyzing SCED data using multilevel modeling. The results demonstrate that modeling multiple DVs has advantages over not modeling them. Modeling multiple DVs enables precise estimation of the effects of the different DVs, in addition to an unbiased and accurate average effect and accurate estimates and inferences for the error variances at both the study and observation levels. The current study also reveals factors (i.e., the number of DVs, the degree of heterogeneity in the level-1 error variances and autocorrelation, and the presence of a moderator effect) that impact the precision and accuracy of the variance parameters.
Affiliation(s)
- Eunkyeng Baek
- Educational Psychology, Texas A&M University, 718E Harrington Tower, 4225 TAMU, College Station, TX, 77843-4225, USA.
- Wen Luo
- Educational Psychology, Texas A&M University, 718E Harrington Tower, 4225 TAMU, College Station, TX, 77843-4225, USA
| |
Collapse
|
7
Tanious R, Manolov R. Violin plots as visual tools in the meta-analysis of Single-Case Experimental Designs. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 2022. [DOI: 10.5964/meth.9209]
Abstract
Despite the existence of sophisticated statistical methods, systematic reviews regularly indicate that single-case experimental designs (SCEDs) are predominantly analyzed through visual tools. For the quantitative aggregation of results, different meta-analytical techniques are available, but specific visual tools for the meta-analysis of SCEDs are lacking. The present article therefore describes the use of violin plots as visual tools for representing the raw data. We first describe the underlying rationale of violin plots and their main characteristics. We then show how violin plots can complement the statistics obtained in a quantitative meta-analysis. The main advantages of violin plots as visual tools in meta-analysis are (a) they preserve information about the raw data from each study, (b) they can visually represent data from different designs in one graph, and (c) they enable the comparison of score distributions from different experimental phases across different studies.
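A violin plot of pooled phase data of the kind this article advocates can be produced with matplotlib's `violinplot`; the scores below are made-up numbers standing in for raw data pooled across studies, and the figure file name is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical raw scores pooled from baseline and intervention phases
# of several studies (illustrative values only).
baseline = [3, 4, 4, 5, 5, 5, 6, 7]
intervention = [7, 8, 8, 9, 9, 10, 10, 12]

fig, ax = plt.subplots()
# One mirrored density ("violin") per phase, with the median marked.
parts = ax.violinplot([baseline, intervention], showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Baseline", "Intervention"])
ax.set_ylabel("Outcome score")
fig.savefig("violins.png")
```

Each violin mirrors a kernel density estimate of the raw scores, so overlap (or its absence) between phase distributions remains visible even after aggregation.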
8
Estimation and statistical inferences of variance components in the analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2021; 54:1559-1579. [PMID: 34508288] [DOI: 10.3758/s13428-021-01691-6]
Abstract
Multilevel models (MLMs) can be used to examine treatment heterogeneity in single-case experimental designs (SCEDs). With small sample sizes, common issues in estimating between-case variance components in MLMs include nonpositive definite covariance matrices, biased estimates, misspecification of covariance structures, and invalid Wald tests for variance components with bounded distributions. To address these issues, unconstrained optimization, a model selection procedure based on the parametric bootstrap, and a restricted likelihood ratio test (RLRT)-based procedure are introduced. Using simulation studies, we compared the performance of constrained and unconstrained optimization methods when the covariance structures are correctly specified or misspecified. We also examined the performance of a model selection procedure for obtaining the optimal covariance structure. The results showed that unconstrained optimization can largely avoid nonpositive definite issues without compromising model convergence. Misspecification of covariance structures causes biased estimates, especially when between-case variance components are small; however, the model selection procedure was found to attenuate the magnitude of the bias. A practical guideline was generated for empirical researchers in SCEDs, indicating the conditions under which trustworthy point and interval estimates can be obtained for between-case variance components in MLMs, as well as the conditions under which the RLRT-based procedure produces acceptable empirical Type I error rates and power.
9
Baek E, Ferron JM. Bayesian Analysis for Multiple-baseline Studies Where the Variance Differs across Cases in OpenBUGS. Dev Neurorehabil 2021; 24:130-143. [PMID: 33393404] [DOI: 10.1080/17518423.2020.1858455]
Abstract
Objective: There is growing interest in the potential benefits of applying Bayesian estimation to multilevel models of SCED data. Methodological studies have shown that Bayesian estimation resolves convergence issues, can be adequate for small samples, and can improve the accuracy of the variance components. Despite these potential benefits, the lack of accessible software code makes it difficult for applied researchers to implement Bayesian estimation in their studies. The purpose of this article is to illustrate a feasible way to implement Bayesian estimation using OpenBUGS software to analyze a complex SCED model in which within-participant variability and autocorrelation may differ across cases. Method: Using data extracted from a published study, step-by-step guidance in analyzing the data with OpenBUGS is provided, including (1) model specification, (2) prior distributions, (3) data entry, (4) model estimation, (5) convergence criteria, and (6) posterior inferences and interpretations. Result: Full code for the analysis is provided.