1
Van den Noortgate W, Onghena P. Harnessing Available Evidence in Single-Case Experimental Studies: The Use of Multilevel Meta-Analysis. Psychol Belg 2024; 64:166-184. PMID: 39464391; PMCID: PMC11505138; DOI: 10.5334/pb.1307.
Abstract
The use of multilevel models to combine and compare the results of multiple single-case experimental design (SCED) studies was proposed about two decades ago. Since then, the number of multilevel meta-analyses of SCED studies has steadily increased, together with the complexity of the multilevel models used. At the same time, many studies have empirically evaluated the approach in a variety of situations and examined how the flexibility of multilevel models can be employed to account for the complexities often encountered in SCED research, such as autocorrelation, linear and nonlinear time trends, specific designs, external event effects, multiple outcomes, and heterogeneity. In this paper, we present the state of the art of the multilevel approach by giving an overview of basic and extended models, summarizing simulation results, and discussing some remaining issues.
Affiliation(s)
- Wim Van den Noortgate: Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium; Itec, an imec research group at KU Leuven, Belgium
- Patrick Onghena: Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
2
Moeyaert M, Dehghan-Chaleshtori M, Xu X, Yang P. Single-case design meta-analyses in education and psychology: a systematic review of methodology. Front Res Metr Anal 2023; 8:1190362. PMID: 38025959; PMCID: PMC10679716; DOI: 10.3389/frma.2023.1190362.
Abstract
Meta-analysis is of increasing importance because this quantitative synthesis technique can summarize a tremendous amount of research evidence, which can help in making evidence-based decisions in policy, practice, and theory. This paper examines single-case meta-analyses within the fields of education and psychology. The number of methodological studies related to the meta-analysis of single-case experimental designs (SCEDs) is increasing rapidly, especially in these fields. This underscores the need for a succinct summary that helps methodologists identify areas for further development and helps applied researchers and research synthesists discern when to use meta-analytic techniques for SCED studies, based on criteria such as bias, mean squared error, 95% confidence intervals, Type I error rates, and statistical power. From 18 reports identified through a systematic search procedure, information was extracted on meta-analytic techniques, data generation and analysis models, design conditions, statistical properties, conditions under which each meta-analytic technique is appropriate, and study purpose(s). The results indicate that three-level hierarchical linear modeling is the most empirically validated SCED meta-analytic technique, and that parameter bias is the most frequently investigated statistical property. A large number of primary studies (more than 30) and at least 20 measurement occasions per participant are recommended for the use of SCED meta-analysis in education and psychology.
Affiliation(s)
- Mariola Moeyaert: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States
- Marzieh Dehghan-Chaleshtori: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States
- Xinyun Xu: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States; Center of Tsinghua Think Tanks, Tsinghua University, Beijing, China
- Panpan Yang: Center for Research on Child Wellbeing, Princeton University, Wallace Hall, Princeton, NJ, United States
3
Manolov R, Vannest KJ. A Visual Aid and Objective Rule Encompassing the Data Features of Visual Analysis. Behav Modif 2023; 47:1345-1376. PMID: 31165621; DOI: 10.1177/0145445519854323.
Abstract
Visual analysis of single-case research is commonly described as a gold standard, but it is often unreliable. An objective tool for applying visual analysis is therefore needed as an alternative to the Conservative Dual Criterion, which has some drawbacks. The proposed free web-based tool enables the assessment of change in trend and level between two adjacent phases while taking data variability into account. Applying the tool yields (a) a dichotomous decision regarding the presence or absence of an immediate effect, a progressive or delayed effect, or an overall effect, and (b) a quantification of overlap. The proposal is evaluated by applying it to both real and simulated data, with favorable results. The visual aid and the objective rule are expected to make visual analysis more consistent, but they are not intended as a substitute for the analyst's judgment, as a formal test of statistical significance, or as a tool for assessing social validity.
4
Somer E, Gische C, Miočević M. Methods for Modeling Autocorrelation and Handling Missing Data in Mediation Analysis in Single Case Experimental Designs (SCEDs). Eval Health Prof 2022; 45:36-53. PMID: 35225017; PMCID: PMC8980456; DOI: 10.1177/01632787211071136.
Abstract
Single-Case Experimental Designs (SCEDs) are increasingly recognized as a valuable alternative to group designs. Mediation analysis is useful in SCED contexts because it informs researchers about the underlying mechanism through which an intervention influences the outcome. However, methods for conducting mediation analysis in SCEDs have only recently been proposed. Furthermore, repeated measures of a target behavior present the challenges of autocorrelation and missing data. This paper extends methods for estimating indirect effects in piecewise regression analysis in SCEDs by (1) evaluating three methods for modeling autocorrelation, namely Newey-West (NW) estimation, feasible generalized least squares (FGLS) estimation, and explicit modeling of an autoregressive structure of order one (AR(1)) in the error terms, and (2) evaluating multiple imputation in the presence of data that are missing completely at random. FGLS and AR(1) outperformed NW and ordinary least squares (OLS) estimation in terms of efficiency, Type I error rates, and coverage, while OLS was superior to the other methods in terms of power for larger samples. The performance of all methods was consistent across the 0% and 20% missing data conditions, but 50% missing data led to unsatisfactory power and biased estimates. In light of these findings, we provide recommendations for applied researchers.
Affiliation(s)
- Emma Somer: Department of Psychology, McGill University, Montreal, QC, Canada
- Christian Gische: Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Milica Miočević: Department of Psychology, McGill University, Montreal, QC, Canada
5
Estimation and statistical inferences of variance components in the analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2021; 54:1559-1579. PMID: 34508288; DOI: 10.3758/s13428-021-01691-6.
Abstract
Multilevel models (MLMs) can be used to examine treatment heterogeneity in single-case experimental designs (SCEDs). With small sample sizes, common issues in estimating between-case variance components in MLMs include nonpositive definite covariance matrices, biased estimates, misspecification of covariance structures, and invalid Wald tests for variance components with bounded distributions. To address these issues, unconstrained optimization, a model selection procedure based on the parametric bootstrap, and a restricted likelihood ratio test (RLRT)-based procedure are introduced. Using simulation studies, we compared the performance of constrained and unconstrained optimization when the covariance structure is correctly specified or misspecified. We also examined the performance of a model selection procedure for obtaining the optimal covariance structure. The results showed that unconstrained optimization can avoid nonpositive definite issues to a great extent without compromising model convergence. Misspecification of the covariance structure caused biased estimates, especially with small between-case variance components; however, the model selection procedure attenuated the magnitude of the bias. A practical guideline is provided for empirical researchers in SCEDs, describing the conditions under which trustworthy point and interval estimates can be obtained for between-case variance components in MLMs, as well as the conditions under which the RLRT-based procedure produces acceptable empirical Type I error rates and power.
6
The Power to Explain Variability in Intervention Effectiveness in Single-Case Research Using Hierarchical Linear Modeling. Perspect Behav Sci 2021; 45:13-35. DOI: 10.1007/s40614-021-00304-z.
7
Gooty J, Banks GC, Loignon AC, Tonidandel S, Williams CE. Meta-Analyses as a Multi-Level Model. Organ Res Methods 2019. DOI: 10.1177/1094428119857471.
Abstract
Meta-analyses are well known and widely implemented in almost every domain of research in management as well as the social, medical, and behavioral sciences. While the technique is useful for determining validity coefficients (i.e., effect sizes), meta-analyses are predicated on the assumption that primary effect sizes are independent, an assumption that may be routinely violated in the organizational sciences. Here, we discuss the implications of violating the independence assumption and demonstrate how a meta-analysis can be cast as a multilevel, variance-known (Vknown) model to account for such dependency in primary studies' effect sizes. We illustrate these techniques via the HLM 7.0 software, as it remains the most widely used multilevel analysis software in management. In doing so, we draw on examples from educational psychology (where such techniques were first developed), the organizational sciences, and a Monte Carlo simulation (Appendix). We conclude with a discussion of implications, caveats, and future extensions. Our Appendix details a newly developed, free, R-based application that is user-friendly and provides an alternative to the HLM program.
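The variance-known setup treats each primary effect size as a lowest-level unit whose sampling variance is assumed known. A minimal sketch of the simplest such model, random-effects pooling with DerSimonian-Laird estimation of the between-study variance (the numeric inputs are purely illustrative, not taken from the article):

```python
import numpy as np

def random_effects_meta(effects, variances):
    """Random-effects pooling with known sampling variances
    (the 'variance known' setup), using the DerSimonian-Laird
    moment estimator for the between-study variance tau^2."""
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    theta_fe = np.sum(w * d) / np.sum(w)           # fixed-effect pooled mean
    q = np.sum(w * (d - theta_fe) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    theta_re = np.sum(w_star * d) / np.sum(w_star) # pooled effect
    se = np.sqrt(1.0 / np.sum(w_star))             # its standard error
    return theta_re, se, tau2

# Illustrative effect sizes and known sampling variances
pooled, se, tau2 = random_effects_meta([0.30, 0.55, 0.42, 0.61],
                                       [0.02, 0.03, 0.01, 0.04])
```

When the effect sizes are perfectly homogeneous, the moment estimate of tau^2 is truncated to zero and the pooling reduces to the fixed-effect weighted mean, which is the boundary behavior the multilevel formulation relaxes.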
Affiliation(s)
- Janaki Gooty: Belk College of Business, University of North Carolina at Charlotte, Charlotte, NC, USA
- George C. Banks: Belk College of Business, University of North Carolina at Charlotte, Charlotte, NC, USA
- Andrew C. Loignon: E. J. Ourso College of Business, Louisiana State University, Baton Rouge, LA, USA
- Scott Tonidandel: Belk College of Business, University of North Carolina at Charlotte, Charlotte, NC, USA
- Courtney E. Williams: Belk College of Business, University of North Carolina at Charlotte, Charlotte, NC, USA
8
Moeyaert M, Manolov R, Rodabaugh E. Meta-Analysis of Single-Case Research via Multilevel Models: Fundamental Concepts and Methodological Considerations. Behav Modif 2018; 44:265-295. PMID: 30360633; DOI: 10.1177/0145445518806867.
Abstract
Multilevel modeling is an approach that can be used to summarize single-case experimental design (SCED) data. Multilevel models were developed to analyze hierarchically structured data, with units at a lower level nested within higher-level units. SCEDs use time series data collected from multiple cases (or subjects) within a study, allowing researchers to investigate intervention effectiveness at the individual level and to examine how these individual intervention effects change over time. There is increasing interest in how SCEDs can be used to establish an evidence base for interventions by synthesizing data from a series of intervention studies. Although using multilevel models to meta-analyze SCED studies is promising, their application is often hampered by technical complexity. This article first provides an accessible description and overview of the potential of multilevel meta-analysis to combine SCED data. Second, it summarizes the methodological evidence on the performance of multilevel models for meta-analysis, which is useful given that such evidence is currently scattered across multiple technical articles. Third, the steps for performing a multilevel meta-analysis are outlined in a brief practical guide. Fourth, a suggestion for integrating the quantitative results with a visual representation is provided.
9
Moeyaert M, Ferron JM, Beretvas SN, Van den Noortgate W. From a single-level analysis to a multilevel analysis of single-case experimental designs. J Sch Psychol 2013; 52:191-211. PMID: 24606975; DOI: 10.1016/j.jsp.2013.11.003.
Abstract
Multilevel modeling provides one approach to synthesizing single-case experimental design data. In this study, we present two- and three-level models for summarizing single-case results over cases, over studies, or both. In addition to the basic multilevel models, we elaborate on several plausible alternative models. We apply the proposed models to real datasets and investigate the extent to which the estimated treatment effect depends on the modeling specifications and underlying assumptions. By considering a range of plausible models and assumptions, researchers can determine the degree to which the effect estimates and conclusions are sensitive to the specific assumptions made. If the same conclusions are reached across a range of plausible assumptions, confidence in those conclusions is enhanced. We therefore advise researchers not to focus on a single model but to conduct multiple plausible multilevel analyses and investigate whether the results depend on the modeling choices.
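A minimal two-level sketch of the kind of model discussed, with measurements nested within cases and a case-specific random intercept and random treatment effect. The data are simulated with arbitrary illustrative values; they are not the article's datasets:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for case in range(6):                          # 6 cases in one study
    b0 = 2.0 + rng.normal(scale=0.5)           # case-specific baseline level
    b1 = 1.5 + rng.normal(scale=0.4)           # case-specific treatment effect
    for t in range(20):                        # 20 measurement occasions
        ph = 1.0 if t >= 10 else 0.0           # 0 = baseline, 1 = treatment
        rows.append({"case": case, "phase": ph,
                     "y": b0 + b1 * ph + rng.normal(scale=0.8)})
df = pd.DataFrame(rows)

# Two-level model: random intercept and random phase effect per case.
# The fixed 'phase' coefficient is the average treatment effect.
model = smf.mixedlm("y ~ phase", df, groups=df["case"], re_formula="~phase")
fit = model.fit()
```

A three-level extension would additionally nest cases within studies; `fit.fe_params["phase"]` gives the average effect, while the estimated random-effect covariance describes how the effect varies over cases.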
Affiliation(s)
- Mariola Moeyaert: Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Belgium
- John M Ferron: Department of Educational Measurement and Research, University of South Florida, USA
- Wim Van den Noortgate: Faculty of Psychology and Educational Sciences, ITEC-iMinds Kortrijk, Katholieke Universiteit Leuven, Belgium
10
Baek EK, Moeyaert M, Petit-Bois M, Beretvas SN, Van den Noortgate W, Ferron JM. The use of multilevel analysis for integrating single-case experimental design results within a study and across studies. Neuropsychol Rehabil 2013; 24:590-606. DOI: 10.1080/09602011.2013.835740.
11
Moeyaert M, Ugille M, Ferron JM, Beretvas SN, Van den Noortgate W. The Three-Level Synthesis of Standardized Single-Subject Experimental Data: A Monte Carlo Simulation Study. Multivariate Behav Res 2013; 48:719-748. PMID: 26741060; DOI: 10.1080/00273171.2013.816621.
Abstract
Previous research indicates that three-level modeling is a valid statistical method for making inferences from unstandardized data from a set of single-subject experimental studies, especially when a homogeneous set of at least 30 studies is included (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a). When single-subject data from multiple studies are combined, however, the dependent variable is often measured on different scales, requiring standardization of the data before combining them across studies. One approach is to divide the dependent variable by the residual standard deviation. In this study we use Monte Carlo methods to evaluate this approach. We examine how well the fixed effects (e.g., the immediate treatment effect and the treatment effect on the time trend) and the variance components (the between- and within-subject variance) are estimated under a number of realistic conditions. The three-level synthesis of standardized single-subject data is found to be appropriate for estimating treatment effects, especially when many studies (30 or more) and many measurement occasions per subject (20 or more) are included and when the studies are rather homogeneous (with small between-study variance). The estimates of the variance components are less accurate.
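The standardization step the simulation evaluates, dividing each subject's scores by the residual standard deviation from a regression of the outcome on that subject's design matrix, can be sketched as follows (the design matrix and data below are illustrative, not from the study):

```python
import numpy as np

def standardize_case(y, X):
    """Divide one case's outcome scores by the residual standard
    deviation of an OLS regression of y on the design matrix X,
    so that outcomes measured on different scales become comparable."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]            # residual degrees of freedom
    s = np.sqrt(resid @ resid / dof)     # residual standard deviation
    return y / s

# One case: intercept + phase indicator over 12 occasions
X = np.column_stack([np.ones(12), (np.arange(12) >= 6).astype(float)])
rng = np.random.default_rng(1)
y = 3.0 + 2.0 * X[:, 1] + rng.normal(size=12)
z = standardize_case(y, X)
```

Rescaling the raw scores by any constant leaves the standardized series unchanged, which is exactly the property that makes outcomes on different measurement scales combinable across studies.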