1. Lee JD, Kang VY, Terol AK, Joo S. Examining the Efficacy of Culturally Responsive Interventions for Autistic Children and Their Families: A Meta-Analysis. J Autism Dev Disord 2025;55:706-726. PMID: 38246962; PMCID: PMC11260274; DOI: 10.1007/s10803-023-06212-2.
Abstract
Culturally responsive interventions for autistic children and their families have been developed and implemented to address limited representation, inequities, and disparities in access to care among minoritized families in research. Currently available reviews are relatively limited in scope or do not specifically synthesize interventions. We therefore conducted a meta-analysis of autism intervention literature that specifically targeted autistic individuals and their family members from minoritized backgrounds, such as immigrant families. We searched four databases to identify studies that used culturally responsive interventions with minoritized autistic children and their families. An article was included if it reported empirical intervention data from an experimental design. A total of 354 studies were initially screened, and 24 studies were included. Effect sizes were extracted at two levels (i.e., child and family). Data from group-design studies were extracted manually, and data from single-case design studies were extracted using a web-based tool. We used design-comparable standardized effect sizes to compare across both designs. The analysis revealed a large, positive, and significant overall effect size across culturally responsive interventions. Specifically, social-communication and mental health outcomes yielded significant effects at the child level. Additionally, parents' mental health and fidelity of strategy implementation also yielded significant results. Our results suggest that culturally responsive interventions yield outcomes comparable to those of unadapted, original interventions. Future research should examine the distinction between the effect of cultural adaptation and the efficacy of the intervention itself.
Affiliation(s)
- James D Lee: Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Veronica Y Kang: Department of Counseling, Higher Education, and Special Education, University of Maryland, College Park, USA
- Adriana Kaori Terol: Department of Special Education, University of Illinois Urbana-Champaign, Champaign, USA
- Sean Joo: Department of Educational Psychology, University of Kansas, Lawrence, USA

2. Van den Noortgate W, Onghena P. Harnessing Available Evidence in Single-Case Experimental Studies: The Use of Multilevel Meta-Analysis. Psychol Belg 2024;64:166-184. PMID: 39464391; PMCID: PMC11505138; DOI: 10.5334/pb.1307.
Abstract
The use of multilevel models to combine and compare the results of multiple single-case experimental design (SCED) studies was proposed about two decades ago. Since then, the number of multilevel meta-analyses of SCED studies has steadily increased, together with the complexity of the multilevel models used. At the same time, many studies have empirically evaluated the approach in a variety of situations and have examined how the flexibility of multilevel models can be employed to account for the many complexities often encountered in SCED research, such as autocorrelation, linear and nonlinear time trends, specific designs, external event effects, multiple outcomes, and heterogeneity. In this paper, we give a state-of-the-art overview of the multilevel approach: we survey basic and more extended models, summarize simulation results, and discuss some remaining issues.
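
To make the basic approach concrete, the following is a minimal sketch of a three-level model (measurements nested within cases nested within studies) of the kind reviewed in this paper. It assumes a hypothetical long-format data frame `sced` with columns `study`, `case`, `session`, `phase` (0 = baseline, 1 = intervention), and outcome `y`; all names are illustrative, not taken from the paper.

```r
# Three-level SCED meta-analysis sketch: the fixed effect of `phase` is the
# overall treatment effect; random effects let it vary across studies and
# across cases within studies. AR(1) errors model autocorrelation.
library(nlme)

fit <- lme(
  fixed       = y ~ phase,
  random      = ~ phase | study/case,
  correlation = corAR1(form = ~ session | study/case),
  data        = sced,
  method      = "REML"
)
summary(fit)  # fixed `phase` coefficient = overall effect estimate
```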

Affiliation(s)
- Wim Van den Noortgate: Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium; itec, an imec research group at KU Leuven, Belgium
- Patrick Onghena: Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium

3. Van Norman ER, Klingbeil DA, Sturgell AK. The influence of procedural characteristics on within-case effect sizes for academic outcomes. J Sch Psychol 2024;106:101347. PMID: 39251321; DOI: 10.1016/j.jsp.2024.101347.
Abstract
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), the number of baseline observations, and the number of intervention observations, influence the magnitude of four SCED effect sizes: (a) non-overlap of all pairs (NAP), (b) baseline-corrected tau (BC-Tau), (c) mean phase difference (MPD), and (d) generalized least squares (GLS), when applied to hypothetical academic intervention SCED data. Higher levels of measurement error decreased the average magnitude of effect sizes, particularly NAP and BC-Tau. However, the number of intervention observations had minimal impact on the average magnitude of NAP and BC-Tau, whereas increasing the number of intervention observations dramatically increased the magnitude of GLS and MPD. Increasing the number of baseline observations also tended to increase the average magnitude of MPD. The ratio of baseline to intervention observations had a statistically but not practically significant influence on the average magnitude of NAP, BC-Tau, and GLS. Careful consideration is required when determining how long academic SCEDs are conducted and which effect sizes are used to summarize treatment outcomes. This article also highlights the value of using meaningful simulation conditions to understand the performance of SCED effect sizes.
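
As a point of reference for one of the indices studied here, NAP for a single AB comparison can be computed in a few lines. The sketch below is a generic illustration (assuming higher scores indicate improvement), not code from the article.

```r
# NAP: the proportion of all baseline-treatment pairs in which the
# treatment observation exceeds the baseline observation; ties count 0.5.
nap <- function(baseline, treatment) {
  diffs <- outer(treatment, baseline, FUN = "-")  # all pairwise differences
  (sum(diffs > 0) + 0.5 * sum(diffs == 0)) / length(diffs)
}

nap(baseline = c(2, 3, 2, 4), treatment = c(5, 6, 4, 7, 6))  # 0.975
```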

Affiliation(s)
- Ethan R Van Norman: Center for Promoting Research to Practice, Lehigh University, United States
- David A Klingbeil: Department of Educational Psychology, University of Wisconsin-Madison, United States
- Adelle K Sturgell: Center for Promoting Research to Practice, Lehigh University, United States

4. Chen LT, Chen YK, Yang TR, Chiang YS, Hsieh CY, Cheng C, Ding QW, Wu PJ, Peng CYJ. Examining the normality assumption of a design-comparable effect size in single-case designs. Behav Res Methods 2024;56:379-405. PMID: 36650402; DOI: 10.3758/s13428-022-02035-8.
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES; i.e., g_AB) to gauge an intervention in single-case experimental design (SCED) studies, or to synthesize findings in meta-analysis. So far, no research has examined g_AB's performance under non-normal distributions. This study expanded Pustejovsky et al. (2014) to investigate the impact of data distributions, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (ρ), ratio of variance components (λ), and autocorrelation (ϕ) on g_AB in the multiple-baseline (MB) design. The performance of g_AB was assessed by relative bias (RB), relative bias of variance (RBV), MSE, and coverage rate of 95% CIs (CR). Findings revealed that g_AB was unbiased even under non-normal distributions. g_AB's variance was generally overestimated, and its 95% CI over-covered, especially when distributions were normal or nearly normal combined with small m and N. Large imprecision of g_AB occurred when m was small and ρ was large. According to the ANOVA results, data distributions contributed approximately 49% of the variance in RB and 25% of the variance in both RBV and CR; m and ρ each contributed 34% of the variance in MSE. We recommend g_AB for MB studies and meta-analysis with N ≥ 16 and when either (1) data distributions are normal or nearly normal, m = 6, and ρ = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m ≥ 4, and ρ = 0.2, 0.4, or 0.6. The paper concludes with a discussion of g_AB's applicability and design-comparability, and sound reporting practices for ES indices.
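
For orientation, the core idea behind a design-comparable effect size such as g_AB is to scale the treatment fixed effect from a multilevel model by the total (between-case plus within-case) standard deviation, so the result is comparable to a between-group standardized mean difference. The sketch below illustrates only that idea, omitting the small-sample correction and the exact estimators of Pustejovsky et al. (2014); the data frame `mb` and its columns are hypothetical.

```r
# Design-comparable SMD sketch for one multiple-baseline study:
# treatment effect divided by sqrt(between-case + within-case variance).
library(nlme)

fit <- lme(y ~ phase, random = ~ 1 | case, data = mb,
           correlation = corAR1(form = ~ session | case))

beta   <- fixef(fit)["phase"]                                  # effect estimate
tau2   <- as.numeric(VarCorr(fit)["(Intercept)", "Variance"])  # between-case
sigma2 <- fit$sigma^2                                          # within-case
g_AB   <- beta / sqrt(tau2 + sigma2)  # uncorrected design-comparable SMD
```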

Affiliation(s)
- Li-Ting Chen: Department of Educational Studies, University of Nevada, Reno, Reno, NV, USA
- Yi-Kai Chen: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Tong-Rong Yang: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yu-Shan Chiang: Department of Curriculum & Instruction, Indiana University Bloomington, Bloomington, IN, USA
- Cheng-Yu Hsieh: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Psychology, Royal Holloway, University of London, Egham, UK
- Che Cheng: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Qi-Wen Ding: Institute of Sociology, Academia Sinica, Taipei, Taiwan
- Po-Ju Wu: Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA
- Chao-Ying Joanne Peng: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA

5. Moeyaert M, Dehghan-Chaleshtori M, Xu X, Yang P. Single-case design meta-analyses in education and psychology: a systematic review of methodology. Front Res Metr Anal 2023;8:1190362. PMID: 38025959; PMCID: PMC10679716; DOI: 10.3389/frma.2023.1190362.
Abstract
Meta-analysis is of increasing importance because this quantitative synthesis technique can summarize a tremendous amount of research evidence, which helps in making evidence-based decisions in policy, practice, and theory. This paper examines single-case meta-analyses within the education and psychology fields. The number of methodological studies related to the meta-analysis of single-case experimental designs (SCEDs) is increasing rapidly, especially in these fields. This underscores the need for a succinct summary to help methodologists identify areas for further development in education and psychology research. It also aids applied researchers and research synthesists in discerning when to use meta-analytic techniques for SCED studies based on criteria such as bias, mean squared error, 95% confidence intervals, Type I error rates, and statistical power. Based on the summary of empirical evidence from 18 reports identified through a systematic search procedure, we extracted information related to meta-analytic techniques, data generation and analysis models, design conditions, statistical properties, conditions under which each meta-analytic technique is appropriate, and the study purpose(s). The results indicate that three-level hierarchical linear modeling is the most empirically validated SCED meta-analytic technique, and parameter bias is the most prominently investigated statistical property. A large number of primary studies (more than 30) and at least 20 measurement occasions per participant are recommended for the use of SCED meta-analysis in the education and psychology fields.

Affiliation(s)
- Mariola Moeyaert: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States
- Marzieh Dehghan-Chaleshtori: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States
- Xinyun Xu: Department of Educational and Counseling Psychology, University at Albany-State University of New York, Albany, NY, United States; Center of Tsinghua Think Tanks, Tsinghua University, Beijing, China
- Panpan Yang: Center for Research on Child Wellbeing, Princeton University, Princeton, NJ, United States

6. Moeyaert M, Yang P, Xu X, Kim E. Characteristics of Moderators in Meta-Analyses of Single-Case Experimental Design Studies. Behav Modif 2023;47:1510-1545. PMID: 33759586; DOI: 10.1177/01454455211002111.
Abstract
Hierarchical linear modeling (HLM) has been recommended as a meta-analytic technique for the quantitative synthesis of single-case experimental design (SCED) studies. The HLM approach is flexible and can model a variety of SCED data complexities, such as intervention heterogeneity. A major advantage of using HLM is that participant and/or study characteristics can be incorporated in the model in an attempt to explain intervention heterogeneity. The inclusion of moderators in the context of meta-analysis of SCED studies has not yet received attention and is in need of methodological research. Before extending methodological work validating hierarchical linear models that include moderators at different levels, an overview of the characteristics of moderators typically encountered in the field is needed. This will inform the design conditions to be embedded in future methodological studies and ensure that these conditions are realistic and representative of the field of SCED meta-analyses. This study presents the results of a systematic review of SCED meta-analyses, with a particular focus on moderator characteristics. The initial search yielded a total of 910 articles and book chapters. After excluding duplicates and non-peer-reviewed studies, 658 unique peer-reviewed studies were retained and screened by two independent researchers. Sixty articles met the inclusion criteria and were eligible for data retrieval. The results of the analysis of moderator characteristics retrieved from these 60 meta-analyses are presented. The first part of the results section contains an overview of moderator characteristics per moderator level (within-participant, participant, and study level), including the types of moderators, the ratio of the number of moderators to the number of units at that level, the measurement scale, and the degree of missing data. The second part focuses on the metric used to quantify moderator effectiveness and the analysis approach. Based on the results of the systematic review, recommendations are given for conditions to be included in future methodological work.
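
For context, the following is a minimal sketch of how a moderator is typically added to a multilevel SCED meta-analysis, here with the widely used metafor package; the data frame `es`, its columns, and the moderator `age` are hypothetical, not drawn from the article.

```r
# Three-level meta-analysis with a participant-level moderator: one effect
# size per case (yi) with sampling variance vi, cases nested in studies.
library(metafor)

fit <- rma.mv(yi, vi,
              mods   = ~ age,             # does the effect vary with age?
              random = ~ 1 | study/case,  # between-study and between-case variance
              data   = es)
summary(fit)
```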

7. Pustejovsky JE, Swan DM, English KW. An Examination of Measurement Procedures and Characteristics of Baseline Outcome Data in Single-Case Research. Behav Modif 2023;47:1423-1454. PMID: 31375029; DOI: 10.1177/0145445519864264.
Abstract
There has been growing interest in using statistical methods to analyze data and estimate effect size indices from studies that use single-case designs (SCDs), as a complement to traditional visual inspection methods. The validity of a statistical method rests on whether its assumptions are plausible representations of the process by which the data were collected, yet there is evidence that some assumptions-particularly regarding normality of error distributions-may be inappropriate for single-case data. To develop more appropriate modeling assumptions and statistical methods, researchers must attend to the features of real SCD data. In this study, we examine several features of SCDs with behavioral outcome measures in order to inform development of statistical methods. Drawing on a corpus of over 300 studies, including approximately 1,800 cases, from seven systematic reviews that cover a range of interventions and outcome constructs, we report the distribution of study designs, distribution of outcome measurement procedures, and features of baseline outcome data distributions for the most common types of measurements used in single-case research. We discuss implications for the development of more realistic assumptions regarding outcome distributions in SCD studies, as well as the design of Monte Carlo simulation studies evaluating the performance of statistical analysis techniques for SCD data.

8. Baek E, Luo W, Lam KH. Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling. Behav Modif 2023;47:1546-1573. PMID: 36647266; DOI: 10.1177/01454455221144034.
Abstract
Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental designs (SCEDs). In this paper, we provide a step-by-step guideline for using MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model and then gradually extended to represent more realistic features of SCED data, such as a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The presented approach is then illustrated using real SCED data. Practical recommendations for using the MLM approach are provided for applied researchers based on the current methodological literature. Available free and commercial software programs for meta-analyzing SCED data are also introduced, along with several hands-on software examples that applied researchers can adapt for their own studies. Potential advantages and limitations of using the MLM approach for meta-analyzing SCEDs are discussed.
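
To illustrate the kind of extension described here, the sketch below adds a phase-specific time trend and case-specific residual variances to the basic three-level model shown under entry 2. It reuses the same hypothetical `sced` layout (with `time_in_phase` centered at the start of each phase and globally unique case IDs); this is a sketch of the general modeling strategy, not the paper's own code.

```r
# Extended three-level model: immediate level change (`phase`) plus change
# in slope (`time_in_phase:phase`), with heterogeneous within-case variance.
library(nlme)

fit <- lme(
  y ~ time_in_phase * phase,
  random  = ~ phase | study/case,
  weights = varIdent(form = ~ 1 | case),  # a separate residual variance per case
  data    = sced
)
summary(fit)
```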

Affiliation(s)
- Wen Luo: Texas A&M University, College Station, TX, USA

9. Baek E, Luo W. Modeling multiple dependent variables in meta-analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2023;55:3760-3771. PMID: 36253594; DOI: 10.3758/s13428-022-01987-1.
Abstract
Although meta-analyses of single-case experimental designs (SCEDs) often include multiple types of dependent variables (DVs), multiple DVs are rarely modeled in the analysis. Baek et al. (Journal of Experimental Education, 90(4), 934-961, 2022) identified several statistical issues that arise when researchers fail to model multiple DVs in meta-analyses of SCED data. However, the degree to which not modeling multiple DVs impacts the results of a SCED meta-analysis has not been fully examined. In this simulation study, we systematically investigated the impact of not modeling multiple DVs when meta-analyzing SCED data using multilevel modeling. The results demonstrate that modeling multiple DVs has advantages over not modeling them: it yields precise effects for the different DVs in addition to an unbiased and accurate average effect, as well as accurate estimates and inferences for the error variances at both the study and observation levels. The study also reveals factors (i.e., the number of DVs, the degree of heterogeneity in the level-1 error variances and autocorrelation, and the presence of a moderator effect) that impact the precision and accuracy of the variance parameters.

Affiliation(s)
- Eunkyeng Baek: Educational Psychology, Texas A&M University, College Station, TX, USA
- Wen Luo: Educational Psychology, Texas A&M University, College Station, TX, USA

10. Aydin O, Tanious R. Performance criteria-based effect size (PCES) measurement of single-case experimental designs: A real-world data study. J Appl Behav Anal 2022;55:891-918. PMID: 35593661; DOI: 10.1002/jaba.928.
Abstract
Visual analysis and nonoverlap-based effect sizes are the predominant methods for analyzing single-case experimental designs (SCEDs). Although they are popular, they have certain limitations. In this study, a new effect size calculation model for SCEDs, named performance criteria-based effect size (PCES), is proposed to address the limitations of four nonoverlap-based effect size measures that are widely accepted in the literature and blend well with visual analysis. In a field test of PCES, actual data from published studies were utilized, and the relations between PCES, visual analysis, and the four nonoverlap-based methods were examined. In determining the data for the field test, 1,052 tiers (AB phases) were identified from six journals. The results revealed a weak to moderate relation between PCES and the nonoverlap-based methods, owing to PCES's focus on performance criteria. Although PCES has some weaknesses, it promises to avoid the issues that can arise in nonoverlap-based methods, to use quantitative data to determine socially important changes in behavior, and to complement visual analysis.

11. Declercq L, Jamshidi L, Fernández Castilla B, Moeyaert M, Beretvas SN, Ferron JM, Van den Noortgate W. Multilevel Meta-Analysis of Individual Participant Data of Single-Case Experimental Designs: One-Stage versus Two-Stage Methods. Multivariate Behav Res 2022;57:298-317. PMID: 32996335; DOI: 10.1080/00273171.2020.1822148.
Abstract
To conduct a multilevel meta-analysis of multiple single-case experimental design (SCED) studies, the individual participant data (IPD) can be analyzed in one or two stages. In the one-stage approach, a multilevel model is estimated based on the raw data. In the two-stage approach, an effect size is calculated for each participant and these effect sizes and their sampling variances are subsequently combined to estimate a meta-analytic multilevel model. The multilevel model in the two-stage approach has fewer parameters to estimate, in exchange for the reduction of information of the raw data to effect sizes. In this paper we explore how the one-stage and two-stage IPD approaches can be applied in the context of meta-analysis of single-case designs. Both approaches are compared for several single-case designs of increasing complexity. Through a simulation study we show that the two-stage approach obtains better convergence rates for more complex models, but that model estimation does not necessarily converge at a faster speed. The point estimates of the fixed effects are unbiased for both approaches across all models, as such confirming results from methodological research on IPD meta-analysis of group-comparison designs. In light of these results, we discuss the implementation of both methods in R.
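
A minimal sketch of the two-stage route described here, using hypothetical data (`sced` with columns `study`, `case`, `session`, `phase`, `y`): stage one estimates a simple per-case effect, and stage two pools those estimates in a multilevel meta-analytic model. The per-case model is deliberately simplistic (no trend or autocorrelation).

```r
# Stage 1: one regression-based effect estimate (and variance) per case.
library(metafor)

per_case <- split(sced, interaction(sced$study, sced$case, drop = TRUE))
stage1 <- do.call(rbind, lapply(per_case, function(d) {
  m <- lm(y ~ phase, data = d)   # baseline vs. treatment level difference
  data.frame(study = d$study[1], case = d$case[1],
             yi = coef(m)["phase"], vi = vcov(m)["phase", "phase"])
}))

# Stage 2: pool the per-case estimates, cases nested within studies.
stage2 <- rma.mv(yi, vi, random = ~ 1 | study/case, data = stage1)
summary(stage2)
```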

Affiliation(s)
- Lies Declercq: Faculty of Psychology and Educational Sciences, KU Leuven; ITEC, imec research group, KU Leuven
- Laleh Jamshidi: Faculty of Psychology and Educational Sciences, KU Leuven; ITEC, imec research group, KU Leuven
- Mariola Moeyaert: Department of Educational Psychology and Methodology, University at Albany, Albany, NY
- John M Ferron: Department of Educational Measurement and Research, University of South Florida
- Wim Van den Noortgate: Faculty of Psychology and Educational Sciences, KU Leuven; ITEC, imec research group, KU Leuven

12. Somer E, Gische C, Miočević M. Methods for Modeling Autocorrelation and Handling Missing Data in Mediation Analysis in Single Case Experimental Designs (SCEDs). Eval Health Prof 2022;45:36-53. PMID: 35225017; PMCID: PMC8980456; DOI: 10.1177/01632787211071136.
Abstract
Single-case experimental designs (SCEDs) are increasingly recognized as a valuable alternative to group designs. Mediation analysis is useful in SCED contexts because it informs researchers about the underlying mechanism through which an intervention influences the outcome. However, methods for conducting mediation analysis in SCEDs have only recently been proposed. Furthermore, repeated measures of a target behavior present the challenges of autocorrelation and missing data. This paper extends methods for estimating indirect effects in piecewise regression analysis in SCEDs by (1) evaluating three methods for modeling autocorrelation, namely Newey-West (NW) estimation, feasible generalized least squares (FGLS) estimation, and explicit modeling of an autoregressive structure of order one (AR(1)) in the error terms, and (2) evaluating multiple imputation in the presence of data that are missing completely at random. FGLS and AR(1) outperformed NW and OLS estimation in terms of efficiency, Type I error rates, and coverage, while OLS was superior to the other methods in terms of power for larger samples. The performance of all methods was consistent across the 0% and 20% missing-data conditions; 50% missing data led to unsatisfactory power and biased estimates. In light of these findings, we provide recommendations for applied researchers.
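
Two of the autocorrelation-handling options compared here can be sketched as follows for a single interrupted time series; the data frame `d` and its columns (`session`, `phase`, `y`) are hypothetical, and the two-equation mediation structure of the paper is omitted for brevity.

```r
# Option 1: model AR(1) errors explicitly via generalized least squares.
library(nlme)
fit_ar1 <- gls(y ~ session + phase, data = d,
               correlation = corAR1(form = ~ session))

# Option 2: keep OLS point estimates but use Newey-West (HAC) standard errors.
library(sandwich)
library(lmtest)
fit_ols <- lm(y ~ session + phase, data = d)
coeftest(fit_ols, vcov = NeweyWest(fit_ols))
```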

Affiliation(s)
- Emma Somer: Department of Psychology, McGill University, Montreal, QC, Canada
- Christian Gische: Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Milica Miočević: Department of Psychology, McGill University, Montreal, QC, Canada

13. Friedel JE, Cox A, Galizio A, Swisher M, Small ML, Perez S. Monte Carlo Analyses for Single-Case Experimental Designs: An Untapped Resource for Applied Behavioral Researchers and Practitioners. Perspect Behav Sci 2021;45:209-237. DOI: 10.1007/s40614-021-00318-7.

14. Estimation and statistical inferences of variance components in the analysis of single-case experimental design using multilevel modeling. Behav Res Methods 2021;54:1559-1579. PMID: 34508288; DOI: 10.3758/s13428-021-01691-6.
Abstract
Multilevel models (MLMs) can be used to examine treatment heterogeneity in single-case experimental designs (SCEDs). With small sample sizes, common issues in estimating between-case variance components in MLMs include nonpositive definite covariance matrices, biased estimates, misspecification of covariance structures, and invalid Wald tests for variance components with bounded distributions. To address these issues, unconstrained optimization, a model selection procedure based on the parametric bootstrap, and a restricted likelihood ratio test (RLRT)-based procedure are introduced. Using simulation studies, we compared the performance of two types of optimization methods (constrained vs. unconstrained) when the covariance structures are correctly specified or misspecified. We also examined the performance of a model selection procedure for obtaining the optimal covariance structure. The results showed that unconstrained optimization can avoid nonpositive definite issues to a great extent without compromising model convergence. Misspecification of the covariance structure caused biased estimates, especially with small between-case variance components; however, the model selection procedure attenuated the magnitude of the bias. A practical guideline was generated for empirical researchers in SCEDs, indicating the conditions under which trustworthy point and interval estimates can be obtained for between-case variance components in MLMs, as well as the conditions under which the RLRT-based procedure produces acceptable empirical Type I error rates and power.
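
One way to carry out a restricted likelihood ratio test of a between-case variance component from R is via the RLRsim package, which simulates the null distribution of the test statistic. This is a generic illustration under assumptions (an lme4 fit, hypothetical data frame `mb` with columns `case`, `phase`, `y`), not necessarily the procedure implemented in this paper.

```r
# Simulation-based RLRT of H0: between-case intercept variance = 0.
library(lme4)
library(RLRsim)

m <- lmer(y ~ phase + (1 | case), data = mb, REML = TRUE)
exactRLRT(m)  # simulated null distribution of the restricted LR statistic
```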

15. The Power to Explain Variability in Intervention Effectiveness in Single-Case Research Using Hierarchical Linear Modeling. Perspect Behav Sci 2021;45:13-35. DOI: 10.1007/s40614-021-00304-z.

16. Dowdy A, Peltier C, Tincani M, Schneider WJ, Hantula DA, Travers JC. Meta-analyses and effect sizes in applied behavior analysis: A review and discussion. J Appl Behav Anal 2021;54:1317-1340. PMID: 34219222; DOI: 10.1002/jaba.862.
Abstract
For more than four decades, researchers have used meta-analyses to synthesize data from multiple experimental studies, often to draw conclusions that individual studies alone cannot support. More recently, single-case experimental design (SCED) researchers have adopted meta-analytic techniques to answer research questions with data gleaned from SCED experiments. Meta-analyses enable researchers to answer questions regarding intervention efficacy, generality, and condition boundaries. Here we discuss meta-analytic techniques, the rationale for adapting them to SCED studies, and the current indices used to quantify the effect of SCED data in applied behavior analysis.

Affiliation(s)
- Art Dowdy: Department of Teaching and Learning, Temple University
- Corey Peltier: Department of Educational Psychology, University of Oklahoma
- Matt Tincani: Department of Teaching and Learning, Temple University
- W Joel Schneider: Department of Psychological Studies in Education, Temple University

17. Baek E, Ferron JM. Bayesian Analysis for Multiple-baseline Studies Where the Variance Differs across Cases in OpenBUGS. Dev Neurorehabil 2021;24:130-143. PMID: 33393404; DOI: 10.1080/17518423.2020.1858455.
Abstract
Objective: There is growing interest in the potential benefits of applying Bayesian estimation to multilevel models of SCED data. Methodological studies have shown that Bayesian estimation resolves convergence issues, can be adequate for small samples, and can improve the accuracy of variance components. Despite these potential benefits, the lack of accessible software code makes it difficult for applied researchers to implement Bayesian estimation in their studies. The purpose of this article is to illustrate a feasible way to implement Bayesian estimation using OpenBUGS software to analyze a complex SCED model in which within-participant variability and autocorrelation may differ across cases. Method: Using data extracted from a published study, step-by-step guidance for analyzing the data in OpenBUGS is provided, including (1) model specification, (2) prior distributions, (3) data entry, (4) model estimation, (5) convergence criteria, and (6) posterior inferences and interpretations. Results: Full code for the analysis is provided.
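
The article itself provides OpenBUGS code; as a rough R-side alternative (a swapped-in tool, not the paper's method), the same kind of model, with case-specific residual variances, can be sketched with the brms package using a distributional formula. The data frame `mb` and its columns are hypothetical.

```r
# Bayesian multiple-baseline model with heterogeneous within-case variance:
# `sigma ~ 0 + case` gives every case its own residual SD (on the log scale).
library(brms)

fit <- brm(
  bf(y ~ phase + (phase | case),  # case-specific levels and treatment effects
     sigma ~ 0 + case),           # case-specific residual variability
  data = mb, chains = 4, iter = 4000
)
summary(fit)
```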

18. Investigation of Single-Case Multiple-Baseline Randomization Tests of Trend and Variability. Educ Psychol Rev 2020. DOI: 10.1007/s10648-020-09549-7.

19. Baek E, Ferron JM. Modeling heterogeneity of the level-1 error covariance matrix in multilevel models for single-case data. Methodology 2020. DOI: 10.5964/meth.2817.
Abstract
Previous research applying multilevel models to single-case data has made a critical assumption that the level-1 error covariance matrix is constant across all participants. However, the level-1 error covariance matrix may differ across participants and ignoring these differences can have an impact on estimation and inferences. Despite the importance of this issue, the effects of modeling between-case variation in the level-1 error structure had not yet been systematically studied. The purpose of this simulation study was to identify the consequences of modeling and not modeling between-case variation in the level-1 error covariance matrices in single-case studies, using Bayesian estimation. The results of this study found that variance estimation was more sensitive to the method used to model the level-1 error structure than fixed effect estimation, with fixed effects only being impacted in the most extreme heterogeneity conditions. Implications for applied single-case researchers and methodologists are discussed.

20.
Abstract
In the context of single-case experimental designs, replication is crucial. On the one hand, replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the "replicability crisis" provides a more general context that further emphasizes the need to assess consistency across replications. In the current text, we focus on replication of effects within a study, and we specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects builds on one of the promising data-analytic techniques: multilevel models, also known as hierarchical linear models or mixed-effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero; this is relevant for assessing whether the effect is replicated as being non-null. However, we consider it more relevant and informative to assess, for each case, whether the confidence interval for the random effects includes zero (i.e., whether the fixed-effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself differs from zero. The proposal for assessing consistency is illustrated with real data and is implemented in free user-friendly software.
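
A minimal sketch of this kind of consistency check, under assumptions (an lme4 fit on a hypothetical `mb` data frame; approximate normal intervals around the conditional modes; not the authors' own implementation):

```r
# For each case: is the fixed (average) treatment effect a plausible value
# for that case's individual effect?
library(lme4)

fit   <- lmer(y ~ phase + (phase | case), data = mb)
fixed <- fixef(fit)["phase"]

re <- ranef(fit, condVar = TRUE)$case
pv <- attr(re, "postVar")   # k x k x n.cases array of conditional variances
se <- sqrt(pv[2, 2, ])      # `phase` is the 2nd random effect in this model

eff <- fixed + re[, "phase"]  # case-specific effect estimates
data.frame(case = rownames(re),
           consistent = (eff - 1.96 * se) <= fixed & fixed <= (eff + 1.96 * se))
```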

21. Multilevel meta-analysis of multiple regression coefficients from single-case experimental studies. Behav Res Methods 2020;52:2008-2019. PMID: 32144730; DOI: 10.3758/s13428-020-01380-w.
Abstract
The focus of the current study is on handling the dependence among multiple regression coefficients representing the treatment effects when meta-analyzing data from single-case experimental studies. We compare the results when applying three different multilevel meta-analytic models (i.e., a univariate multilevel model avoiding the dependence, a multivariate multilevel model ignoring covariance at higher levels, and a multivariate multilevel model modeling the existing covariance) to deal with the dependent effect sizes. The results indicate better estimates of the overall treatment effects and variance components when a multivariate multilevel model is applied, independent of modeling or ignoring the existing covariance. These findings confirm the robustness of multilevel modeling to misspecifying the existing covariance at the case and study level in terms of estimating the overall treatment effects and variance components. The results also show that the overall treatment effect estimates are unbiased regardless of the underlying model, but the between-case and between-study variance components are biased in certain conditions. In addition, the between-study variance estimates are particularly biased when the number of studies is smaller than 40 (i.e., 10 or 20) and the true value of the between-case variance is relatively large (i.e., 8). The observed bias is larger for the between-case variance estimates compared to the between-study variance estimates when the true between-case variance is relatively small (i.e., 0.5).

22. Cheung MWL. A Guide to Conducting a Meta-Analysis with Non-Independent Effect Sizes. Neuropsychol Rev 2019;29:387-396. PMID: 31446547; PMCID: PMC6892772; DOI: 10.1007/s11065-019-09415-6.
Abstract
Conventional meta-analytic procedures assume that effect sizes are independent. When effect sizes are not independent, conclusions based on these conventional procedures can be misleading or even wrong. Traditional approaches, such as averaging the effect sizes and selecting one effect size per study, are usually used to avoid the dependence of the effect sizes. These ad-hoc approaches, however, may lead to missed opportunities to utilize all available data to address the relevant research questions. Both multivariate meta-analysis and three-level meta-analysis have been proposed to handle non-independent effect sizes. This paper gives a brief introduction to these new techniques for applied researchers. The first objective is to highlight the benefits of using these methods to address non-independent effect sizes. The second objective is to illustrate how to apply these techniques with real data in R and Mplus. Researchers may modify the sample R and Mplus code to fit their data.
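
The paper walks through these models in R and Mplus; the sketch below shows the general shape of both options using the metafor package (a widely used alternative, not necessarily the paper's own code), with a hypothetical data frame `dat` containing yi, vi, study, outcome, and a unique es_id per effect size.

```r
library(metafor)

# Three-level meta-analysis: effect sizes nested within studies.
fit3 <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)

# Multivariate meta-analysis: outcome-specific pooled effects with an
# unstructured between-study covariance matrix.
fitmv <- rma.mv(yi, vi, mods = ~ 0 + outcome,
                random = ~ outcome | study, struct = "UN", data = dat)
```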

Affiliation(s)
- Mike W-L Cheung: Department of Psychology, Faculty of Arts and Social Sciences, National University of Singapore, Singapore

23. Joo SH, Ferron JM. Application of the Within- and Between-Series Estimators to Non-normal Multiple-Baseline Data: Maximum Likelihood and Bayesian Approaches. Multivariate Behav Res 2019;54:666-689. PMID: 30857444; DOI: 10.1080/00273171.2018.1564877.
Abstract
In single-case research, the multiple-baseline (MB) design provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment-phase to baseline-phase observations, but also on time-specific between-series comparisons of observations from cases that have started treatment with those still in baseline. For analyzing MB studies, two types of linear mixed modeling methods have been proposed: the within-series and between-series models. In principle, these models were developed under normality assumptions; however, normality may not always hold in practical settings. Therefore, this study investigated the robustness of the within- and between-series models when data are non-normal. A Monte Carlo study was conducted with four statistical approaches, defined by crossing two analytic decisions: (a) whether to use a within- or between-series estimate of effect, and (b) whether to use restricted maximum likelihood or Markov chain Monte Carlo estimation. The results showed that the treatment effect estimates of the four approaches had minimal bias, that within-series estimates were more precise than between-series estimates, and that confidence interval coverage was frequently acceptable but varied across conditions and methods of estimation. Applications and implications are discussed based on these findings.

Affiliation(s)
- Seang-Hwane Joo: Department of Educational and Psychological Studies, University of South Florida
- John M Ferron: Department of Educational and Psychological Studies, University of South Florida

24. A randomization test wrapper for synthesizing single-case experiments using multilevel models: A Monte Carlo simulation study. Behav Res Methods 2019;52:654-666. PMID: 31270794; DOI: 10.3758/s13428-019-01266-6.
Abstract
Multilevel models (MLMs) have been proposed in single-case research to synthesize data from a group of cases in a multiple-baseline design (MBD). A limitation of this approach is that MLMs require several statistical assumptions that are often violated in single-case research. In this article we propose a solution to this limitation: a randomization test (RT) wrapper for MLMs that offers a nonparametric way to evaluate treatment effects, without making distributional assumptions or an assumption of random sampling. We present the rationale underlying the proposed technique and validate its performance (with respect to Type I error rate and power) against parametric statistical inference in MLMs, in the context of evaluating the average treatment effect across cases in an MBD. We performed a simulation study that manipulated the number of cases and the number of observations per case in a dataset, the data variability between cases, the distributional characteristics of the data, the level of autocorrelation, and the size of the treatment effect. The results showed that the power of the RT wrapper is superior to the power of parametric tests based on F distributions for MBDs with fewer than five cases, and that the Type I error rate of the RT wrapper is controlled for bimodal data, whereas this is not the case for traditional MLMs.
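
The logic of such a wrapper can be sketched as follows, under assumptions (hypothetical `mb` data with columns `case`, `session`, `phase`, `y`, and a vector `starts` of admissible intervention start points; the paper's actual permutation scheme may differ in detail).

```r
# Randomization-test wrapper: compare the observed MLM treatment estimate
# to its distribution under random reassignments of intervention start points.
library(lme4)

observed <- fixef(lmer(y ~ phase + (1 | case), data = mb))["phase"]

perm_stat <- replicate(999, {
  d <- mb
  d$phase <- ave(d$session, d$case, FUN = function(s)
    as.integer(s >= sample(starts, 1)))  # random admissible start per case
  fixef(lmer(y ~ phase + (1 | case), data = d))["phase"]
})

p_value <- mean(c(perm_stat, observed) >= observed)  # one-sided p-value
```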

25. MultiSCED: A tool for (meta-)analyzing single-case experimental data with multilevel modeling. Behav Res Methods 2019;52:177-192. DOI: 10.3758/s13428-019-01216-2.

26. Moeyaert M, Manolov R, Rodabaugh E. Meta-Analysis of Single-Case Research via Multilevel Models: Fundamental Concepts and Methodological Considerations. Behav Modif 2018;44:265-295. PMID: 30360633; DOI: 10.1177/0145445518806867.
Abstract
Multilevel modeling is an approach that can be used to summarize single-case experimental design (SCED) data. Multilevel models were developed to analyze hierarchically structured data, with units at a lower level nested within higher-level units. SCEDs use time-series data collected from multiple cases (or subjects) within a study, which allow researchers to investigate intervention effectiveness at the individual level and also to investigate how these individual intervention effects change over time. There is increased interest in how SCEDs can be used to establish an evidence base for interventions by synthesizing data from a series of intervention studies. Although using multilevel models to meta-analyze SCED studies is promising, application is often hampered by the technical nature of the approach. First, this article provides an accessible description and overview of the potential of multilevel meta-analysis for combining SCED data. Second, it summarizes the methodological evidence on the performance of multilevel models for meta-analysis, which is useful given that such evidence is currently scattered across multiple technical articles. Third, the actual steps to perform a multilevel meta-analysis are outlined in a brief practical guide. Fourth, a suggestion for integrating the quantitative results with a visual representation is provided.

27. Analysis of single-case experimental count data using the linear mixed effects model: A simulation study. Behav Res Methods 2018;51:2477-2497. DOI: 10.3758/s13428-018-1091-y.

28. One by One: Accumulating Evidence by using Meta-Analytical Procedures for Single-Case Experiments. Brain Impair 2017. DOI: 10.1017/brimp.2017.25.
Abstract
This paper presents a unilevel and multilevel approach for the analysis and meta-analysis of single-case experiments (SCEs). We propose a definition of SCEs and derive the specific features of SCEs’ data that have to be taken into account when analysing and meta-analysing SCEs. We discuss multilevel models of increasing complexity and propose alternative and complementary techniques based on probability combining and randomisation test wrapping. The proposed techniques are demonstrated with real-life data and corresponding R code.

29. Jahng S, Wood PK. Multilevel Models for Intensive Longitudinal Data with Heterogeneous Autoregressive Errors: The Effect of Misspecification and Correction with Cholesky Transformation. Front Psychol 2017;8:262. PMID: 28286490; PMCID: PMC5323419; DOI: 10.3389/fpsyg.2017.00262.
Abstract
Intensive longitudinal studies, such as ecological momentary assessment studies using electronic diaries, are gaining popularity across many areas of psychology. Multilevel models (MLMs) are the most widely used analytical tools for intensive longitudinal data (ILD). Although ILD often have individually distinct patterns of serial correlation of measures over time, inferences about the fixed effects and random components in MLMs are made under the assumption that all variance and autocovariance components are homogeneous across individuals. In the present study, we introduce a multilevel model with a Cholesky transformation to model ILD with individually heterogeneous covariance structures. In addition, the performance of the transformation method and the effects of misspecifying the heterogeneous covariance structure were investigated through a Monte Carlo simulation. We found that, if individually heterogeneous covariances are incorrectly assumed to be homogeneous independent or homogeneous autoregressive, MLMs produce highly biased estimates of the variance of random intercepts and of the standard errors of the fixed intercept and the fixed effect of a level-2 covariate when the average autocorrelation is high. For intensive longitudinal data with individual-specific residual covariance, the suggested transformation method showed lower bias in those estimates than the misspecified models when the number of repeated observations within individuals is 50 or more.

Affiliation(s)
- Seungmin Jahng: Department of Psychology, Sungkyunkwan University, Seoul, South Korea
- Phillip K Wood: Department of Psychological Sciences, University of Missouri, Columbia, MO, USA

30. Manolov R, Moeyaert M. Recommendations for Choosing Single-Case Data Analytical Techniques. Behav Ther 2017;48:97-114. PMID: 28077224; DOI: 10.1016/j.beth.2016.04.008.
Abstract
The current paper responds to the need for guidance for applied single-case researchers regarding the possibilities of data analysis. The number of available single-case data-analytical techniques has been growing in recent years, and a general overview comparing the possibilities of these techniques has been missing. Such an overview is provided here, covering techniques that yield results in terms of a raw or standardized difference, procedures related to regression analysis, and nonoverlap and percentage-change indices. The comparison addresses the type of quantification provided, the data features taken into account, the conditions in which the techniques are appropriate, the possibilities for meta-analysis, and the available evidence on their performance. Moreover, we provide a set of recommendations for choosing appropriate analysis techniques, pointing to specific situations (aims, types of data, researchers' resources) and the data-analytical techniques that are most appropriate in those situations. The recommendations are contextualized using a variety of published single-case datasets in order to illustrate the range of realistic situations that researchers have faced and may face in their investigations.

31. Manolov R, Moeyaert M. How Can Single-Case Data Be Analyzed? Software Resources, Tutorial, and Reflections on Analysis. Behav Modif 2016;41:179-228. DOI: 10.1177/0145445516664307.
Abstract
The present article presents a series of software developments in the quantitative analysis of data obtained via single-case experimental designs (SCEDs), together with a tutorial describing these developments. The tutorial focuses on software implementations based on freely available platforms such as R and aims to bring statistical advances closer to applied researchers and help them become autonomous agents in the data analysis stage of a study. The range of analyses dealt with in the tutorial is illustrated on a typical single-case dataset, relying heavily on graphical data representations. We illustrate how visual and quantitative analyses can be used jointly, giving complementary information and helping the researcher decide whether there is an intervention effect, how large it is, and whether it is practically significant. To help applied researchers use the analyses, we have organized the data in the different ways required by the different analytical procedures and made these data available online. We also provide Internet links to all the free software available, as well as the main references for the analytical techniques. Finally, we suggest that appropriate and informative data analysis is likely to be a step forward in documenting and communicating results, and also in increasing the scientific credibility of SCEDs.

32. Moeyaert M, Maggin D, Verkuilen J. Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs. Behav Modif 2016;40:874-900. DOI: 10.1177/0145445516645763.
Abstract
Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level; little is known about the coding of individual data points. We focused on four software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit) and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program but depend on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable, and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. Consequently, WebPlotDigitizer turned out to be the best option in terms of usability, retrieval time, and cost, although the usability scores of Ungraph were also strong.

Affiliation(s)
- Mariola Moeyaert: State University of New York, Albany, USA; The City University of New York, New York City, USA

33. The consequences of modeling autocorrelation when synthesizing single-case studies using a three-level model. Behav Res Methods 2015;48:803-812. DOI: 10.3758/s13428-015-0612-1.

34. Manolov R, Rochat L. Further developments in summarising and meta-analysing single-case data: An illustration with neurobehavioural interventions in acquired brain injury. Neuropsychol Rehabil 2015. PMID: 26214248; DOI: 10.1080/09602011.2015.1064452.
Abstract
Data analysis for single-case designs is an issue that has prompted researchers to propose a variety of alternatives, including randomisation tests, regression-based procedures, and standardised mean differences. Another option consists of computing unstandardised or raw differences between conditions: the changes in slope and in level, or the difference between the projected baseline (including trend) and the actual treatment-phase measurements. Despite the strengths of these procedures (potentially easier clinical interpretation, separate estimates as well as an overall quantification of effects, reasonable performance), they require further development, such as (a) creating extensions for dealing with methodologically strong designs such as multiple baseline, (b) achieving comparability across studies to make meta-analytical integration possible, and (c) implementing software for these extensions. The proposals are illustrated herein in the context of a meta-analysis of 28 studies on (neuro)behavioural interventions in adults with challenging behaviours after acquired brain injury.
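
The projected-baseline idea mentioned here can be sketched in a few lines for one case; the vectors `y_base` and `y_trt` are hypothetical, and this simple OLS-trend version ignores the refinements the authors develop.

```r
# Raw difference beyond baseline trend: fit the baseline trend, extrapolate
# it across the treatment phase, and average the actual-minus-projected gap.
t_base <- seq_along(y_base)
t_trt  <- length(y_base) + seq_along(y_trt)

trend     <- lm(y_base ~ t_base)
projected <- predict(trend, newdata = data.frame(t_base = t_trt))
raw_diff  <- mean(y_trt - projected)
```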

Affiliation(s)
- Rumen Manolov: Department of Behavioural Sciences Methods, University of Barcelona, Barcelona, Spain

35. Manolov R, Jamieson M, Evans JJ, Sierra V. Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs. Behav Modif 2015;39:691-720. DOI: 10.1177/0145445515593512.
Abstract
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this article provides that type of evidence for two procedures. One procedure, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide on the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided to promote the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.

Affiliation(s)
- Rumen Manolov: University of Barcelona, Spain; Ramon Llull University, Barcelona, Spain

36. Moeyaert M, Ferron JM, Beretvas SN, Van den Noortgate W. From a single-level analysis to a multilevel analysis of single-case experimental designs. J Sch Psychol 2013;52:191-211. PMID: 24606975; DOI: 10.1016/j.jsp.2013.11.003.
Abstract
Multilevel modeling provides one approach to synthesizing single-case experimental design data. In this study, we present the multilevel model (the two-level and three-level models) for summarizing single-case results over cases, over studies, or both. In addition to the basic multilevel models, we elaborate on several plausible alternative models. We apply the proposed models to real datasets and investigate to what extent the estimated treatment effect depends on the modeling specifications and underlying assumptions. By considering a range of plausible models and assumptions, researchers can determine the degree to which the effect estimates and conclusions are sensitive to the specific assumptions made. If the same conclusions are reached across a range of plausible assumptions, confidence in the conclusions is enhanced. We advise researchers not to focus on one model but to conduct multiple plausible multilevel analyses and investigate whether the results depend on the modeling options.

Affiliation(s)
- Mariola Moeyaert: Faculty of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Belgium
- John M Ferron: Department of Educational Measurement and Research, University of South Florida, USA
- Wim Van den Noortgate: Faculty of Psychology and Educational Sciences, ITEC-iMinds Kortrijk, Katholieke Universiteit Leuven, Belgium

37. Modeling external events in the three-level analysis of multiple-baseline across-participants designs: A simulation study. Behav Res Methods 2012;45:547-559. DOI: 10.3758/s13428-012-0274-1.