1
Manolov R, Tanious R. Type I error rates and power of two randomization test procedures for the changing criterion design. Behav Res Methods 2024; 56:3637-3657. PMID: 38082114; PMCID: PMC11133045; DOI: 10.3758/s13428-023-02303-1.
Abstract
Single-case experimental design (SCED) data can be analyzed following different approaches. One of the first historically proposed options is randomization tests, which benefit from the inclusion of randomization in the design: a desirable methodological feature. Randomization tests have become more feasible with the availability of computational resources, and such tests have been proposed for all major types of SCEDs: multiple-baseline, reversal/withdrawal, alternating treatments, and changing criterion designs. The focus of the current text is on the last of these, given that changing criterion designs have not been the subject of any previous simulation study. Specifically, we estimate type I error rates and statistical power for two different randomization procedures applicable to changing criterion designs: the phase change moment randomization and the blocked alternating criterion randomization. We include different series lengths, numbers of phases, levels of autocorrelation, and degrees of random variability. The results suggest that type I error rates are generally controlled and that sufficient power can be achieved with as few as 28-30 measurements for independent data, although more measurements are needed in the case of positive autocorrelation. The presence of a reversal to a previous criterion level is beneficial. R code is provided for carrying out randomization tests following the two randomization procedures.
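The authors supply their own R code with the article; purely as an illustration of the phase-change-moment logic, a minimal sketch for a single AB change is given below (toy data and hypothetical names, not the authors' code; the paper's procedures generalize this idea to the multiple criterion phases of a changing criterion design).

```r
# Generic sketch of a randomization test where the randomized element is the
# moment of the phase change (toy data; not the authors' code).
y <- c(5, 6, 5, 7, 6, 9, 10, 11, 10, 12, 11, 12)  # measurements over time
possible_starts <- 4:10  # admissible phase-change moments, fixed a priori
actual_start <- 6        # moment actually selected at random before the study

# Test statistic: difference between post-change and pre-change means
stat <- function(s) mean(y[s:length(y)]) - mean(y[1:(s - 1)])

observed  <- stat(actual_start)
reference <- sapply(possible_starts, stat)  # exhaustive reference distribution

# One-sided randomization p value: proportion of admissible randomizations
# yielding a statistic at least as large as the observed one
mean(reference >= observed)
```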
Affiliation(s)
- Rumen Manolov: Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain.
- René Tanious: Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands.
2
Manolov R, Lebrault H, Krasny-Pacini A. How to assess and take into account trend in single-case experimental design data. Neuropsychol Rehabil 2024; 34:388-429. PMID: 36961228; DOI: 10.1080/09602011.2023.2190129.
Abstract
Trend is one of the data features that are expected to be assessed when analyzing single-case experimental design (SCED) data. The current text deals with four questions that applied researchers can ask themselves when assessing trend, especially when dealing with an improving baseline trend: (a) What options exist for assessing the presence of trend? (b) Once trend is assessed, what criterion can be followed for deciding whether it is necessary to control for baseline trend? (c) What strategy can be followed for controlling for baseline trend? (d) How should one proceed when there is baseline trend in only some A-B comparisons? Several options are reviewed for each of these questions in the context of real data, and tentative recommendations are provided. A new user-friendly website implements the options for fitting a trend line and a criterion for selecting a specific technique for that purpose. Trend-related and more general data-analytical recommendations are provided for applied researchers. Trial registration: ClinicalTrials.gov identifier NCT04560777.
Affiliation(s)
- Rumen Manolov: Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain.
- Hélène Lebrault: Rehabilitation Department for Children with Congenital Neurological Injury, Saint Maurice Hospitals, Saint Maurice, France; Laboratoire d'Imagerie Biomédicale (LIB), Sorbonne Université, Paris, France; GRC 24, Handicap Moteur et Cognitif et Réadaptation (HaMCRe), Sorbonne Université, Paris, France.
- Agata Krasny-Pacini: Pôle de Médecine Physique et de Réadaptation, Institut Universitaire de Réadaptation Clemenceau, Hôpitaux Universitaires de Strasbourg, UF 4372, Strasbourg, France; Unité INSERM 1114, Neuropsychologie Cognitive et Physiopathologie de la Schizophrénie, Département de Psychiatrie, Hôpital Civil de Strasbourg, Strasbourg, France; Faculté de Médecine, Université de Strasbourg, Strasbourg, France.
3
Vroegindeweij A, Nijhof LN, Onghena P, van de Putte EM, Nijhof SL, Houtveen J. The Permutation Distancing Test for dependent single-case observational AB-phase design data: A Monte Carlo simulation study. Behav Res Methods 2024; 56:2569-2580. PMID: 37528291; PMCID: PMC10991042; DOI: 10.3758/s13428-023-02167-5.
Abstract
The Permutation Distancing Test (PDT) is a nonparametric test for evaluating treatment effects in dependent single-case observational design (SCOD) AB-phase data without linear trends. Monte Carlo methods were used to estimate the PDT's power and type I error rate and to compare them to those of the Single-Case Randomization Test (SCRT), which assumes a randomly determined intervention point, and the traditional permutation test, which assumes full exchangeability. Data were simulated without linear trends for five treatment effect levels (-2, -1, 0, 1, 2), five autocorrelation levels (0, .15, .30, .45, .60), and four numbers of observations (30, 60, 90, 120). Power was estimated for all combinations of factor levels, with 1,000 replications per combination. With 30 observations, the PDT showed sufficient power (≥ 80%) to detect medium treatment effects with autocorrelation ≤ .45. With 60 observations, the PDT showed sufficient power to detect medium treatment effects regardless of autocorrelation. With ≥ 90 observations, the PDT could also detect small treatment effects with autocorrelation ≤ .30. With 30 observations, the type I error rate was 5-7%. With 60 or more observations, the type I error rate was ≤ 5% with autocorrelation < .60. The PDT outperformed the SCRT regarding power, particularly with a small number of observations, and outperformed the traditional permutation test regarding type I error rate control, especially as autocorrelation increased. In conclusion, the PDT is a useful and promising nonparametric test for evaluating treatment effects in dependent SCOD AB-phase data without linear trends.
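For orientation, the comparator mentioned above — the traditional permutation test under full exchangeability — can be sketched as follows (toy data; this is the baseline method the PDT is compared against, not the PDT itself, whose distancing scheme is described in the article).

```r
# Traditional permutation test assuming full exchangeability (toy data).
# Shown only as the comparator: it ignores the serial dependency that the
# PDT is designed to respect.
set.seed(42)
a <- c(7, 8, 6, 9, 8, 7)   # phase A observations
b <- c(5, 4, 6, 3, 5, 4)   # phase B observations
observed <- mean(b) - mean(a)

pooled <- c(a, b)
n_a    <- length(a)

perm_stats <- replicate(5000, {
  s <- sample(pooled)                       # permute phase labels freely
  mean(s[-(1:n_a)]) - mean(s[1:n_a])
})

mean(abs(perm_stats) >= abs(observed))      # two-sided p value
```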
Affiliation(s)
- Anouk Vroegindeweij: Department of Pediatric Rheumatology/Immunology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
- Linde N Nijhof, Elise M van de Putte, Sanne L Nijhof: Department of Pediatrics, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
- Patrick Onghena: Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium.
- Jan Houtveen: Department of Pediatrics, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Altrecht Psychosomatic Medicine, Zeist, The Netherlands.
4
Chen LT, Chen YK, Yang TR, Chiang YS, Hsieh CY, Cheng C, Ding QW, Wu PJ, Peng CYJ. Examining the normality assumption of a design-comparable effect size in single-case designs. Behav Res Methods 2024; 56:379-405. PMID: 36650402; DOI: 10.3758/s13428-022-02035-8.
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES; i.e., gAB) for gauging an intervention in single-case experimental design (SCED) studies and for synthesizing findings in meta-analysis. So far, no research has examined gAB's performance under non-normal distributions. This study expanded the work of Pustejovsky et al. (2014) to investigate the impact of data distributions, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (ρ), ratio of variance components (λ), and autocorrelation (ϕ) on gAB in the multiple-baseline (MB) design. The performance of gAB was assessed by relative bias (RB), relative bias of variance (RBV), MSE, and coverage rate of 95% CIs (CR). Findings revealed that gAB was unbiased even under non-normal distributions. gAB's variance was generally overestimated, and its 95% CI over-covered, especially when distributions were normal or nearly normal combined with small m and N. Large imprecision of gAB occurred when m was small and ρ was large. According to the ANOVA results, data distributions contributed approximately 49% of the variance in RB and 25% of the variance in both RBV and CR; m and ρ each contributed 34% of the variance in MSE. We recommend gAB for MB studies and meta-analyses with N ≥ 16 when either (1) data distributions are normal or nearly normal, m = 6, and ρ = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m ≥ 4, and ρ = 0.2, 0.4, or 0.6. The paper concludes with a discussion of gAB's applicability and design-comparability, and of sound reporting practices for ES indices.
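The core logic of a design-comparable effect size — standardizing a treatment effect by the combined within- and between-case standard deviation — can be sketched from a multilevel model as below. This is an assumption-laden simplification, not the gAB estimator itself: it omits the small-sample correction and autocorrelation handling (the scdhlm R package implements estimators of this kind), and all data and names are illustrative.

```r
# Simplified sketch of the logic behind a design-comparable effect size:
# treatment effect divided by the square root of the summed within- and
# between-case variance components. Not the actual g_AB estimator.
library(lme4)

set.seed(1)
dat <- expand.grid(case = factor(1:4), time = 1:16)
dat$phase <- as.integer(dat$time > 4 + 2 * as.integer(dat$case))  # staggered starts
dat$y <- 10 + 2 * dat$phase +
  rnorm(4, sd = 1)[as.integer(dat$case)] +   # between-case variation
  rnorm(nrow(dat))                           # within-case variation

fit <- lmer(y ~ phase + (1 | case), data = dat)
b_trt   <- fixef(fit)["phase"]                           # treatment effect
s_total <- sqrt(sum(as.data.frame(VarCorr(fit))$vcov))   # within + between SD
unname(b_trt / s_total)   # crude design-comparable standardized effect
```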
Affiliation(s)
- Li-Ting Chen: Department of Educational Studies, University of Nevada, Reno, Reno, NV, USA.
- Yi-Kai Chen, Tong-Rong Yang, Che Cheng: Department of Psychology, National Taiwan University, Taipei, Taiwan.
- Yu-Shan Chiang: Department of Curriculum & Instruction, Indiana University Bloomington, Bloomington, IN, USA.
- Cheng-Yu Hsieh: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Psychology, Royal Holloway, University of London, Egham, UK.
- Qi-Wen Ding: Institute of Sociology, Academia Sinica, Taipei, Taiwan.
- Po-Ju Wu: Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA.
- Chao-Ying Joanne Peng: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA.
5
Manolov R. Does the choice of a linear trend-assessment technique matter in the context of single-case data? Behav Res Methods 2023; 55:4200-4221. PMID: 36622560; DOI: 10.3758/s13428-022-02013-0.
Abstract
Trend is one of the data aspects assessed in the context of single-case experimental designs, both visually and quantitatively. Given that trend, just like other relevant data features such as level, immediacy, or overlap, does not have a single operative definition, a comparison among the existing alternatives is necessary. Previous studies have illustrated differences between trend-line fitting techniques using real data. In the current study, I carry out a simulation to examine the degree to which different trend-line fitting techniques lead to different degrees of bias, mean square error, and statistical power for a variety of quantifications that entail trend lines. The simulation involves generating both continuous and count data for several phase lengths, degrees of autocorrelation, and effect sizes (change in level and change in slope). The results suggest that, in general, ordinary least squares estimation performs well in terms of relative bias and mean square error. In particular, quantifying slope change is associated with better statistical results than quantifying an average difference between conditions on the basis of a projected baseline trend. In contrast, the performance of the split-middle (bisplit) technique is less than optimal.
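To make the contrast concrete, here is an illustrative comparison of two of the techniques studied — ordinary least squares versus a textbook version of the split-middle line — on a toy series (not the simulation code; the full bisplit procedure includes a further adjustment step).

```r
# Two linear trend-line fitting techniques on a toy baseline series:
# ordinary least squares versus a textbook split-middle line.
y <- c(3, 4, 4, 6, 5, 7, 6, 8)
t <- seq_along(y)

ols <- coef(lm(y ~ t))   # OLS intercept and slope

# Split-middle: the line through the medians of each half of the series
idx   <- split(t, rep(1:2, each = length(y) / 2))
x_med <- sapply(idx, median)
y_med <- sapply(idx, function(i) median(y[i]))
sm_b  <- diff(y_med) / diff(x_med)        # split-middle slope
sm_a  <- y_med[1] - sm_b * x_med[1]       # split-middle intercept

rbind(OLS = ols, split_middle = c(sm_a, sm_b))
```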
Affiliation(s)
- Rumen Manolov: Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain.
6
Manolov R, Vannest KJ. A Visual Aid and Objective Rule Encompassing the Data Features of Visual Analysis. Behav Modif 2023; 47:1345-1376. PMID: 31165621; DOI: 10.1177/0145445519854323.
Abstract
Visual analysis of single-case research is commonly described as a gold standard, but it is often unreliable. Thus, an objective tool for applying visual analysis is necessary, as an alternative to the Conservative Dual Criterion, which presents some drawbacks. The proposed free web-based tool enables assessing change in trend and level between two adjacent phases, while taking data variability into account. The application of the tool results in (a) a dichotomous decision regarding the presence or absence of an immediate effect, a progressive or delayed effect, or an overall effect and (b) a quantification of overlap. The proposal is evaluated by applying it to both real and simulated data, obtaining favorable results. The visual aid and the objective rules are expected to make visual analysis more consistent, but they are not intended as a substitute for the analysts' judgment, as a formal test of statistical significance, or as a tool for assessing social validity.
7
Baek E, Luo W, Lam KH. Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling. Behav Modif 2023; 47:1546-1573. PMID: 36647266; DOI: 10.1177/01454455221144034.
Abstract
Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental design (SCED) data. In this paper, we provide a step-by-step guide to using MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model and then gradually extended to represent more realistic features of SCED data, such as a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The approach is then illustrated using real SCED data. Practical recommendations for using the MLM approach are provided for applied researchers based on the current methodological literature. Available free and commercial software programs for meta-analyzing SCED data are also introduced, along with hands-on code that applied researchers can adapt for their own studies. Potential advantages and limitations of using the MLM approach to meta-analyze SCED data are discussed.
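For readers who want the flavor of the basic three-level specification (measurements within cases within studies), a minimal sketch using the lme4 package might look as follows; the variable names and simulated data are illustrative, and the article itself covers richer specifications.

```r
# Minimal three-level meta-analytic model: measurements (level 1) nested in
# cases (level 2) nested in studies (level 3), with the treatment effect
# allowed to vary across cases and studies. Toy data; with data this simple,
# lme4 may report a boundary (singular) fit for the near-zero variances.
library(lme4)

set.seed(7)
dat <- expand.grid(study = factor(1:6), case = factor(1:3), time = 1:20)
dat$phase <- as.integer(dat$time > 10)
u_study <- rnorm(6, sd = 0.8)   # study-level shifts
dat$y <- 8 + 1.5 * dat$phase + u_study[as.integer(dat$study)] + rnorm(nrow(dat))

# (phase | study/case) = random intercept and treatment effect for each
# study and for each case within study
fit <- lmer(y ~ phase + (phase | study/case), data = dat)
fixef(fit)["phase"]   # the meta-analytic average treatment effect
```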
Affiliation(s)
- Wen Luo: Texas A&M University, College Station, TX, USA.
8
Fingerhut J, Moeyaert M, Manolov R, Xu X, Park KH. Systematic Review of Descriptions and Justifications Provided for Single-Case Quantification Techniques. Behav Modif 2023; 47:1115-1143. PMID: 37254563; DOI: 10.1177/01454455231178469.
Abstract
A multitude of quantification techniques have been developed for use with single-case designs. As a result, choosing an appropriate quantification technique can be overwhelming, and it can be difficult for researchers to properly describe and justify their choice. Clear descriptions and justifications are nevertheless important for enhancing the credibility of single-case research and for allowing others to evaluate the appropriateness of the quantification technique used. The aim of this systematic literature review is to provide an overview of the quantification techniques used to analyze single-case designs, with a focus on the descriptions and justifications provided. A total of 290 quantifications occurred across 218 articles; the descriptions and justifications provided for the techniques used are systematically examined. Results show that certain quantification techniques, such as the non-overlap indices, are more commonly used than others, and that the descriptions and justifications provided are sometimes vague or subjective. Single-case researchers are encouraged to complement visual analysis with quantification techniques for which they can provide objective and appropriate descriptions and justifications, and to use tools to guide their choice of techniques.
Affiliation(s)
- Xinyun Xu: State University of New York, Albany, USA.
9
De TK, Onghena P. The randomized marker method for single-case randomization tests: Handling data missing at random and data missing not at random. Behav Res Methods 2022; 54:2905-2938. PMID: 35132582; DOI: 10.3758/s13428-021-01781-5.
Abstract
Single-case experiments are frequently plagued by missing data problems. In a recent study, the randomized marker method was found to be valid and powerful for single-case randomization tests when the missing data were missing completely at random. However, in real-life experiments it is difficult for researchers to ascertain the missing data mechanism, so it is essential that the missing data handling method be valid and powerful across various mechanisms. Hence, we examined the performance of the randomized marker method for data that are missing at random and data that are missing not at random. In addition, we compared the randomized marker method with multiple imputation, because the latter is often considered the gold standard among imputation techniques. To compare and evaluate the two methods under various simulation conditions, we calculated the type I error rate and statistical power of single-case randomization tests using each method of handling missing data and compared them to those obtained with complete datasets. The results indicate that while multiple imputation presents an advantage in the presence of strongly correlated covariate data, the randomized marker method remains valid and yields sufficient statistical power for most of the missing data conditions simulated in this study.
Affiliation(s)
- Tamal Kumar De, Patrick Onghena: Methodology of Educational Sciences Research Group, Faculty of Psychology and Educational Sciences, KU Leuven, Tiensestraat 102 Box 3762, 3000 Leuven, Belgium.
10
Richman DM, Barnard-Brak L, Watkins L. Interpreting treatment effect size from single case experimental design data: a preliminary analysis of differential effects of treatments designed to increase or decrease behaviour. J Intellect Disabil Res 2022; 66:743-755. PMID: 35959751; DOI: 10.1111/jir.12966.
Abstract
BACKGROUND Estimates of treatment effect size from single case experimental design (SCED) data may be affected by the direction of the intended treatment effect (i.e. ascending or descending slope for the dependent variable). Effect sizes for treatments designed to decrease behaviour are potentially more restricted because the intended endpoint is zero (i.e. an absolute floor). Conversely, effect sizes for interventions that increase behaviour are less restricted because, from a pure measurement standpoint, the ceiling is relatively unconstrained (i.e. there is no absolute ceiling): treatments that increase behaviour have a broader range of possible effect size values, with the ceiling limited only by demand characteristics and the learners' skills and motivation to exhibit the behaviour. METHOD The current study is a preliminary analysis of the mean and range of SCED effect sizes for treatments designed to either increase or decrease target behaviour. A within-case Cohen's d measure developed for SCED data was used to estimate treatment effect sizes. RESULTS The mean and range of effect size values for treatments that increased behaviour were significantly greater than those for treatments that decreased behaviour. CONCLUSIONS Results are discussed in terms of developing standards, or best practices, specific to interpreting effect size values and meeting quality control requirements for including data sets in future SCED meta-analytic studies that estimate treatment effect size. In particular, the preliminary results suggest that benchmarks for low, medium and high SCED effect size values need to be developed separately for treatments that increase or decrease levels of the dependent variable.
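For orientation, a generic within-case standardized mean difference for one AB comparison can be computed as below; this is only a sketch of the general idea, not necessarily the exact within-case d formula used in the study.

```r
# Generic within-case standardized mean difference for one AB data series
# (sketch of the idea; the study's within-case d for SCED data may differ).
a <- c(12, 10, 14, 11, 13)   # baseline phase
b <- c(5, 6, 4, 7, 5)        # treatment phase (behaviour decreasing)

s_pool <- sqrt(((length(a) - 1) * var(a) + (length(b) - 1) * var(b)) /
                 (length(a) + length(b) - 2))
(mean(b) - mean(a)) / s_pool   # negative: behaviour was reduced
```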
Affiliation(s)
- D M Richman: Special Education, Texas Tech University, Lubbock, TX, USA.
- L Barnard-Brak, L Watkins: Special Education, University of Alabama, Tuscaloosa, AL, USA.
11
Saul J, Norbury C. A Randomized Case Series Approach to Testing Efficacy of Interventions for Minimally Verbal Autistic Children. Front Psychol 2021; 12:621920. PMID: 34108905; PMCID: PMC8182798; DOI: 10.3389/fpsyg.2021.621920.
Abstract
BACKGROUND Randomized controlled trials (RCTs) are the gold standard for assessing whether an intervention is effective; however, they require large sample sizes to detect small effects. For rare or complex populations, we advocate a case series approach as a more realistic and useful first step for intervention evaluation. We consider the importance of randomization in such designs and advocate the use of randomization tests and between-case effect sizes to provide a robust and statistically powerful evaluation of outcomes. In this tutorial, we describe the method, procedures, and analysis code necessary to conduct robust single-case series, using an empirical example with minimally verbal autistic children. METHOD We applied a pre-registered (https://osf.io/9gvbs) randomized baseline design with a between-case effect size to a case series (n = 19) to test the efficacy of a novel, parent-mediated, app-based speech production intervention (BabbleBooster) for minimally verbal autistic children. Parent-rated probe scores were used to densely sample performance accuracy over time. RESULTS Parents were able to reliably code their children's speech productions using BabbleBooster. A non-significant randomization test and a small between-case effect size (d = 0.267) suggested that there was no evidence that BabbleBooster improved speech production relative to baseline scores during this brief period of intervention. CONCLUSION The current analyses exemplify a more robust approach to examining treatment effects in rare or complex populations, where an RCT may be difficult or premature to implement. To facilitate adoption of this method by researchers and practitioners, we provide analysis code that can be adapted using open-source R packages. Future studies could use this case series design to evaluate interventions aiming to improve speech and language outcomes for minimally verbal autistic children and other heterogeneous, hard-to-reach populations.
Affiliation(s)
- Jo Saul: Department of Language and Cognition, University College London, London, United Kingdom.
- Courtenay Norbury: Department of Language and Cognition, University College London, London, United Kingdom; Department of Special Needs Education, University of Oslo, Oslo, Norway.
12
Manolov R, Tanious R, Onghena P. Quantitative Techniques and Graphical Representations for Interpreting Results from Alternating Treatment Design. Perspect Behav Sci 2021; 45:259-294. DOI: 10.1007/s40614-021-00289-9.
13
Barnard-Brak L, Watkins L, Richman DM. Autocorrelation and estimates of treatment effect size for single-case experimental design data. Behav Interv 2021. DOI: 10.1002/bin.1783.
Affiliation(s)
- Lucy Barnard-Brak, Laci Watkins: Department of Special Education and Multiple Abilities, The University of Alabama, Tuscaloosa, AL, USA.
14
Manolov R, Moeyaert M, Fingerhut JE. A Priori Justification for Effect Measures in Single-Case Experimental Designs. Perspect Behav Sci 2021; 45:153-186. DOI: 10.1007/s40614-021-00282-2.
15
Accurate models vs. accurate estimates: A simulation study of Bayesian single-case experimental designs. Behav Res Methods 2021; 53:1782-1798. PMID: 33575987; PMCID: PMC8367899; DOI: 10.3758/s13428-020-01522-0.
Abstract
Although statistical practices for evaluating intervention effects in single-case experimental designs (SCEDs) have gained prominence in recent times, models have yet to incorporate and investigate all of their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is whether, in SCED data that show trend, there is indeterminacy between estimating slope and autocorrelation, given that both contribute to trend and the data have a limited number of observations. Using Monte Carlo simulation, we compared the performance of four Bayesian change-point models: (a) intercepts only (IO), (b) slopes but no autocorrelations (SI), (c) autocorrelations but no slopes (NS), and (d) both autocorrelations and slopes (SA). Weakly informative priors were used to remain agnostic about the parameters. Coverage rates showed that for the SA model, either the slope effect size or the autocorrelation credible interval almost always erroneously contained 0, and the type II errors were prohibitively large. Considering the 0-coverage and coverage rates of the slope effect size, the intercept effect size, mean relative bias, and second-phase intercept relative bias, the SI model outperformed all other models. Therefore, it is recommended that researchers favor the SI model over the other three. Research studies that develop slope effect sizes for SCEDs should evaluate the performance of the statistic by taking into account coverage and 0-coverage rates, which helped uncover patterns here that were not detected in other simulation studies. We underline the need to investigate the use of informative priors in SCEDs.
16
A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behav Res Methods 2020; 53:1371-1384. PMID: 33104956; DOI: 10.3758/s13428-020-01502-4.
Abstract
Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond, and their growing popularity has been accompanied by the development of specific guidelines for their conduct and analysis. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016-2018). Specifically, we were interested in which designs are used most frequently, how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods they use. The systematic review of 423 studies suggests that the multiple-baseline design continues to be the most widely used design and that the difference in central tendency (level) is by far the most popular focus in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis, although inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of earlier systematic reviews and suggest future directions for the development of SCED methodology.
17
From Boulder to Stockholm in 70 Years: Single Case Experimental Designs in Clinical Research. Psychol Rec 2020. DOI: 10.1007/s40732-020-00402-5.
18
Handling missing data in randomization tests for single-case experiments: A simulation study. Behav Res Methods 2020; 52:1355-1370. DOI: 10.3758/s13428-019-01320-3.
19
Poncy BC, Solomon B, Duhon G, Skinner C, Moore K, Simons S. An Analysis of Learning Rate and Curricular Scope: Caution When Choosing Academic Interventions Based on Aggregated Outcomes. Sch Psychol Rev 2019. DOI: 10.17105/spr-14-0044.1.
20
Codding RS, Begeny J, Kromminga KR, Edmunds RR, Klaft J, Diggs C, Hansen-Burke A. Do Motivational Strategies Improve the Effects of a Small-Group Math Intervention Program? J Appl Sch Psychol 2019. DOI: 10.1080/15377903.2019.1682735.
Affiliation(s)
- John Begeny: North Carolina State University, Raleigh, NC, USA.
- Jenna Klaft, Calvary Diggs: University of Minnesota, Twin Cities, Minneapolis, USA.
21
Joo SH, Ferron JM. Application of the Within- and Between-Series Estimators to Non-normal Multiple-Baseline Data: Maximum Likelihood and Bayesian Approaches. Multivariate Behav Res 2019; 54:666-689. PMID: 30857444; DOI: 10.1080/00273171.2018.1564877.
Abstract
In single-case research, the multiple-baseline (MB) design provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment-phase to baseline-phase observations, but also on time-specific between-series comparisons of observations from cases that have started treatment to those still in baseline. For analyzing MB studies, two types of linear mixed modeling methods have been proposed: the within- and between-series models. In principle, these models were developed under normality assumptions; however, normality may not always hold in practical settings. Therefore, this study investigated the robustness of the within- and between-series models when data were non-normal. A Monte Carlo study was conducted with four statistical approaches, defined by crossing two analytic decisions: (a) whether to use a within- or between-series estimate of effect and (b) whether to use restricted maximum likelihood or Markov chain Monte Carlo estimation. The results showed that the treatment effect estimates of all four approaches had minimal bias, that within-series estimates were more precise than between-series estimates, and that confidence interval coverage was frequently acceptable but varied across conditions and methods of estimation. Applications and implications are discussed based on these findings.
Affiliation(s)
- Seang-Hwane Joo, John M Ferron: Department of Educational and Psychological Studies, University of South Florida, Tampa, FL, USA.
22
Tanious R, De TK, Onghena P. A multiple randomization testing procedure for level, trend, variability, overlap, immediacy, and consistency in single-case phase designs. Behav Res Ther 2019; 119:103414. DOI: 10.1016/j.brat.2019.103414.
23
A randomization test wrapper for synthesizing single-case experiments using multilevel models: A Monte Carlo simulation study. Behav Res Methods 2019; 52:654-666. PMID: 31270794; DOI: 10.3758/s13428-019-01266-6.
Abstract
Multilevel models (MLMs) have been proposed in single-case research to synthesize data from a group of cases in a multiple-baseline design (MBD). A limitation of this approach is that MLMs require several statistical assumptions that are often violated in single-case research. In this article we propose a solution to this limitation by presenting a randomization test (RT) wrapper for MLMs, which offers a nonparametric way to evaluate treatment effects without making distributional assumptions or an assumption of random sampling. We present the rationale underlying the proposed technique and validate its performance (with respect to type I error rate and power) against parametric statistical inference in MLMs, in the context of evaluating the average treatment effect across cases in an MBD. We performed a simulation study that manipulated the number of cases and of observations per case in a dataset, the between-case variability, the distributional characteristics of the data, the level of autocorrelation, and the size of the treatment effect. The results showed that the power of the RT wrapper is superior to the power of parametric tests based on F distributions for MBDs with fewer than five cases, and that the type I error rate of the RT wrapper is controlled for bimodal data, whereas this is not the case for traditional MLMs.
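The core idea of the wrapper — re-estimating the multilevel treatment effect under re-randomized intervention start points and comparing the observed effect to that reference distribution — can be sketched as follows (toy multiple-baseline data; the names, the set of admissible start points, and the Monte Carlo size are all illustrative).

```r
# Sketch of a randomization-test wrapper around a multilevel model for a
# multiple-baseline design: the MLM treatment effect is re-estimated under
# permuted assignments of the start points to cases.
library(lme4)

set.seed(11)
n_case <- 4
dat <- expand.grid(case = factor(1:n_case), time = 1:20)
starts <- c(6, 9, 12, 15)                 # staggered intervention start points
dat$phase <- as.integer(dat$time >= starts[as.integer(dat$case)])
dat$y <- 5 + 2 * dat$phase + rnorm(nrow(dat))

trt <- function(d) fixef(lmer(y ~ phase + (1 | case), data = d))["phase"]
observed <- trt(dat)

perm_stats <- replicate(199, {
  d <- dat
  d$phase <- as.integer(d$time >=
               sample(starts)[as.integer(d$case)])  # permute the assignment
  trt(d)
})

# Monte Carlo randomization p value (observed statistic included)
(sum(abs(perm_stats) >= abs(observed)) + 1) / (length(perm_stats) + 1)
```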
24
Extrapolating baseline trend in single-case data: Problems and tentative solutions. Behav Res Methods 2018; 51:2847-2869. DOI: 10.3758/s13428-018-1165-x.
25
Moeyaert M, Manolov R, Rodabaugh E. Meta-Analysis of Single-Case Research via Multilevel Models: Fundamental Concepts and Methodological Considerations. Behav Modif 2018; 44:265-295. PMID: 30360633; DOI: 10.1177/0145445518806867.
Abstract
Multilevel modeling is an approach that can be used to summarize single-case experimental design (SCED) data. Multilevel models were developed to analyze hierarchically structured data, with units at a lower level nested within higher-level units. SCEDs use time-series data collected from multiple cases (or subjects) within a study, allowing researchers to investigate intervention effectiveness at the individual level and to examine how these individual intervention effects change over time. There is increased interest in how SCEDs can be used to establish an evidence base for interventions by synthesizing data from a series of intervention studies. Although using multilevel models to meta-analyze SCED studies is promising, their application is often hampered by the technical complexity of the approach. This article therefore first provides an accessible description and overview of the potential of multilevel meta-analysis to combine SCED data. Second, it summarizes the methodological evidence on the performance of multilevel models for meta-analysis, which is useful given that such evidence is currently scattered across multiple technical articles. Third, the actual steps to perform a multilevel meta-analysis are outlined in a brief practical guide. Fourth, a suggestion for integrating the quantitative results with a visual representation is provided.
26
Brossart DF, Laird VC, Armstrong TW. Interpreting Kendall's Tau and Tau-U for single-case experimental designs. Cogent Psychol 2018. DOI: 10.1080/23311908.2018.1518687.
27
Joo SH, Ferron JM, Beretvas SN, Moeyaert M, Van den Noortgate W. The impact of response-guided baseline phase extensions on treatment effect estimates. Res Dev Disabil 2018; 79:77-87. PMID: 29289405; DOI: 10.1016/j.ridd.2017.12.018.
Abstract
BACKGROUND When developmental disabilities researchers use multiple-baseline designs, they are encouraged to delay the start of an intervention until the baseline stabilizes or until preceding cases have responded to intervention. Using ongoing visual analyses to guide the timing of the start of the intervention can help resolve potential ambiguities in the graphical display; however, these forms of response-guided experimentation have been criticized as a potential source of bias in treatment effect estimation and inference. AIMS AND METHODS Monte Carlo simulations were used to examine the bias and precision of average treatment effect estimates obtained from multilevel models of four-case multiple-baseline studies with series lengths varying from 19 to 49 observations per case. We varied the size of the average treatment effect, the factors used to guide intervention decisions (baseline stability, response to intervention, both, or neither), and whether the ongoing analysis was masked. RESULTS None of the methods of responding to the data led to appreciable bias in the treatment effect estimates. Furthermore, as timing-of-intervention decisions became responsive to more factors, baselines became longer and treatment effect estimates became more precise. CONCLUSIONS Although the study was conducted under limited conditions, the response-guided practices did not lead to substantial bias; by extending baseline phases, they reduced estimation error and thus improved the treatment effect estimates obtained from multilevel models.
Affiliation(s)
- Seang-Hwane Joo, John M Ferron: Department of Educational and Psychological Studies, University of South Florida, Tampa, FL, USA.
- S Natasha Beretvas: Department of Educational Psychology, University of Texas, Austin, TX, USA.
- Mariola Moeyaert: Department of Educational Psychology and Methodology, State University of New York, Albany, NY, USA.
- Wim Van den Noortgate: Faculty of Psychological and Educational Sciences, Katholieke Universiteit Leuven, Belgium.
29
Swan DM, Pustejovsky JE. A Gradual Effects Model for Single-Case Designs. Multivariate Behav Res 2018; 53:574-593. PMID: 29757002; DOI: 10.1080/00273171.2018.1466681.
Abstract
Single-case designs are a class of repeated measures experiments used to evaluate the effects of interventions for small or specialized populations, such as individuals with low-incidence disabilities. There has been growing interest in systematic reviews and syntheses of evidence from single-case designs, but there remains a need to further develop appropriate statistical models and effect sizes for data from the designs. We propose a novel model for single-case data that exhibit nonlinear time trends created by an intervention that produces gradual effects, which build up and dissipate over time. The model expresses a structural relationship between a pattern of treatment assignment and an outcome variable, making it appropriate for both treatment reversal and multiple baseline designs. It is formulated as a generalized linear model so that it can be applied to outcomes measured as frequency counts or proportions, both of which are commonly used in single-case research, while providing readily interpretable effect size estimates such as log response ratios or log odds ratios. We demonstrate the gradual effects model by applying it to data from a single-case study and examine the performance of proposed estimation methods in a Monte Carlo simulation of frequency count data.
30
Using response ratios for meta-analyzing single-case designs with behavioral outcomes. J Sch Psychol 2018; 68:99-112. DOI: 10.1016/j.jsp.2018.02.003.
31
Nonparametric meta-analysis for single-case research: Confidence intervals for combined effect sizes. Behav Res Methods 2018; 51:1145-1160. DOI: 10.3758/s13428-018-1044-5.
32
One by One: Accumulating Evidence by using Meta-Analytical Procedures for Single-Case Experiments. Brain Impair 2017. DOI: 10.1017/brimp.2017.25.
Abstract
This paper presents a unilevel and multilevel approach for the analysis and meta-analysis of single-case experiments (SCEs). We propose a definition of SCEs and derive the specific features of SCEs’ data that have to be taken into account when analysing and meta-analysing SCEs. We discuss multilevel models of increasing complexity and propose alternative and complementary techniques based on probability combining and randomisation test wrapping. The proposed techniques are demonstrated with real-life data and corresponding R code.
33
Abstract
The frequently used visual analysis of single-case data focuses on data aspects such as level, trend, variability, overlap, immediacy of effect, and consistency of data patterns; most of these aspects are also commonly quantified in addition to being inspected visually. The present text focuses on trend, because even linear trend can be operatively defined in several different ways, and there are also different approaches to controlling for baseline trend. We recommend using a quantitative criterion for choosing a trend-line fitting technique and comparing baseline and intervention slopes, instead of detrending. We implement our proposal in a free web-based application created specifically to follow the What Works Clearinghouse Standards recommendations for visual analysis. This application is especially intended for applied researchers and provides graphical representation of the data, visual aids, and quantifications of the difference between phases in terms of level, trend, and overlap, as well as two quantifications of the immediate effect. An evaluation of the consistency of effects across replications of the AB sequence is also provided. For methodologists and statisticians, we include formulas and examples of the different straight-line fitting and detrending techniques to improve the reproducibility of results and simulations.
34
Abstract
Visual analysis is the primary method for detecting treatment effects in graphically displayed single-case data and is often referred to as the "gold standard." Although researchers have developed standards for the application of visual analysis (e.g., Horner et al., 2005), over- and underestimation of effect size magnitude is not uncommon among analysts. Several characteristics have been identified as potential contributors to these errors; however, researchers have largely focused on characteristics of the data itself (e.g., autocorrelation), paying less attention to characteristics of the graphic display that are largely under the analyst's control (e.g., ordinate scaling). The current study investigated the impact that differences in ordinate scaling, a graphic display characteristic, had on experts' accuracy in judging the magnitude of effect present in single-case percentage data. Thirty-two participants were asked to evaluate eight ABAB data sets (two each presenting null, small, moderate, and large effects) along with three iterations of each (32 graphs in total) in which only the ordinate scale was manipulated. Results suggest that raters are less accurate in detecting treatment effects as the ordinate scale is constricted, and that they are more likely to overestimate the size of a treatment effect when the scale is constricted.
Affiliation(s)
- Evan H Dart: University of Southern Mississippi, United States.
35
The conditional power of randomization tests for single-case effect sizes in designs with randomized treatment order: A Monte Carlo simulation study. Behav Res Methods 2017; 50:557-575. DOI: 10.3758/s13428-017-0885-7.
36
Bruni TP, Drevon D, Hixson M, Wyse R, Corcoran S, Fursa S. The effect of functional behavior assessment on school-based interventions: A meta-analysis of single-case research. Psychol Sch 2017. DOI: 10.1002/pits.22007.
37
Tarlow KR. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau. Behav Modif 2016; 41:427-467. DOI: 10.1177/0145445516676750.
Abstract
Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between −1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
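In broad strokes, the baseline correction removes a robust (Theil-Sen) baseline trend before computing Kendall's rank correlation between phase membership and the scores. A rough sketch follows (toy data, simplified; the published procedure first tests whether baseline trend correction is warranted, and Tarlow provides a web-based calculator).

```r
# Rough sketch of the Baseline Corrected Tau logic (toy data). The published
# procedure first tests for baseline trend and only corrects when needed.
a <- c(2, 3, 3, 5, 4, 6)    # improving baseline
b <- c(7, 8, 8, 9, 10, 9)   # treatment phase
y <- c(a, b)
t <- seq_along(y)
phase <- rep(0:1, c(length(a), length(b)))

# Theil-Sen fit to the baseline: median of all pairwise slopes
p  <- combn(seq_along(a), 2)
sl <- median((a[p[2, ]] - a[p[1, ]]) / (p[2, ] - p[1, ]))
ic <- median(a - sl * seq_along(a))

corrected <- y - (ic + sl * t)                 # detrend the whole series
cor(phase, corrected, method = "kendall")      # baseline-corrected tau
```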
38
Manolov R, Moeyaert M. How Can Single-Case Data Be Analyzed? Software Resources, Tutorial, and Reflections on Analysis. Behav Modif 2016; 41:179-228. DOI: 10.1177/0145445516664307.
Abstract
The present article presents a series of software developments in the quantitative analysis of data obtained via single-case experimental designs (SCEDs), together with a tutorial describing these developments. The tutorial focuses on software implementations based on freely available platforms such as R and aims to bring statistical advances closer to applied researchers, helping them become autonomous agents in the data analysis stage of a study. The range of analyses covered in the tutorial is illustrated on a typical single-case dataset, relying heavily on graphical data representations. We illustrate how visual and quantitative analyses can be used jointly, giving complementary information and helping the researcher decide whether there is an intervention effect, how large it is, and whether it is practically significant. To help applied researchers use the analyses, we have organized the data in the different ways required by the analytical procedures and made these data available online. We also provide Internet links to all free software available, as well as the main references for the analytical techniques. Finally, we suggest that appropriate and informative data analysis is likely to be a step forward in documenting and communicating results and in increasing the scientific credibility of SCEDs.
39
Lanovaz MJ, Rapp JT. Using Single-Case Experiments to Support Evidence-Based Decisions: How Much Is Enough? Behav Modif 2015; 40:377-395. PMID: 26538276; DOI: 10.1177/0145445515613584.
Abstract
For practitioners, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: how many single-case experiments are enough to have sufficient confidence that an intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. The purpose of our article is to address this issue by presenting guidelines that facilitate evidence-based decisions through a simple statistical approach to quantifying the support for interventions validated using SCEDs. Specifically, we propose the use of success rates as a supplement to support evidence-based decisions. The proposed methodology allows practitioners to aggregate the results from single-case experiments to estimate the probability that a given intervention will produce a successful outcome. We also discuss considerations and limitations associated with this approach.
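The arithmetic of the proposed supplement is simple; as an illustration with hypothetical counts (not data from the article), a success rate with a binomial confidence interval could be computed as follows.

```r
# Illustrative success-rate aggregation across single-case experiments
# (hypothetical counts, not data from the article).
successes <- 17   # experiments in which the intervention was effective
total     <- 22   # all located experiments evaluating the intervention

rate <- successes / total
ci   <- binom.test(successes, total)$conf.int  # exact 95% CI
round(c(success_rate = rate, lower = ci[1], upper = ci[2]), 2)
```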
40
Manolov R, Jamieson M, Evans JJ, Sierra V. Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs. Behav Modif 2015; 39:691-720. DOI: 10.1177/0145445515593512.
Abstract
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One procedure, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore the extent to which these two procedures coincide regarding the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other, and to a lesser extent with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited; moreover, the results of the field test suggest that it is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided to promote the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.
Affiliation(s)
- Rumen Manolov: University of Barcelona, Spain; Ramon Llull University, Barcelona, Spain.
41
Cobb P, Shadish W. Abstract: Assessing Trend in Single-Case Designs Using Generalized Additive Models. Multivariate Behav Res 2015; 50:131. PMID: 26609751; DOI: 10.1080/00273171.2014.988991.
Affiliation(s)
- Patrice Cobb, William Shadish: School of Social Sciences, Humanities, and Arts, University of California, Merced, CA, USA.
42
Manolov R, Sierra V, Solanas A, Botella J. Assessing Functional Relations in Single-Case Designs. Behav Modif 2014; 38:878-913. DOI: 10.1177/0145445514545679.
Abstract
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of what stimuli and reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect, and comprise ABAB and multiple-baseline designs. Although none of the techniques is flawless at detecting a functional relation only when it is present and never when it is absent, an option based on projecting split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use, as a weight, the inverse of the data variability measure employed in assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
Affiliation(s)
- Rumen Manolov: Ramon Llull University, Barcelona, Spain; University of Barcelona, Spain.