1
Chen LT, Chen YK, Yang TR, Chiang YS, Hsieh CY, Cheng C, Ding QW, Wu PJ, Peng CYJ. Examining the normality assumption of a design-comparable effect size in single-case designs. Behav Res Methods 2024; 56:379-405. [PMID: 36650402] [DOI: 10.3758/s13428-022-02035-8]
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES; i.e., g_AB) to gauge an intervention effect in single-case experimental design (SCED) studies, or to synthesize findings in meta-analysis. To date, no research has examined g_AB's performance under non-normal distributions. This study expanded Pustejovsky et al. (2014) to investigate the impact of data distribution, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (ρ), ratio of variance components (λ), and autocorrelation (ϕ) on g_AB in the multiple-baseline (MB) design. The performance of g_AB was assessed by relative bias (RB), relative bias of variance (RBV), mean squared error (MSE), and the coverage rate of 95% CIs (CR). Findings revealed that g_AB was unbiased even under non-normal distributions. g_AB's variance was generally overestimated, and its 95% CI over-covered, especially when distributions were normal or nearly normal and m and N were small. Large imprecision of g_AB occurred when m was small and ρ was large. According to the ANOVA results, data distribution contributed approximately 49% of the variance in RB and 25% of the variance in both RBV and CR; m and ρ each contributed 34% of the variance in MSE. We recommend g_AB for MB studies and meta-analyses with N ≥ 16 and when either (1) data distributions are normal or nearly normal, m = 6, and ρ = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m ≥ 4, and ρ = 0.2, 0.4, or 0.6. The paper concludes with a discussion of g_AB's applicability and design comparability, and of sound reporting practices for ES indices.
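For readers unfamiliar with these four criteria, the following is a sketch of their conventional definitions in Monte Carlo studies of effect-size estimators, assuming R replications, estimates g-hat_r with estimated variances V-hat_r, and true effect g; the paper's exact operationalizations may differ.

% Assumed standard forms, not necessarily the paper's exact operationalizations.
% \hat{g}_r, \hat{V}_r: estimate and its estimated variance in replication r = 1, ..., R.
% g: true design-comparable effect size; \bar{\hat{V}} is the mean of the \hat{V}_r.
\begin{align*}
\mathrm{RB}  &= \frac{\bar{\hat{g}} - g}{g},
  && \bar{\hat{g}} = \tfrac{1}{R}\textstyle\sum_{r=1}^{R} \hat{g}_r \\
\mathrm{RBV} &= \frac{\bar{\hat{V}} - S_{\hat{g}}^{2}}{S_{\hat{g}}^{2}},
  && S_{\hat{g}}^{2} = \tfrac{1}{R-1}\textstyle\sum_{r=1}^{R} \bigl(\hat{g}_r - \bar{\hat{g}}\bigr)^{2} \\
\mathrm{MSE} &= \tfrac{1}{R}\textstyle\sum_{r=1}^{R} \bigl(\hat{g}_r - g\bigr)^{2} \\
\mathrm{CR}  &= \tfrac{1}{R}\textstyle\sum_{r=1}^{R}
  \mathbf{1}\!\left[\, g \in \hat{g}_r \pm 1.96\sqrt{\hat{V}_r} \,\right]
  && \text{(normal-theory 95\% CI assumed)}
\end{align*}

Under these definitions, RB near 0 indicates an unbiased point estimate, positive RBV indicates an overestimated sampling variance, and CR above the nominal 0.95 indicates over-coverage, matching the pattern of results summarized above.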
Affiliation(s)
- Li-Ting Chen: Department of Educational Studies, University of Nevada, Reno, Reno, NV, USA
- Yi-Kai Chen: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Tong-Rong Yang: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yu-Shan Chiang: Department of Curriculum & Instruction, Indiana University Bloomington, Bloomington, IN, USA
- Cheng-Yu Hsieh: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Psychology, Royal Holloway, University of London, Egham, UK
- Che Cheng: Department of Psychology, National Taiwan University, Taipei, Taiwan
- Qi-Wen Ding: Institute of Sociology, Academia Sinica, Taipei, Taiwan
- Po-Ju Wu: Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA
- Chao-Ying Joanne Peng: Department of Psychology, National Taiwan University, Taipei, Taiwan; Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA
2
Bos SE, Powell SR, Maddox SA, Doabler CT. A synthesis of the conceptualization and measurement of implementation fidelity in mathematics intervention research. J Learn Disabil 2023; 56:95-115. [PMID: 35068249] [DOI: 10.1177/00222194211065498]
Abstract
In intervention studies, high rates of implementation fidelity are important markers of a study's success; however, the definition of implementation fidelity is both complex and dynamic. In this synthesis, we examined the dimensions of implementation fidelity measured and reported in 99 studies in which researchers used a mathematics intervention for elementary students (i.e., Grades 1-5). We examined implementation fidelity following recommendations made by Dane and Schneider (1998), O'Donnell (2008), and DeFouw et al. (2009) to capture a comprehensive representation of the implementation fidelity data collected and reported within mathematics intervention studies. We organized our conceptualization of implementation fidelity into four overarching categories and nine dimensions within those categories: intervention design (i.e., theories of change and logistics), fidelity of the implementor (i.e., adherence, quality of delivery, dosage, and implementor knowledge or experience), student engagement, and treatment analysis (i.e., treatment differentiation and analysis of implementation fidelity). Overall, findings indicate that many author teams reported adherence and dosage data, but far fewer studies reported quality-of-delivery, student-engagement, or treatment-differentiation data. In addition, author teams were more likely to report at least one form of logistics, such as implementor support, than a theory of change. Implications for research and practice are discussed.