1
Landman W, Bogaerts S, Spreen M. Typicality of Level Change (TLC) as an Additional Effect Measure to NAP and Tau-U in Single Case Research. Behav Modif 2024; 48:51-74. [PMID: 37650389] [DOI: 10.1177/01454455231190741]
Abstract
Single case research is a viable way to obtain evidence for social and psychological interventions at an individual level. Across single case research studies, various analysis strategies are employed, ranging from visual analysis to the calculation of effect sizes. To calculate effect sizes in studies with few measurements per time period (<40 data points, with a minimum of five data points in each phase), non-parametric indices such as Nonoverlap of All Pairs (NAP) and Tau-U are recommended. However, both indices have restrictions. This article discusses the restrictions of NAP and Tau-U and presents the description, calculation, and benefits of an additional effect size, the Typicality of Level Change (TLC) index. In comparison to NAP and Tau-U, the TLC index is more closely aligned with visual analysis, is not restricted by a ceiling effect, and does not overcompensate for problematic trends in the data. The TLC index is also sensitive to the typicality of an effect. TLC is an important addition that eases the restrictions of current nonoverlap methods when comparing effect sizes between cases and studies.
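The two recommended nonoverlap indices are simple pairwise computations. The sketch below uses hypothetical data and omits both TLC (the article's novel proposal) and Tau-U's baseline-trend correction; it also illustrates the ceiling effect that motivates the TLC index:

```python
def nap(phase_a, phase_b):
    """Nonoverlap of All Pairs: the share of (A, B) pairs in which the
    phase-B value improves on the phase-A value; ties count half.
    Assumes higher scores indicate improvement."""
    pairs = [(a, b) for a in phase_a for b in phase_b]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

def tau_ab(phase_a, phase_b):
    """Basic Tau for a single A-B comparison (no trend correction):
    (improving pairs - deteriorating pairs) / all pairs."""
    pos = sum(b > a for a in phase_a for b in phase_b)
    neg = sum(b < a for a in phase_a for b in phase_b)
    return (pos - neg) / (len(phase_a) * len(phase_b))

baseline = [2, 3, 2, 4, 3]    # hypothetical phase A
treatment = [5, 6, 4, 7, 6]   # hypothetical phase B
print(nap(baseline, treatment), tau_ab(baseline, treatment))

# Ceiling effect: once the phases no longer overlap, NAP stays at 1.0
# no matter how much larger the effect becomes.
assert nap([1, 2], [3, 4]) == nap([1, 2], [30, 40]) == 1.0
```

Because every fully nonoverlapping data set maps to the same maximal value, effect sizes at the ceiling cannot be compared across cases or studies, which is the restriction TLC is designed to ease.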
Affiliation(s)
- Willem Landman
- NHL Stenden University of Applied Sciences, Leeuwarden, The Netherlands
- Tilburg University, The Netherlands
- Marinus Spreen
- NHL Stenden University of Applied Sciences, Leeuwarden, The Netherlands
2
Manolov R. Does the choice of a linear trend-assessment technique matter in the context of single-case data? Behav Res Methods 2023; 55:4200-4221. [PMID: 36622560] [DOI: 10.3758/s13428-022-02013-0]
Abstract
Trend is one of the data aspects that is an object of assessment in the context of single-case experimental designs. This assessment can be performed both visually and quantitatively. Given that trend, just like other relevant data features such as level, immediacy, or overlap, does not have a single operative definition, a comparison among the existing alternatives is necessary. Previous studies have included illustrations of differences between trend-line fitting techniques using real data. In the current study, I carry out a simulation to study the degree to which different trend-line fitting techniques lead to different degrees of bias, mean square error, and statistical power for a variety of quantifications that entail trend lines. The simulation involves generating both continuous and count data for several phase lengths, degrees of autocorrelation, and effect sizes (change in level and change in slope). The results suggest that, in general, ordinary least squares estimation performs well in terms of relative bias and mean square error. In particular, quantifying slope change is associated with better statistical results than quantifying an average difference between conditions on the basis of a projected baseline trend. In contrast, the performance of the split-middle (bisplit) technique is less than optimal.
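The two contrasted trend-fitting techniques can be sketched as follows. The split-middle implementation follows one common textbook formulation (the line through the median point of each half of the series); the convention of dropping the middle observation for odd-length series is an assumption, and the data are hypothetical:

```python
import statistics

def ols_trend(y):
    """Ordinary least squares slope and intercept of y regressed on time 0..n-1."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in enumerate(y))
             / sum((x - mx) ** 2 for x in range(n)))
    return slope, my - slope * mx

def split_middle_trend(y):
    """Split-middle (bisplit) trend: the line through the (median time,
    median score) point of the first half and of the second half
    (middle observation dropped for odd-length series)."""
    half = len(y) // 2
    x1 = statistics.median(range(half))
    x2 = statistics.median(range(len(y) - half, len(y)))
    y1 = statistics.median(y[:half])
    y2 = statistics.median(y[len(y) - half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

series = [3, 4, 4, 6, 5, 7]  # hypothetical baseline series
print(ols_trend(series))
print(split_middle_trend(series))
```

On clean linear data the two techniques agree; the simulation's point is how differently they behave once noise, autocorrelation, and outliers enter.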
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035, Barcelona, Spain.
3
Manolov R, Onghena P. Defining and assessing immediacy in single-case experimental designs. J Exp Anal Behav 2022; 118:462-492. [PMID: 36106573] [PMCID: PMC9825864] [DOI: 10.1002/jeab.799]
Abstract
Immediacy is one of six data aspects (alongside level, trend, variability, overlap, and consistency) that have to be accounted for when visually analyzing single-case data. Given that it has received considerably less attention than the other data aspects, the current text offers a review of the proposed conceptual definitions of immediacy (i.e., what it refers to) and of the suggested operational definitions (i.e., how exactly it is assessed and/or quantified). Because a variety of conceptual and operational definitions is identified, we propose a sensitivity analysis using a randomization test for assessing immediate effects in single-case experimental designs, identifying when changes were most clear. In such a sensitivity analysis, the immediate effects are tested for multiple possible intervention points and for different possible operational definitions. Robust immediate effects can be detected if the results for the different operational definitions converge.
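One frequently used operational definition compares the last few baseline points with the first few intervention points. The sketch below uses hypothetical data; the choice of window size k and of the summary statistic is exactly the kind of operational decision the proposed sensitivity analysis varies:

```python
import statistics

def immediacy(phase_a, phase_b, k=3, summary=statistics.median):
    """Immediate effect under one operational definition: the summary
    (median by default) of the first k intervention points minus the
    summary of the last k baseline points."""
    return summary(phase_b[:k]) - summary(phase_a[-k:])

baseline = [2, 3, 2, 4, 3]    # hypothetical phase A
treatment = [5, 6, 4, 7, 6]   # hypothetical phase B

# Sensitivity check: recompute under several operational definitions
# and see whether the conclusions converge.
results = [immediacy(baseline, treatment, k, s)
           for k in (2, 3) for s in (statistics.median, statistics.mean)]
print(results)
```

When the quantifications agree across definitions, as they do for this toy series, the immediate effect can be regarded as robust in the article's sense.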
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona
- Patrick Onghena
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven – University of Leuven, Leuven, Belgium
4
Aydin O, Tanious R. Performance criteria-based effect size (PCES) measurement of single-case experimental designs: A real-world data study. J Appl Behav Anal 2022; 55:891-918. [PMID: 35593661] [DOI: 10.1002/jaba.928]
Abstract
Visual analysis and nonoverlap-based effect sizes are the predominant methods for analyzing single case experimental designs (SCEDs). Although they are popular analytical methods for SCEDs, they have certain limitations. In this study, a new effect size calculation model for SCEDs, named performance criteria-based effect size (PCES), is proposed to address the limitations of four widely accepted nonoverlap-based effect size measures that blend well with visual analysis. In the field test of PCES, actual data from published studies were utilized, and the relations between PCES, visual analysis, and the four nonoverlap-based methods were examined. In determining the data to be used in the field test, 1,052 tiers (AB phases) were identified from six journals. The results revealed a weak to moderate relation between PCES and the nonoverlap-based methods, owing to its focus on performance criteria. Although PCES has some weaknesses, it promises to address the issues that can arise with nonoverlap-based methods, using quantitative data to determine socially important changes in behavior and to complement visual analysis.
5
Manolov R, Tanious R, Fernández-Castilla B. A proposal for the assessment of replication of effects in single-case experimental designs. J Appl Behav Anal 2022; 55:997-1024. [PMID: 35467023] [PMCID: PMC9324994] [DOI: 10.1002/jaba.923]
Abstract
In science in general, and in the context of single-case experimental designs in particular, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been replicated (and to what extent) are scarce in the general context of the behavioral sciences and practically nonexistent in the single-case experimental designs context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. To make this assessment possible, a definition of replication is suggested on the basis of expert judgment rather than statistical criteria. The definition of replication and its graphical representation are justified, presenting their strengths and limitations, and illustrated with real data. User-friendly software is made available for obtaining the graphical representation automatically.
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, University of Barcelona
- René Tanious
- Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium
- Belén Fernández-Castilla
- Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium
6
Abstract
Individual differences perspectives have dominated the scientific study of creativity since the 1950s. These perspectives, however, mainly emphasize group-level variations or inter-individual differences, with limited interest in individual-level variations. Yet, (1) group-level findings are often used to make inferences at the person level, which might not apply consistently across individuals, and (2) a focus on intra-individual variations could supplement knowledge based on inter-individual differences and accurately inform creativity as a dynamic and multifaceted psychological construct. Indeed, when observed at the individual level, creativity can vary from moment to moment, task to task, and even item to item, which is not well reflected in the current understanding of creativity. After introducing the historical context for the study of individual differences in creativity, this article presents and illustrates three fundamental and distinct aspects of intra-individual variability as they apply to creativity, namely (in)consistency (or processing fluctuation), dispersion, and intra-individual change. While doing so, recent developments in apparatus and methods to assess creativity as a more dynamic phenomenon are presented. The article concludes by discussing the promise of accounting for intra-individual variability in creative performance and potential, and the new knowledge it may elicit for both creativity research and practice.
Affiliation(s)
- Baptiste Barbot
- Psychological Sciences Research Institute, UCLouvain, Belgium
- Child Study Center, Yale University, New Haven, CT, USA
7
Tanious R, Onghena P. Applied hybrid single-case experiments published between 2016 and 2020: A systematic review. Methodological Innovations 2022. [DOI: 10.1177/20597991221077910]
Abstract
Single-case experimental designs (SCEDs) are frequently used research designs in psychology, (special) education, and related fields. Hybrid designs are formed by combining two or more of the basic SCED forms (i.e. phase designs, alternation designs, multiple baseline designs, and changing criterion designs). Hybrid designs have the potential to tackle complex research questions and increase internal validity, but relatively little is known about their use in actual research practice. Therefore, we systematically reviewed SCED hybrid designs published between 2016 and 2020. The systematic review of 67 studies indicates that a hybrid of phase designs and multiple baseline designs is most popular. Hybrid designs are most frequently analyzed by means of visual analysis paired with descriptive statistics. Randomization in the study design is common only for one particular kind of hybrid design. Examples of hybrid studies reveal that these designs are particularly popular in educational research. We compare some of the results of the systematic review to those obtained by Hammond and Gast, Shadish and Sullivan, and Tanious and Onghena. Finally, we discuss the results of the present systematic review in light of the need for specific guidelines for hybrid designs, including analytical methods, design specific randomization and reporting, and the need for terminological clarification.
Affiliation(s)
- René Tanious
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
- Patrick Onghena
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
8
Tanious R, Manolov R. A practitioner's guide to conducting and analysing embedded randomized single-case experimental designs. Neuropsychol Rehabil 2022; 33:613-645. [PMID: 35179088] [DOI: 10.1080/09602011.2022.2035774]
Abstract
Single-case experimental designs (SCEDs) are a class of experimental designs suited for answering research questions at an individual level. The main designs available in SCED research are phase designs, multiple baseline designs, alternation designs, and changing criterion designs. Embedded designs, also referred to as combination or hybrid designs, consist of one of these basic design forms embedded in another design (e.g., a changing criterion design embedded in a multiple baseline design). Systematic reviews of SCEDs have repeatedly indicated that embedded designs are frequently used in applied SCED research. In spite of their popularity, specific recommendations on the conduct and analysis of embedded SCED designs are lacking to date. The purpose of the present article is therefore to provide guidance to applied researchers wishing to conduct embedded SCED designs in terms of design options, design requirements, randomization, and data analysis.
Affiliation(s)
- René Tanious
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
- Rumen Manolov
- Faculty of Psychology, Department of Social Psychology and Quantitative Psychology, University of Barcelona, Barcelona, Spain
9
Manolov R, Moeyaert M, Fingerhut JE. A Priori Justification for Effect Measures in Single-Case Experimental Designs. Perspect Behav Sci 2021; 45:153-186. [DOI: 10.1007/s40614-021-00282-2]
10
Manolov R, Tanious R. Assessing Consistency in Single-Case Data Features Using Modified Brinley Plots. Behav Modif 2020; 46:581-627. [PMID: 33371723] [DOI: 10.1177/0145445520982969]
Abstract
The current text deals with the assessment of consistency of data features from experimentally similar phases and consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far: namely, under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effect is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website to offer both the graphical representations and the quantifications.
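Under the operationalizations stated in this abstract (level as the mean, trend as the OLS slope, variability as the standard deviation), the absolute consistency quantification can be sketched as below; the relative, percentage-based variant is not reproduced here, and the data are hypothetical:

```python
import statistics

def phase_features(y):
    """Summary measures representing the three data features of a phase:
    level (mean), trend (OLS slope on time 0..n-1), variability (SD)."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in enumerate(y))
             / sum((x - mx) ** 2 for x in range(n)))
    return {"level": my, "trend": slope, "variability": statistics.stdev(y)}

def absolute_inconsistency(phase1, phase2, feature):
    """Distance between the feature values of two experimentally similar
    phases, in the measurement units of the outcome: 0 means the pair
    falls exactly on the diagonal of a modified Brinley plot."""
    return abs(phase_features(phase1)[feature] - phase_features(phase2)[feature])

a1, a2 = [2, 3, 4, 3, 4], [3, 3, 5, 4, 4]  # two baseline phases (hypothetical)
for feature in ("level", "trend", "variability"):
    print(feature, absolute_inconsistency(a1, a2, feature))
```

Plotting each feature pair as a point and measuring its distance from the diagonal is what turns the modified Brinley plot into a consistency assessment rather than only an effect-size display.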
11
A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behav Res Methods 2020; 53:1371-1384. [PMID: 33104956] [DOI: 10.3758/s13428-020-01502-4]
Abstract
Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016-2018). Specifically, we were interested in which designs are most frequently used and how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods are used. The systematic review of 423 studies suggests that the multiple baseline design continues to be the most widely used design and that the difference in central tendency level is by far most popular in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis. However, inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of the findings of earlier systematic reviews and suggest future directions for the development of SCED methodology.
12
Hesam-Shariati N, Newton-John T, Singh AK, Tirado Cortes CA, Do TTN, Craig A, Middleton JW, Jensen MP, Trost Z, Lin CT, Gustin SM. Evaluation of the Effectiveness of a Novel Brain-Computer Interface Neuromodulative Intervention to Relieve Neuropathic Pain Following Spinal Cord Injury: Protocol for a Single-Case Experimental Design With Multiple Baselines. JMIR Res Protoc 2020; 9:e20979. [PMID: 32990249] [PMCID: PMC7556378] [DOI: 10.2196/20979]
Abstract
BACKGROUND Neuropathic pain is a debilitating secondary condition for many individuals with spinal cord injury. Spinal cord injury neuropathic pain often is poorly responsive to existing pharmacological and nonpharmacological treatments. A growing body of evidence supports the potential for brain-computer interface systems to reduce spinal cord injury neuropathic pain via electroencephalographic neurofeedback. However, further studies are needed to provide more definitive evidence regarding the effectiveness of this intervention. OBJECTIVE The primary objective of this study is to evaluate the effectiveness of a multiday course of a brain-computer interface neuromodulative intervention in a gaming environment to provide pain relief for individuals with neuropathic pain following spinal cord injury. METHODS We have developed a novel brain-computer interface-based neuromodulative intervention for spinal cord injury neuropathic pain. Our brain-computer interface neuromodulative treatment includes an interactive gaming interface, and a neuromodulation protocol targeted to suppress theta (4-8 Hz) and high beta (20-30 Hz) frequency powers, and enhance alpha (9-12 Hz) power. We will use a single-case experimental design with multiple baselines to examine the effectiveness of our self-developed brain-computer interface neuromodulative intervention for the treatment of spinal cord injury neuropathic pain. We will recruit 3 participants with spinal cord injury neuropathic pain. Each participant will be randomly allocated to a different baseline phase (ie, 7, 10, or 14 days), which will then be followed by 20 sessions of a 30-minute brain-computer interface neuromodulative intervention over a 4-week period. The visual analog scale assessing average pain intensity will serve as the primary outcome measure. We will also assess pain interference as a secondary outcome domain. 
Generalization measures will assess quality of life, sleep quality, and anxiety and depressive symptoms, as well as resting-state electroencephalography and thalamic γ-aminobutyric acid concentration. RESULTS This study was approved by the Human Research Committees of the University of New South Wales in July 2019 and the University of Technology Sydney in January 2020. We plan to begin the trial in October 2020 and expect to publish the results by the end of 2021. CONCLUSIONS This clinical trial using single-case experimental design methodology has been designed to evaluate the effectiveness of a novel brain-computer interface neuromodulative treatment for people with neuropathic pain after spinal cord injury. Single-case experimental designs are considered a viable alternative approach to randomized clinical trials to identify evidence-based practices in the field of technology-based health interventions when recruitment of large samples is not feasible. TRIAL REGISTRATION Australian New Zealand Clinical Trials Registry (ANZCTR) ACTRN12620000556943; https://bit.ly/2RY1jRx. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/20979.
Affiliation(s)
- Negin Hesam-Shariati
- Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- School of Psychology, University of New South Wales, Sydney, Australia
- Toby Newton-John
- Graduate School of Health, University of Technology Sydney, Sydney, Australia
- Avinash K Singh
- School of Computer Science, University of Technology Sydney, Sydney, Australia
- Ashley Craig
- John Walsh Centre for Rehabilitation Research, Northern Clinical School, University of Sydney, Kolling Institute, Sydney, Australia
- James W Middleton
- John Walsh Centre for Rehabilitation Research, Northern Clinical School, University of Sydney, Kolling Institute, Sydney, Australia
- Mark P Jensen
- Department of Rehabilitation Medicine, University of Washington, Seattle, WA, United States
- Zina Trost
- Department of Physical Medicine and Rehabilitation, Virginia Commonwealth University, Richmond, VA, United States
- Chin-Teng Lin
- School of Computer Science, University of Technology Sydney, Sydney, Australia
- Sylvia M Gustin
- Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- School of Psychology, University of New South Wales, Sydney, Australia
13
From Boulder to Stockholm in 70 Years: Single Case Experimental Designs in Clinical Research. Psychological Record 2020. [DOI: 10.1007/s40732-020-00402-5]
14
Abstract
In the context of single-case experimental designs, replication is crucial. On the one hand, the replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the "replicability crisis" presents a more general context further emphasizing the need for assessing consistency in replications. In the current text, we focus on replication of effects within a study, and we specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects refers to one of the promising data analytical techniques, multilevel models, also known as hierarchical linear models or mixed effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero. This is relevant for assessing whether the effect is replicated as being non-null. However, we consider that it is more relevant and informative to assess, for each case, whether the confidence interval for the random effects includes zero (i.e., whether the fixed effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself is different from zero. The proposal for assessing consistency is illustrated with real data and is implemented in free user-friendly software.
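The "does the interval exclude zero" check can be illustrated with a deliberately simplified stand-in. The article derives the intervals from a multilevel model fitted across cases, whereas the sketch below uses a per-case pooled-SD t interval with an approximate critical value of 2; both that shortcut and the data are assumptions for illustration:

```python
import math
import statistics

def case_effect_ci(phase_a, phase_b, t_crit=2.0):
    """Per-case treatment effect (mean difference, B minus A) with a rough
    two-sided confidence interval. NOTE: a simplified stand-in for the
    multilevel-model intervals used in the actual proposal."""
    na, nb = len(phase_a), len(phase_b)
    diff = statistics.mean(phase_b) - statistics.mean(phase_a)
    pooled_var = ((na - 1) * statistics.variance(phase_a)
                  + (nb - 1) * statistics.variance(phase_b)) / (na + nb - 2)
    half_width = t_crit * math.sqrt(pooled_var * (1 / na + 1 / nb))
    return diff - half_width, diff + half_width

# One case of a multiple-baseline design (hypothetical data).
lo, hi = case_effect_ci([2, 3, 2, 4, 3], [5, 6, 4, 7, 6])
replicated_as_nonnull = lo > 0 or hi < 0  # does the interval exclude zero?
print((lo, hi), replicated_as_nonnull)
```

Repeating this check per case answers the "replicated as non-null" question; the stricter consistency-in-size question additionally asks whether the overall (fixed) effect is a plausible value inside each case's interval.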
15
Clanchy KM, Tweedy SM, Tate RL, Sterling M, Day MA, Nikles J, Ritchie C. Evaluation of a novel intervention to improve physical activity for adults with whiplash associated disorders: Protocol for a multiple-baseline, single case experimental study. Contemp Clin Trials Commun 2019; 16:100455. [PMID: 31650075] [PMCID: PMC6804503] [DOI: 10.1016/j.conctc.2019.100455]
Abstract
Half of individuals with a whiplash injury experience ongoing pain and disability. Many are insufficiently active for good health, increasing their risk of preventable morbidity and mortality, and compounding the effects of the whiplash injury. This paper describes a protocol for evaluating the efficacy of a physical activity promotion intervention in adults with whiplash associated disorders. A multiple-baseline, single case experimental design will be used to evaluate the effects of a physical activity (PA) intervention that includes evidence-based behaviour change activities and relapse prevention strategies for six adults with chronic whiplash. A structured visual analysis supplemented with statistical analysis will be used to analyse: accelerometer-measured PA, confidence completing PA in the presence of neck pain, and pain interference.
Affiliation(s)
- Kelly M. Clanchy
- School of Allied Health Sciences, Griffith University, Southport, Australia
- Sean M. Tweedy
- School of Human Movement and Nutrition Sciences, The University of Queensland, St Lucia, Australia
- I.M. Sechenov First Moscow State Medical University, Russia
- Robyn L. Tate
- John Walsh Centre for Rehabilitation Research, The University of Sydney, Sydney, Australia
- Michele Sterling
- Recover Injury Research Centre, The University of Queensland, Herston, Australia
- Melissa A. Day
- School of Psychology, The University of Queensland, St Lucia, Australia
- Jane Nikles
- Recover Injury Research Centre, The University of Queensland, Herston, Australia
- Carrie Ritchie
- Recover Injury Research Centre, The University of Queensland, Herston, Australia
16
Tanious R, Manolov R, Onghena P. The Assessment of Consistency in Single-Case Experiments: Beyond A-B-A-B Designs. Behav Modif 2019; 45:560-580. [PMID: 31619052] [DOI: 10.1177/0145445519882889]
Abstract
Quality standards for single-case experimental designs (SCEDs) recommend inspecting six data aspects: level, trend, variability, overlap, immediacy, and consistency of data patterns. The data aspect consistency has long been neglected by visual and statistical analysts of SCEDs despite its importance for inferring a causal relationship. However, recently a first quantification has been proposed in the context of A-B-A-B designs, called CONsistency of DAta Patterns (CONDAP). In the current paper, we extend the existing CONDAP measure for assessing consistency in designs with more than two successive A-B elements (e.g., A-B-A-B-A-B), multiple baseline designs, and changing criterion designs. We illustrate each quantification with published research.
Affiliation(s)
- René Tanious
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- Patrick Onghena
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium