1
Manolov R, Onghena P. Testing delayed, gradual, and temporary treatment effects in randomized single-case experiments: A general response function framework. Behav Res Methods 2024; 56:3915-3936. PMID: 37749426. PMCID: PMC11133040. DOI: 10.3758/s13428-023-02230-1.
Abstract
Randomization tests are a class of significance tests for assessing the statistical significance of treatment effects in randomized single-case experiments. Most applications of single-case randomization tests concern simple treatment effects: immediate, abrupt, and permanent changes in the level of the outcome variable. However, researchers are also confronted with delayed, gradual, and temporary treatment effects; in general, with "response functions" that are markedly different from single-step functions. Here we introduce a general framework for specifying a randomization test statistic based on predicted response functions, making the test sensitive to a wide variety of data patterns beyond immediate and sustained changes in level: different latencies (degrees of delay) of effect, abrupt versus gradual effects, and different durations of the effect (permanent or temporary). There may be reasonable expectations regarding the kind of effect (abrupt or gradual), each entailing a different focal data feature (e.g., level or slope). However, the exact latency and the exact duration of a temporary effect may not be known a priori, justifying an exploratory approach that studies the effect of specifying different latencies for delayed effects and different durations for temporary effects. We illustrate the proposal with real data and present a user-friendly, freely available web application implementing it.
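The randomization-test logic that this framework generalizes can be sketched for the simplest case: an AB design in which the intervention start point is randomly selected. This is a minimal illustrative sketch, not the authors' implementation; the function name, the mean-difference test statistic, and the data are hypothetical.

```python
def randomization_test(scores, start, min_a=3, min_b=3):
    """One-sided randomization test for an AB single-case experiment
    in which the intervention start point was randomly selected.
    The test statistic is the B-phase mean minus the A-phase mean;
    the p-value is the proportion of admissible start points whose
    statistic is at least as large as the observed one."""
    def stat(s):
        a, b = scores[:s], scores[s:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = stat(start)
    # Admissible start points: every split leaving at least min_a
    # baseline and min_b treatment observations.
    admissible = list(range(min_a, len(scores) - min_b + 1))
    count = sum(stat(s) >= observed for s in admissible)
    return count / len(admissible)


# Hypothetical outcome series: 5 baseline and 5 treatment observations.
data = [2, 3, 2, 3, 2, 7, 8, 7, 9, 8]
print(randomization_test(data, start=5))  # 0.2
```

Note that with only five admissible start points the smallest attainable p-value is 1/5 = 0.2; real applications need longer series (or additional randomization) to reach conventional significance levels.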
Affiliation(s)
- Rumen Manolov, Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Spain
- Patrick Onghena, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Tiensestraat 102, 3000 Leuven, Belgium
2
Landman W, Bogaerts S, Spreen M. Typicality of Level Change (TLC) as an Additional Effect Measure to NAP and Tau-U in Single Case Research. Behav Modif 2024; 48:51-74. PMID: 37650389. DOI: 10.1177/01454455231190741.
Abstract
Single case research is a viable way to obtain evidence for social and psychological interventions at the individual level. Across single case research studies, various analysis strategies are employed, ranging from visual analysis to the calculation of effect sizes. To calculate effect sizes in studies with few measurements per time period (<40 data points, with a minimum of five data points in each phase), non-parametric indices such as Nonoverlap of All Pairs (NAP) and Tau-U are recommended. However, both indices have restrictions. This article discusses the restrictions of NAP and Tau-U and presents the description, calculation, and benefits of an additional effect size, the Typicality of Level Change (TLC) index. In comparison to NAP and Tau-U, the TLC index is more closely aligned with visual analysis, is not restricted by a ceiling effect, and does not overcompensate for problematic trends in the data. The TLC index is also sensitive to the typicality of an effect. TLC is an important addition that eases the restrictions of current nonoverlap methods when comparing effect sizes between cases and studies.
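For reference, NAP, one of the two nonoverlap indices discussed in this abstract, is simple to compute: it is the proportion of all baseline-treatment pairs in which the treatment observation improves on the baseline observation, with ties counted as half. A minimal sketch follows; the function name and data are hypothetical, and improvement is assumed to mean higher scores.

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs (NAP): compare every baseline value
    with every treatment value; count a full point when the treatment
    value is higher and half a point for a tie."""
    improved = sum(1 for a in baseline for b in treatment if b > a)
    ties = sum(1 for a in baseline for b in treatment if b == a)
    return (improved + 0.5 * ties) / (len(baseline) * len(treatment))


print(nap([2, 3, 4, 3, 2], [5, 6, 7, 6, 8]))  # 1.0: complete nonoverlap
print(nap([2, 3, 4, 3, 2], [3, 4, 5, 4, 6]))  # 0.88: partial overlap
```

The first example illustrates the ceiling restriction the abstract mentions: once the phases no longer overlap, NAP is pinned at 1.0 and cannot distinguish between larger and smaller effects.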
Affiliation(s)
- Willem Landman, NHL Stenden University of Applied Sciences, Leeuwarden, The Netherlands; Tilburg University, The Netherlands
- Marinus Spreen, NHL Stenden University of Applied Sciences, Leeuwarden, The Netherlands
3
Rauwenhoff JCC, Bol Y, Peeters F, van den Hout AJHC, Geusgens CAV, van Heugten CM. Acceptance and commitment therapy for individuals with depressive and anxiety symptoms following acquired brain injury: A non-concurrent multiple baseline design across four cases. Neuropsychol Rehabil 2023; 33:1018-1048. PMID: 35332849. PMCID: PMC10292126. DOI: 10.1080/09602011.2022.2053169.
Abstract
Patients with acquired brain injury (ABI) often experience symptoms of anxiety and depression, yet evidence-based treatments are scarce. This study aimed to investigate the effectiveness of Acceptance and Commitment Therapy (ACT) for patients with ABI. To evaluate the effect of ACT, a non-concurrent multiple baseline design across four cases was used. Participants were randomly assigned to a baseline period, followed by treatment and then follow-up phases. Anxiety and depressive symptoms were measured repeatedly. At six measurement moments over a year, participants completed questionnaires measuring anxiety, depression, stress, participation, quality of life, and ACT-related processes. Randomization tests and NAP scores were used to quantify the level of change across phases, and clinically significant change was defined with the Reliable Change Index. Three out of four participants showed medium to large decreases in anxiety and depressive symptoms (NAP = 0.85 to 0.99). Furthermore, participants showed improvements in stress, cognitive fusion, and quality of life, but not in psychological flexibility, value-driven behaviour, or social participation. This study suggests that ACT may be an effective treatment option for people experiencing ABI-related anxiety and depression symptoms. Replication with single-case or large-scale group studies is needed to confirm these findings.
Affiliation(s)
- Johanne C. C. Rauwenhoff, School for Mental Health and Neuroscience, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands; Limburg Brain Injury Centre, Maastricht, Netherlands
- Yvonne Bol, Department of Clinical and Medical Psychology, Zuyderland Medical Centre, Sittard-Geleen/Heerlen, Netherlands
- Frenk Peeters, Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Anja J. H. C. van den Hout, Department of Clinical and Medical Psychology, Zuyderland Medical Centre, Sittard-Geleen/Heerlen, Netherlands
- Chantal A. V. Geusgens, Department of Clinical and Medical Psychology, Zuyderland Medical Centre, Sittard-Geleen/Heerlen, Netherlands
- Caroline M. van Heugten, School for Mental Health and Neuroscience, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands; Limburg Brain Injury Centre, Maastricht, Netherlands; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
4
Krasny-Pacini A. Single-case experimental designs for child neurological rehabilitation and developmental disability research. Dev Med Child Neurol 2023; 65:611-624. PMID: 36721909. DOI: 10.1111/dmcn.15513.
Abstract
Single-case experimental designs (SCEDs) are a group of methodologies of growing interest, aiming to test the effectiveness of an intervention at the single-participant level using a rigorous and prospective methodology. SCEDs may promote flexibility in how we design research protocols and inform clinical decision-making, especially for personalized outcome measures, inclusion of families with challenging needs, measurement of children's progress in relation to parental implementation of interventions, and focus on personal goals. Design options for SCEDs are discussed in relation to an expected on/off effect of the intervention (e.g., school/environmental adaptation, assistive technology devices) or, alternatively, an expected carry-over/maintenance of effects (interventions aiming to develop or restore a function). Randomization in multiple-baseline designs and 'power' calculations are explained. The most frequent reasons for not detecting an intervention effect in SCEDs are also presented, especially in relation to baseline length, trend, and instability. Finally, the use of SCEDs on the front and back ends of randomized controlled trials is discussed.
Affiliation(s)
- Agata Krasny-Pacini, Pôle de Médecine Physique et de Réadaptation UF 4372, Hôpitaux Universitaires de Strasbourg, France; Service EMOI TC, Institut Universitaire de Réadaptation Clemenceau, Illkirch, France; Unité INSERM 1114 Neuropsychologie Cognitive et Physiopathologie de la Schizophrénie, Strasbourg, France; Université de Strasbourg, Faculté de Médecine, Strasbourg, France
5
van Diest SL, den Oudsten BL, Aaronson NK, Beaulen A, Verboon P, Aarnoudse B, van Lankveld JJDM. Emotionally focused couple therapy in cancer survivor couples with marital and sexual problems: a replicated single-case experimental design. Front Psychol 2023; 14:1123821. PMID: 37205090. PMCID: PMC10187887. DOI: 10.3389/fpsyg.2023.1123821.
Abstract
Objective: The current research examined the effect of Emotionally Focused Couples Therapy (EFCT) on perceived intimacy, affect, and dyadic connection in cancer survivor couples with relationship challenges.
Method: In this longitudinal replicated single-case study, positive and negative affect, intimacy, partner responsiveness, and expression of attachment-based emotional needs were reported every 3 days before and during treatment. Thirteen couples, with one partner having survived colorectal or breast cancer, participated for the full duration of the study. Statistical analysis of the data was performed using randomization tests, piecewise regression, and multilevel analyses.
Results: Adherence to the therapeutic protocol was tested and found adequate. Compared with baseline, significant positive effects on affect variables were found during the therapeutic process: positive affect increased and negative affect decreased. Partner responsiveness, perceived intimacy, and the expression of attachment-based emotional needs improved, but only in the later phase of treatment. Results at the group level were statistically significant, whereas effects at the individual level were not.
Discussion: This study found positive group-level effects of EFCT on affect and dyadic outcome measures in cancer survivors. These positive results warrant further research, including randomized clinical trials, to replicate the effects of EFCT in cancer survivor couples experiencing marital and sexual problems.
Affiliation(s)
- Selma L. van Diest, Department of Clinical Psychology, Open University of the Netherlands, Heerlen, Netherlands
- Brenda L. den Oudsten, Department of Medical and Clinical Psychology, Tilburg University, Tilburg, Netherlands
- Neil K. Aaronson, Department of Psychosocial Research, University of Amsterdam, Amsterdam, Netherlands
- Audrey Beaulen, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands
- Peter Verboon, Department of Methodology and Statistics, Open University of the Netherlands, Heerlen, Netherlands
- Jacques J. D. M. van Lankveld (correspondence), Department of Clinical Psychology, Open University of the Netherlands, Heerlen, Netherlands
6
Manolov R, Onghena P. Defining and assessing immediacy in single-case experimental designs. J Exp Anal Behav 2022; 118:462-492. PMID: 36106573. PMCID: PMC9825864. DOI: 10.1002/jeab.799.
Abstract
Immediacy is one of six data aspects (alongside level, trend, variability, overlap, and consistency) that have to be taken into account when visually analyzing single-case data. Because it has received considerably less attention than the other data aspects, the current text reviews the proposed conceptual definitions of immediacy (i.e., what it refers to) as well as the suggested operational definitions (i.e., how exactly it is assessed and/or quantified). Given that a variety of conceptual and operational definitions is identified, we propose a sensitivity analysis using a randomization test for assessing immediate effects in single-case experimental designs, identifying when changes are clearest. In such a sensitivity analysis, immediate effects are tested for multiple possible intervention points and for different possible operational definitions. Robust immediate effects can be detected if the results for the different operational definitions converge.
Affiliation(s)
- Rumen Manolov, Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona
- Patrick Onghena, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
7
Manolov R, Tanious R, Fernández-Castilla B. A proposal for the assessment of replication of effects in single-case experimental designs. J Appl Behav Anal 2022; 55:997-1024. PMID: 35467023. PMCID: PMC9324994. DOI: 10.1002/jaba.923.
Abstract
In science in general, and in the context of single-case experimental designs in particular, replication of intervention effects within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the behavioral sciences and practically absent in the single-case experimental designs context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. To make this assessment possible, a definition of replication is suggested, based on expert judgment rather than on statistical criteria. The definition of replication and its graphical representation are justified, their strengths and limitations presented, and their use illustrated with real data. User-friendly software is made available for automatically obtaining the graphical representation.
Affiliation(s)
- Rumen Manolov, Department of Social Psychology and Quantitative Psychology, University of Barcelona
- René Tanious, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
- Belén Fernández-Castilla, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
8
Tanious R, Onghena P. Applied hybrid single-case experiments published between 2016 and 2020: A systematic review. Methodological Innovations 2022. DOI: 10.1177/20597991221077910.
Abstract
Single-case experimental designs (SCEDs) are frequently used research designs in psychology, (special) education, and related fields. Hybrid designs are formed by combining two or more of the basic SCED forms (i.e. phase designs, alternation designs, multiple baseline designs, and changing criterion designs). Hybrid designs have the potential to tackle complex research questions and increase internal validity, but relatively little is known about their use in actual research practice. Therefore, we systematically reviewed SCED hybrid designs published between 2016 and 2020. The systematic review of 67 studies indicates that a hybrid of phase designs and multiple baseline designs is most popular. Hybrid designs are most frequently analyzed by means of visual analysis paired with descriptive statistics. Randomization in the study design is common only for one particular kind of hybrid design. Examples of hybrid studies reveal that these designs are particularly popular in educational research. We compare some of the results of the systematic review to those obtained by Hammond and Gast, Shadish and Sullivan, and Tanious and Onghena. Finally, we discuss the results of the present systematic review in light of the need for specific guidelines for hybrid designs, including analytical methods, design specific randomization and reporting, and the need for terminological clarification.
Affiliation(s)
- René Tanious, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
- Patrick Onghena, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Leuven, Belgium
9
Barnard-Brak L, Watkins L, Richman D. Optimal number of baseline sessions before changing phases within single-case experimental designs. Behav Processes 2021; 191:104461. PMID: 34280482. DOI: 10.1016/j.beproc.2021.104461.
Abstract
Recommendations vary considerably for the minimum or optimal number of baseline sessions to conduct within single-case experimental design clinical analyses or research studies. We examined the optimal number of baseline sessions that produced minimal bias. First, we examined the relation between the number of baseline sessions and the degree of bias in estimates of treatment effect size: as the number of baseline sessions increased, the bias in effect size estimates decreased, r = -0.36, p < 0.001. Second, we examined the minimum number of baseline sessions associated with varying levels of bias: bias of approximately ten percent was associated with four to five baseline sessions, and bias of about five percent with six to seven baseline sessions. Third, we examined the relation between the standard deviation and varying levels of bias: as the number of baseline sessions increased, the standard deviation for the phase decreased, r = -0.89, p < 0.001. Fourth, we examined what value of the baseline-phase standard deviation was associated with five versus ten percent bias or more: for both, the optimal level of standard deviation was 0.59 or less.
Affiliation(s)
- Lucy Barnard-Brak, The University of Alabama, Capital Hall 1807, Box 870232, Tuscaloosa, AL 35487, United States
- Laci Watkins, The University of Alabama, Capital Hall 1807, Box 870232, Tuscaloosa, AL 35487, United States
- David Richman, Texas Tech University, PO Box 41071, Lubbock, TX 79409, United States
10
Manolov R, Moeyaert M, Fingerhut JE. A Priori Justification for Effect Measures in Single-Case Experimental Designs. Perspect Behav Sci 2021; 45:153-186. DOI: 10.1007/s40614-021-00282-2.
11
A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behav Res Methods 2020; 53:1371-1384. PMID: 33104956. DOI: 10.3758/s13428-020-01502-4.
Abstract
Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016-2018). Specifically, we were interested in which designs are most frequently used, how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods are used. The systematic review of 423 studies suggests that the multiple baseline design continues to be the most widely used design and that the difference in level (central tendency) is by far the most popular data aspect in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis. However, inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of the findings of earlier systematic reviews and suggest future directions for the development of SCED methodology.
12
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. PMID: 32242039. PMCID: PMC7118083. DOI: 10.1038/s41598-020-62450-9.
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
- Dana Maslovat, School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga, National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
13
Tanious R, Onghena P. Randomized Single-Case Experimental Designs in Healthcare Research: What, Why, and How? Healthcare (Basel) 2019; 7:E143. PMID: 31766188. PMCID: PMC6955662. DOI: 10.3390/healthcare7040143.
Abstract
Health problems are often idiosyncratic in nature and therefore require individualized diagnosis and treatment. In this paper, we show how single-case experimental designs (SCEDs) can meet the requirement to find and evaluate individually tailored treatments. We give a basic introduction to the methodology of SCEDs and provide an overview of the available design options. For each design, we show how an element of randomization can be incorporated to increase the internal and statistical conclusion validity and how the obtained data can be analyzed using visual tools, effect size measures, and randomization inference. We illustrate each design and data analysis technique using applied data sets from the healthcare literature.
Affiliation(s)
- René Tanious, Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, 3000 Leuven, Belgium
14
Tanious R, De TK, Onghena P. A multiple randomization testing procedure for level, trend, variability, overlap, immediacy, and consistency in single-case phase designs. Behav Res Ther 2019; 119:103414. DOI: 10.1016/j.brat.2019.103414.
15
Onghena P, Tanious R, De TK, Michiels B. Randomization tests for changing criterion designs. Behav Res Ther 2019; 117:18-27. DOI: 10.1016/j.brat.2019.01.005.
16
Tanious R, De TK, Michiels B, Van den Noortgate W, Onghena P. Assessing Consistency in Single-Case A-B-A-B Phase Designs. Behav Modif 2019; 44:518-551. PMID: 30931585. DOI: 10.1177/0145445519837726.
Abstract
Previous research has introduced several effect size measures (ESMs) to quantify data aspects of single-case experimental designs (SCEDs): level, trend, variability, overlap, and immediacy. In the current article, we extend the existing literature by introducing two methods for quantifying consistency in single-case A-B-A-B phase designs. The first method assesses the consistency of data patterns across phases implementing the same condition, called CONsistency of DAta Patterns (CONDAP). The second measure assesses the consistency of the five other data aspects when changing from baseline to experimental phase, called CONsistency of the EFFects (CONEFF). We illustrate the calculation of both measures for four A-B-A-B phase designs from published literature and demonstrate how CONDAP and CONEFF can supplement visual analysis of SCED data. Finally, we discuss directions for future research.
17
Robert C. L'utilisation de protocoles individuels expérimentaux et quasi-expérimentaux en psychologie : aspects théoriques et méthodologiques [The use of individual experimental and quasi-experimental protocols in psychology: theoretical and methodological aspects]. Année Psychologique 2019. DOI: 10.3917/anpsy1.191.0055.
18
Vannest KJ, Peltier C, Haas A. Results reporting in single case experiments and single case meta-analysis. Res Dev Disabil 2018; 79:10-18. PMID: 29960830. DOI: 10.1016/j.ridd.2018.04.029.
Abstract
Single Case Experimental Design is a discipline grounded in applied behavior analysis, where the needs of individual clients and the application of scientific inquiry are fundamental tenets. These two principles remain paramount in the conduct of research using this methodology and in the expansion of the method into evidence-based practice determinations. Although recommendations for quality indicators are widespread, implementation is not. Concurrent with the rise of quality indicators is an increasing interest in analysis methodology. Visual analysis has a history of application and validity; newer forms of analysis less so. While some argue for concordance between the two, it may be the differences that are worth exploring to understand characteristics of trend and variability in much of the published literature. Design choices and visual analysis decisions are rarely fully articulated, and statistical analyses are likewise inadequately justified or described. Recommendations for the explicit language of reporting, derived from prior meta-analysis and a current review of two leading journals, provide a scaffold consistent with existing guidelines but additive in detail, exemplars, and justification. This is intended to improve the reporting of results for individual studies and their potential use in future meta-analytic work.
20
Manolov R, Solanas A. Quantifying differences between conditions in single-case designs: Possible analysis and meta-analysis. Dev Neurorehabil 2018; 21:238-252. PMID: 26809851. DOI: 10.3109/17518423.2015.1100688.
Abstract
The current paper is a call for, and illustration of, a way of closing the gap between basic research and professional practice in the field of neurorehabilitation. Methodologically, single-case experimental designs and the guidelines created regarding their conduct are highlighted. Statistically, we review two data-analytical options: (a) indices quantifying the difference between pairs of conditions in the same metric as the target behavior, and (b) a formal statistical procedure offering a standardized overall quantification. The paper provides guidance on the analysis and suggests free software in order to illustrate, in the context of data from behavioral interventions with children with developmental disorders, that informative analyses are feasible. We also show how the results of individual studies can be made eligible for meta-analyses, which are useful for establishing the evidence base of interventions. Nevertheless, we also point out decisions that need to be made during the process of data analysis.
Affiliation(s)
- Rumen Manolov
- Department of Behavioral Sciences Methods, University of Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behavior (IR3C), University of Barcelona, Barcelona, Spain
- Antonio Solanas
- Department of Behavioral Sciences Methods, University of Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behavior (IR3C), University of Barcelona, Barcelona, Spain

21
Nonparametric meta-analysis for single-case research: Confidence intervals for combined effect sizes. Behav Res Methods 2018; 51:1145-1160. [DOI: 10.3758/s13428-018-1044-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
22
Krasny-Pacini A, Evans J. Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Ann Phys Rehabil Med 2017; 61:164-179. [PMID: 29253607 DOI: 10.1016/j.rehab.2017.12.002] [Citation(s) in RCA: 146] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 11/29/2017] [Accepted: 12/10/2017] [Indexed: 11/15/2022]
Abstract
Single-case experimental designs (SCEDs) are experimental designs aimed at testing the effect of an intervention with a small number of patients (typically one to three), using repeated measurements, sequential (± randomized) introduction of an intervention, and method-specific data analysis, including visual analysis and specific statistics. The aim of this paper is to familiarise professionals working in different fields of rehabilitation with SCEDs and to provide practical advice on how to design and implement a SCED in clinical rehabilitation practice. Research questions suitable for SCEDs and the different types of SCEDs (e.g., alternating treatment designs, introduction/withdrawal designs, and multiple baseline designs) are reviewed. Practical steps in preparing a SCED are outlined. Examples from different rehabilitation domains are provided throughout the paper. Challenging issues such as the choice of the repeated measure, assessment of generalisation, randomization, procedural fidelity, replication, and generalizability of findings are discussed. Simple rules and resources for data analysis are presented. The utility of SCEDs in physical and rehabilitation medicine (PRM) is discussed.
Affiliation(s)
- Agata Krasny-Pacini
- Institut universitaire de réadaptation Clemenceau-Strasbourg, 45, boulevard Clemenceau, 67082 Strasbourg, France; Service de chirurgie orthopédique infantile, hôpital de Hautepierre, CHU de Strasbourg, avenue Molière, 67098 Strasbourg, France; GRC handicap cognitif et réadaptation (HanCRe), hôpitaux universitaires Pitié-Salpêtrière, 75013 Paris, France.
- Jonathan Evans
- Institute of Health and Wellbeing, University of Glasgow, The Academic Centre, Gartnavel Royal Hospital, 1055 Great Western Road, Glasgow G12 0XH, United Kingdom
23
Analytical Options for Single-Case Experimental Designs: Review and Application to Brain Impairment. BRAIN IMPAIR 2017. [DOI: 10.1017/brimp.2017.17] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Single-case experimental designs meeting evidence standards are useful for identifying empirically-supported practices. Part of the research process entails data analysis, which can be performed both visually and numerically. In the current text, we discuss several statistical techniques focusing on the descriptive quantifications that they provide on aspects such as overlap, difference in level and in slope. In both cases, the numerical results are interpreted in light of the characteristics of the data as identified via visual inspection. Two previously published data sets from patients with traumatic brain injury are re-analysed, illustrating several analytical options and the data patterns for which each of these analytical techniques is especially useful, considering their assumptions and limitations. In order to make the current review maximally informative for applied researchers, we point to free user-friendly web applications of the analytical techniques. Moreover, we offer up-to-date references to the potentially useful analytical techniques not illustrated in the article. Finally, we point to some analytical challenges and offer tentative recommendations about how to deal with them.
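As a concrete illustration of the overlap quantifications discussed, the Percentage of Nonoverlapping Data (PND) is one widely used index. The sketch below uses hypothetical data and is not tied to the web applications the article points to:

```python
def pnd(baseline, treatment, expect_decrease=True):
    """Percentage of Nonoverlapping Data: the share of treatment-phase
    points that lie beyond the most extreme baseline point in the
    direction of the expected change."""
    if expect_decrease:
        threshold = min(baseline)
        nonoverlap = sum(1 for y in treatment if y < threshold)
    else:
        threshold = max(baseline)
        nonoverlap = sum(1 for y in treatment if y > threshold)
    return 100.0 * nonoverlap / len(treatment)

print(pnd([8, 7, 9, 8, 7], [4, 3, 5, 3, 8]))  # 80.0: 4 of 5 points below min(baseline)
```

As the article's caveats about assumptions suggest, PND is sensitive to a single extreme baseline point, which is one reason the numerical result should be read alongside visual inspection.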
24
The conditional power of randomization tests for single-case effect sizes in designs with randomized treatment order: A Monte Carlo simulation study. Behav Res Methods 2017; 50:557-575. [DOI: 10.3758/s13428-017-0885-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
25
An Acceptance and Commitment Therapy (ACT) intervention for Chronic Fatigue Syndrome (CFS): A case series approach. JOURNAL OF CONTEXTUAL BEHAVIORAL SCIENCE 2017. [DOI: 10.1016/j.jcbs.2017.04.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
26
Manolov R, Moeyaert M. Recommendations for Choosing Single-Case Data Analytical Techniques. Behav Ther 2017; 48:97-114. [PMID: 28077224 DOI: 10.1016/j.beth.2016.04.008] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/23/2015] [Revised: 04/13/2016] [Accepted: 04/30/2016] [Indexed: 11/29/2022]
Abstract
The current paper responds to the need to provide guidance to applied single-case researchers regarding the possibilities of data analysis. The number of available single-case data analytical techniques has grown in recent years, and a general overview comparing the possibilities of these techniques is missing. Such an overview is provided here, covering techniques that yield results in terms of a raw or standardized difference, procedures related to regression analysis, and nonoverlap and percentage change indices. The comparison is provided in terms of the type of quantification provided, the data features taken into account, the conditions in which the techniques are appropriate, the possibilities for meta-analysis, and the evidence available on their performance. Moreover, we provide a set of recommendations for choosing appropriate analysis techniques, pointing to specific situations (aims, types of data, researchers' resources) and the data analytical techniques that are most appropriate in those situations. The recommendations are contextualized using a variety of published single-case data sets in order to illustrate a range of realistic situations that researchers have faced and may face in their investigations.
27
Manolov R. Reporting single-case design studies: Advice in relation to the designs’ methodological and analytical peculiarities. ANUARIO DE PSICOLOGIA 2017. [DOI: 10.1016/j.anpsic.2017.05.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
28
Manolov R, Moeyaert M. How Can Single-Case Data Be Analyzed? Software Resources, Tutorial, and Reflections on Analysis. Behav Modif 2016; 41:179-228. [DOI: 10.1177/0145445516664307] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The present article aims to present a series of software developments in the quantitative analysis of data obtained via single-case experimental designs (SCEDs), as well as the tutorial describing these developments. The tutorial focuses on software implementations based on freely available platforms such as R and aims to bring statistical advances closer to applied researchers and help them become autonomous agents in the data analysis stage of a study. The range of analyses dealt with in the tutorial is illustrated on a typical single-case dataset, relying heavily on graphical data representations. We illustrate how visual and quantitative analyses can be used jointly, giving complementary information and helping the researcher decide whether there is an intervention effect, how large it is, and whether it is practically significant. To help applied researchers in the use of the analyses, we have organized the data in the different ways required by the different analytical procedures and made these data available online. We also provide Internet links to all free software available, as well as all the main references to the analytical techniques. Finally, we suggest that appropriate and informative data analysis is likely to be a step forward in documenting and communicating results and also for increasing the scientific credibility of SCEDs.
29
Confidence intervals for single-case effect size measures based on randomization test inversion. Behav Res Methods 2016; 49:363-381. [DOI: 10.3758/s13428-016-0714-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
30
Lanovaz MJ, Rapp JT. Using Single-Case Experiments to Support Evidence-Based Decisions: How Much Is Enough? Behav Modif 2015; 40:377-95. [PMID: 26538276 DOI: 10.1177/0145445515613584] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
For practitioners, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: How many single-case experiments are enough to have sufficient confidence that an intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. The purpose of our article is to address this issue by presenting guidelines to facilitate evidence-based decisions by adopting a simple statistical approach to quantify the support for interventions that have been validated using SCEDs. Specifically, we propose the use of success rates as a supplement to support evidence-based decisions. The proposed methodology allows practitioners to aggregate the results from single-case experiments to estimate the probability that a given intervention will produce a successful outcome. We also discuss considerations and limitations associated with this approach.
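At its simplest, the proposed success-rate supplement is a proportion of successful replications, optionally accompanied by an interval estimate. The Wilson score interval below is a common choice for small samples and is an assumption on our part, not necessarily the authors' exact formulation; the data are hypothetical:

```python
def success_rate(outcomes):
    """Proportion of single-case experiments judged successful (1) vs. not (0)."""
    return sum(outcomes) / len(outcomes)

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z ** 2 / (4 * n ** 2)) ** 0.5) / denom
    return center - half, center + half

outcomes = [1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical: 8 experiments, 6 successes
print(success_rate(outcomes))  # 0.75
print(wilson_interval(sum(outcomes), len(outcomes)))  # roughly (0.41, 0.93)
```

The wide interval for eight experiments illustrates the article's point: with the small numbers of replications typical of SCED literatures, the estimated probability of a successful outcome carries considerable uncertainty.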
31
The consequences of modeling autocorrelation when synthesizing single-case studies using a three-level model. Behav Res Methods 2015; 48:803-12. [DOI: 10.3758/s13428-015-0612-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
32
Manolov R, Rochat L. Further developments in summarising and meta-analysing single-case data: An illustration with neurobehavioural interventions in acquired brain injury. Neuropsychol Rehabil 2015. [PMID: 26214248 DOI: 10.1080/09602011.2015.1064452] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Data analysis for single-case designs is an issue that has prompted many researchers to propose a variety of alternatives, including use of randomisation tests, regression-based procedures, and standardised mean difference. Another option consists in computing unstandardised or raw differences between conditions: the changes in slope and in level, or the difference between the projected baseline (including trend) and the actual treatment phase measurements. Apart from the strengths of these procedures (potentially easier interpretation clinically, separate estimations and an overall quantification of effects, reasonable performance), they require further development, such as (a) creating extensions for dealing with methodologically strong designs such as multiple baseline, (b) achieving comparability across studies and making possible meta-analytical integrations, and (c) implementing software for the extensions. The proposals are illustrated herein in the context of a meta-analysis of 28 studies on (neuro)behavioural interventions in adults who have challenging behaviours after acquired brain injury.
Affiliation(s)
- Rumen Manolov
- Department of Behavioural Sciences Methods, University of Barcelona, Barcelona, Spain

33
Meany-Walen KK, Bullis Q, Kottman T, Dillman Taylor D. Group Adlerian Play Therapy With Children With Off-Task Behaviors. JOURNAL FOR SPECIALISTS IN GROUP WORK 2015. [DOI: 10.1080/01933922.2015.1056569] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
34
Manolov R, Jamieson M, Evans JJ, Sierra V. Probability and Visual Aids for Assessing Intervention Effectiveness in Single-Case Designs. Behav Modif 2015; 39:691-720. [DOI: 10.1177/0145445515593512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One of the procedures, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore the extent to which these two procedures coincide regarding the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided to promote the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach.
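A split-middle trend of the kind the visual aid fits and projects can be sketched as follows. This is a generic textbook version with hypothetical data, not the R code the authors provide:

```python
import statistics

def split_middle(y):
    """Fit a split-middle trend to a phase: the line through the
    (median time, median value) point of each half of the series."""
    n = len(y)
    half = n // 2
    x = list(range(n))
    x1, y1 = statistics.median(x[:half]), statistics.median(y[:half])
    x2, y2 = statistics.median(x[n - half:]), statistics.median(y[n - half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1  # slope and intercept

baseline = [3, 4, 4, 5, 6, 6]  # hypothetical baseline series
slope, intercept = split_middle(baseline)
# Project the baseline trend into a 4-session treatment phase for comparison
projected = [intercept + slope * t for t in range(6, 10)]
print(round(slope, 2))  # 0.67
```

Comparing actual treatment-phase measurements against such a projection (ideally with an envelope reflecting baseline variability, as the visual aid does) helps distinguish an intervention effect from a continuation of pre-existing trend.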
Affiliation(s)
- Rumen Manolov
- University of Barcelona, Spain
- Ramon Llull University, Barcelona, Spain

35
Manolov R, Gast DL, Perdices M, Evans JJ. Single-case experimental designs: reflections on conduct and analysis. Neuropsychol Rehabil 2014; 24:634-60. [PMID: 24779416 DOI: 10.1080/09602011.2014.903199] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
In this editorial discussion we reflect on the issues addressed by, and arising from, the papers in this special issue on Single-Case Experimental Design (SCED) study methodology. We identify areas of consensus and disagreement regarding the conduct and analysis of SCED studies. Despite the long history of application of SCEDs in studies of interventions in clinical and educational settings, the field is still developing. There is an emerging consensus on methodological quality criteria for many aspects of SCEDs, but disagreement on what are the most appropriate methods of SCED data analysis. Our aim is to stimulate this ongoing debate and highlight issues requiring further attention from applied researchers and methodologists. In addition we offer tentative criteria to support decision-making in relation to the selection of analytical techniques in SCED studies. Finally, we stress that large-scale interdisciplinary collaborations, such as the current Special Issue, are necessary if SCEDs are going to play a significant role in the development of the evidence base for clinical practice.
Affiliation(s)
- Rumen Manolov
- Department of Behavioural Sciences Methods, University of Barcelona, Spain

36
Evans JJ, Gast DL, Perdices M, Manolov R. Single case experimental designs: introduction to a special issue of Neuropsychological Rehabilitation. Neuropsychol Rehabil 2014; 24:305-14. [PMID: 24766415 DOI: 10.1080/09602011.2014.903198] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
This paper introduces the Special Issue of Neuropsychological Rehabilitation on Single Case Experimental Design (SCED) methodology. SCED studies have a long history of use in evaluating behavioural and psychological interventions, but in recent years there has been a resurgence of interest in SCED methodology, driven in part by the development of standards for conducting and reporting SCED studies. Although there is consensus on some aspects of SCED methodology, the question of how SCED data should be analysed remains unresolved. This Special Issue includes two papers discussing aspects of conducting SCED studies, five papers illustrating use of SCED methodology in clinical practice, and nine papers that present different methods of SCED data analysis. A final Discussion paper summarises points of agreement, highlights areas where further clarity is needed, and ends with a set of resources that will assist researchers in conducting and analysing SCED studies.
Affiliation(s)
- Jonathan J Evans
- Institute of Health and Wellbeing, University of Glasgow, Scotland, UK

37
Heyvaert M, Onghena P. Randomization tests for single-case experiments: State of the art, state of the science, and state of the application. JOURNAL OF CONTEXTUAL BEHAVIORAL SCIENCE 2014. [DOI: 10.1016/j.jcbs.2013.10.002] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]