1
Beard J, Cooper Z, Masson P, Mountford VA, Murphy R, Raykos B, Tatham M, Thomas JJ, Turner HM, Wade TD, Waller G. Assessing clinician competence in the delivery of cognitive-behavioural therapy for eating disorders: development of the Cognitive-Behavioural Therapy Scale for Eating Disorders (CBTS-ED). Cogn Behav Ther 2024; 53:29-47. [PMID: 37807843] [DOI: 10.1080/16506073.2023.2263640]
Abstract
Evidence-based cognitive-behaviour therapy for eating disorders (CBT-ED) differs from other forms of CBT for psychological disorders, making existing generic CBT measures of therapist competence inadequate for evaluating CBT-ED. This study developed and piloted the reliability of a novel measure of therapist competence in this domain: the Cognitive Behaviour Therapy Scale for Eating Disorders (CBTS-ED). Initially, a team of CBT-ED experts developed a 26-item measure, with general (i.e. present in every session) and specific (context- or case-dependent) items. To determine the statistical properties of the measure, nine CBT-ED experts and eight non-experts independently observed six role-played mock CBT-ED therapy sessions, rating the therapists' performance using the CBTS-ED. Inter-item consistency (Cronbach's alpha and McDonald's omega) and inter-rater reliability (intraclass correlation coefficient; ICC) were assessed, as appropriate to the clustering of the items. The CBTS-ED demonstrated good internal consistency and moderate-to-good inter-rater reliability for the general items, at least comparable to existing generic CBT scales in other domains. An updated version is proposed, in which five of the 16 "specific" items are reallocated to the general group. These preliminary results suggest that the CBTS-ED can be used effectively by both expert and non-expert raters, though less experienced raters might benefit from additional training in its use.
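The internal-consistency statistic named above, Cronbach's alpha, can be computed directly from an observations-by-items score matrix. A minimal sketch in Python/NumPy; the function name and example data are illustrative, not drawn from the CBTS-ED study:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 3 sessions scored on 2 items that move in lockstep,
# so the scale is perfectly internally consistent (alpha = 1).
perfect = np.array([[1, 1], [2, 2], [3, 3]], dtype=float)
print(cronbach_alpha(perfect))  # -> 1.0
```

McDonald's omega, also reported in the abstract, additionally requires factor loadings from a fitted factor model, so it is not shown here.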
Affiliation(s)
- Jessica Beard: Department of Psychology, University of Sheffield, Sheffield, UK
- Zafra Cooper: Department of Psychiatry, Yale University, New Haven, USA
- Philip Masson: Department of Psychology, Western University, London, Canada
- Victoria A Mountford: Sage Clinics, Dubai, UAE; Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Rebecca Murphy: Department of Psychiatry, University of Oxford, Oxford, UK
- Bronwyn Raykos: Centre for Clinical Interventions, Northbridge, Western Australia
- Madeleine Tatham: Department of Psychology, University of Sheffield, Sheffield, UK
- Jennifer J Thomas: Department of Psychiatry, Harvard Medical School and Massachusetts General Hospital, Boston, USA
- Hannah M Turner: Eating Disorders Service, Southern Health NHS Foundation Trust, Southampton, UK
- Tracey D Wade: College of Education, Psychology and Social Work, Flinders University, Bedford Park, South Australia
- Glenn Waller: Department of Psychology, University of Sheffield, Sheffield, UK
2
Alfonsson S, Karvelas G, Linde J, Beckman M. A new short version of the Cognitive Therapy Scale Revised (CTSR-4): preliminary psychometric evaluation. BMC Psychol 2022; 10:21. [PMID: 35120569] [PMCID: PMC8817471] [DOI: 10.1186/s40359-022-00730-x]
Abstract
Background: The value of using comprehensive but cumbersome coding instruments to assess therapeutic competency is unclear. Shorter, more general instruments may enable more research in this important area. The aim of this study was therefore to psychometrically evaluate a shorter version of the Cognitive Therapy Scale-Revised (CTSR) and to compare it with the full-length version. Methods: A four-item coding instrument (the CTSR-4) was derived from the CTSR. Four experienced psychotherapists used the CTSR-4 to assess 50 fifteen-minute samples from audio-recorded CBT sessions. The criterion validity of the CTSR-4 was analyzed by comparing the results with previously expert-rated CTSR scores from the same sessions, and the inter-rater agreement between the coders was calculated. Results: The CTSR-4 showed good criterion validity (ICC = .71-.88) when compared with the expert ratings of the complete CTSR, and the inter-rater agreement was adequate (ICC = .64-.79). Conclusions: A condensed version of the CTSR, used to assess CBT competence from shorter samples of therapy sessions, is moderately reliable and may provide results similar to the full-length version. According to preliminary analyses, the CTSR-4 has potential as a low-cost alternative for assessing CBT competency in both research and psychotherapist training.
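The ICC figures quoted in studies like this one typically come from a two-way model in which both subjects and raters are treated as random effects. A compact sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), checked against the widely reproduced Shrout and Fleiss (1979) example data; the function name is illustrative and this is not the authors' analysis code:

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC
    for an (n_subjects x k_raters) score matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters mean square
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # error mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Classic 6-subjects-by-4-raters illustration (Shrout & Fleiss, 1979);
# the published ICC(2,1) for these data is approximately 0.29.
ratings = np.array([[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
                    [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]], dtype=float)
print(round(icc_2_1(ratings), 2))  # -> 0.29
```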
Affiliation(s)
- Sven Alfonsson: Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; Stockholm Health Care Services, Stockholm, Sweden
- Georgios Karvelas: Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; Stockholm Health Care Services, Stockholm, Sweden
- Johanna Linde: Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; Stockholm Health Care Services, Stockholm, Sweden
- Maria Beckman: Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; Stockholm Health Care Services, Stockholm, Sweden
3
The quality of research exploring in-session measures of CBT competence: a systematic review. Behav Cogn Psychother 2021; 50:40-56. [PMID: 34158144] [DOI: 10.1017/s1352465821000242]
Abstract
BACKGROUND Cognitive behavioural therapy (CBT) is in high demand due to its strong evidence base and cost effectiveness. To ensure CBT is delivered as intended in research, training and practice, fidelity assessment is needed. Fidelity is commonly measured by assessors rating treatment sessions using CBT competence scales (CCSs). AIMS The current review assessed the quality of the literature examining the measurement properties of CCSs and makes recommendations for future research, training and practice. METHOD Medline, PsycINFO, Scopus and Web of Science databases were systematically searched to identify relevant peer-reviewed, English-language studies from 1980 onwards. Relevant studies were those primarily examining the measurement properties of CCSs used to assess adult one-to-one CBT treatment sessions. The quality of studies was assessed using a novel tool created for this study, following which a narrative synthesis is presented. RESULTS Ten studies met the inclusion criteria, most of which were assessed as being of 'fair' methodological quality, primarily due to small sample sizes. Construct validity and responsiveness definitions were applied inconsistently across the studies, leading to confusion over what was being measured. CONCLUSIONS Although CCSs are widely used, we need to pay careful attention to the quality of research exploring their measurement properties. Consistent definitions of measurement properties, consensus about adequate sample sizes and improved reporting of individual properties are required to ensure the quality of future research.
4
Beale S, Vitoratou S, Liness S. An investigation into the factor structure of the Cognitive Therapy Scale - Revised (CTS-R) in a CBT training sample. Behav Cogn Psychother 2021; 49:1-11. [PMID: 33455609] [DOI: 10.1017/s1352465820000983]
Abstract
BACKGROUND Effective monitoring of cognitive behaviour therapy (CBT) competence depends on psychometrically robust assessment methods. While the UK Cognitive Therapy Scale - Revised (CTS-R; Blackburn et al., 2001) has become a widely used competence measure in CBT training, practice and research, its underlying factor structure has never been investigated. AIMS This study aimed to present the first investigation into the factor structure of the CTS-R based on a large sample of postgraduate CBT trainee recordings. METHOD Trainees (n = 382) provided 746 mid-treatment audio recordings for depression (n = 373) and anxiety (n = 373) cases scored on the CTS-R by expert markers. Tapes were split into two equal samples, counterbalanced by diagnosis and with one tape per trainee. Exploratory factor analysis was conducted. The suggested factor structure and a widely used theoretical two-factor model were tested with confirmatory factor analysis. Measurement invariance was assessed by diagnostic group (depression versus anxiety). RESULTS Exploratory factor analysis suggested a single-factor solution (98.68% explained variance), which was supported by confirmatory factor analysis. All 12 CTS-R items were found to contribute to this single factor. The univariate model demonstrated full metric invariance and partial scalar invariance by diagnosis, with one item (item 10 - Conceptual Integration) demonstrating scalar non-invariance. CONCLUSIONS Findings indicate that the CTS-R is a robust, homogeneous measure and do not support division into the widely used theoretical generic versus CBT-specific competency subscales. Investigation into the CTS-R factor structure in other populations is warranted.
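A quick way to probe a single-factor finding like the one reported here is to check how much of the inter-item correlation is carried by the first principal component; a dominant first eigenvalue is consistent with (though much weaker evidence than) a formal EFA/CFA. A rough sketch with simulated data only; the function name, item counts and thresholds are illustrative, not taken from the study:

```python
import numpy as np

def first_component_share(scores: np.ndarray) -> float:
    """Fraction of total variance in the inter-item correlation matrix
    explained by its first principal component: a crude unidimensionality check."""
    corr = np.corrcoef(scores, rowvar=False)   # items are columns
    eigvals = np.linalg.eigvalsh(corr)         # ascending order
    return eigvals[-1] / eigvals.sum()

rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))                       # one shared latent factor
one_factor = common + 0.3 * rng.normal(size=(500, 12))   # 12 "items", mostly factor
noise_only = rng.normal(size=(500, 12))                  # no shared factor

print(first_component_share(one_factor) > 0.8)   # dominant first component: True
print(first_component_share(noise_only) < 0.3)   # no dominant component: True
```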
Affiliation(s)
- Sarah Beale: Institute of Psychiatry, Psychology & Neuroscience, King's College London, London SE5 8AF, UK
- Silia Vitoratou: Psychometrics & Measurement Lab, Department of Biostatistics and Health Informatics, King's College London, London, UK
- Sheena Liness: Institute of Psychiatry, Psychology & Neuroscience, King's College London, London SE5 8AF, UK
5
Kellett S, Simmonds-Buckley M, Limon E, Hague J, Hughes L, Stride C, Millings A. Defining the Assessment and Treatment Competencies to Deliver Low-Intensity Cognitive Behavior Therapy: A Multi-Center Validation Study. Behav Ther 2021; 52:15-27. [PMID: 33483113] [DOI: 10.1016/j.beth.2020.01.006]
Abstract
Despite the vastly increased dissemination of the low-intensity (LI) version of cognitive behavior therapy (CBT) for the treatment of anxiety and depression, no valid and reliable indices of LI-CBT clinical competencies currently exist. This research therefore sought to develop and evaluate two measures: the low-intensity assessment competency scale (LIAC) and the low-intensity treatment competency scale (LITC). Inductive and deductive methods were used to construct the competency scales, and detailed rating manuals were prepared. Two studies were then completed. The first study used a quantitative, fully-crossed design and the second a multi-center, quantitative longitudinal design. In study one, novice, qualified and expert LI-CBT practitioners rated an LI-CBT assessment session (using the LIAC) and an LI-CBT treatment session (using the LITC). Study two used the LIAC and LITC across four training sites to analyze the competencies of LI-CBT practitioners over time, across raters, and in relation to the actor/patients' feedback concerning helpfulness, the alliance and willingness to return. Both the LIAC and LITC were found to be single-factor scales with good internal consistency and test-retest reliability and reasonable inter-rater reliability. Both measures were sensitive to change in clinical competence. The LIAC had good concurrent, criterion, discriminant and predictive validity, while the LITC had good concurrent, criterion and predictive validity, but limited discriminant validity. A score of 18 accurately delineated a minimum level of competence in LI-CBT assessment and treatment practice, with incompetent practice associated with patient disengagement. These observational rating scales can contribute to the clinical governance of the burgeoning use of LI-CBT interventions for anxiety and depression, both in routine services and in the methods of controlled studies.
Affiliation(s)
- Stephen Kellett: University of Sheffield; Sheffield Health and Social Care NHS Foundation Trust, UK
6
Serfaty M, Shafran R, Vickerstaff V, Aspden T. A pragmatic approach to measuring adherence in treatment delivery in psychotherapy. Cogn Behav Ther 2020; 49:347-360. [PMID: 32114905] [DOI: 10.1080/16506073.2020.1717594]
Abstract
Measuring therapists' adherence to treatment manuals is recommended for evaluating treatment integrity, yet ways to do this are poorly defined, time-consuming and costly. The aims of the study were to develop a Therapy Component Checklist (TCC) to measure adherence to manualised CBT; to test its application in research and clinical practice; to determine its validity; and to consider its cost benefits. We conducted a randomised trial in 230 people with cancer evaluating the effectiveness of CBT for depression, in which therapists delivered manualised treatment. Experts agreed on the key components of therapy, and therapists were asked to record these after therapy sessions by ticking a TCC. Inter-rater reliability was tested using an independent rater. Therapists delivered 543 CBT sessions; TCCs were completed in 293, of which 39 were assessed by the independent rater. Self-reported TCC data suggested close adherence to the manual. Prevalence-adjusted and bias-adjusted kappa scores suggested substantial agreement (>0.60) in 38 out of 46 items. Self-rating of adherence saved around £96 per rating. In conclusion, the TCC provides a quick and cost-effective way of evaluating the components of therapy delivered. This approach could be applied to other psychological treatments and may help with linking therapeutic interventions with outcome.
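The agreement statistic used in this study, prevalence- and bias-adjusted kappa (PABAK), reduces for binary items to a simple rescaling of the observed proportion of agreement. A minimal sketch; the function name and example ratings are hypothetical, not the study's data:

```python
def pabak(rater_a: list, rater_b: list) -> float:
    """Prevalence- and bias-adjusted kappa for two binary raters:
    PABAK = 2 * observed_agreement - 1, ranging from -1 to 1."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and of equal length")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 2 * agreements / len(rater_a) - 1

# Hypothetical therapist self-report vs. independent rater on five checklist
# items: 4/5 observed agreement -> PABAK = 2 * 0.8 - 1 = 0.6.
print(round(pabak([1, 1, 0, 0, 1], [1, 1, 0, 0, 0]), 2))  # -> 0.6
```

Unlike Cohen's kappa, PABAK does not depend on the marginal prevalence of the rated behaviour, which is why it is often preferred when most items are ticked (or most are not).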
Affiliation(s)
- Marc Serfaty: Division of Psychiatry, University College London, London, UK
- Roz Shafran: UCL Great Ormond Street Institute of Child Health, University College London, London, UK
- Victoria Vickerstaff: Marie Curie Palliative Care Research Department, University College London, London, UK
- Trefor Aspden: Division of Psychiatry, University College London, London, UK
7
Concordance between clinician, supervisor and observer ratings of therapeutic competence in CBT and treatment as usual: does clinician competence or supervisor session observation improve agreement? Behav Cogn Psychother 2019; 48:350-363. [PMID: 31806076] [DOI: 10.1017/s1352465819000699]
Abstract
BACKGROUND Lowering the cost of assessing clinicians' competence could promote the scalability of evidence-based treatments such as cognitive behavioral therapy (CBT). AIMS This study examined the concordance between clinicians', supervisors' and independent observers' session-specific ratings of clinician competence in school-based CBT and treatment as usual (TAU). It also investigated the association between clinician competence and supervisory session observation and rater agreement. METHOD Fifty-nine school-based clinicians (90% female, 73% Caucasian) were randomly assigned to implement TAU or modular CBT for youth anxiety. Clinicians rated their confidence after each therapy session (n = 1898), and supervisors rated clinicians' competence after each supervision session (n = 613). Independent observers rated clinicians' competence from audio recordings (n = 395). RESULTS Patterns of rater discrepancies differed between the TAU and CBT groups. Correlations with independent raters were low across groups. Clinician competence and session observation were associated with higher agreement among TAU, but not CBT, supervisors and clinicians. CONCLUSIONS These results support the gold standard practice of obtaining independent ratings of adherence and competence in implementation contexts. Further development of measures and/or rater training methods for clinicians and supervisors is needed.
8
Judging clinical competence using structured observation tools: A cautionary tale. Behav Cogn Psychother 2019; 47:736-744. [DOI: 10.1017/s1352465819000316]
Abstract
Background: One method for appraising the competence with which psychological therapy is delivered is to use a structured assessment tool that rates audio or video recordings of therapist performance against a standard set of criteria. Aims: The present study examines the inter-rater reliability of a well-established instrument (the Cognitive Therapy Scale - Revised) and a newly developed scale for assessing competence in CBT. Method: Six experienced raters, working independently and blind to each other's ratings, rated 25 video recordings of therapy being undertaken by CBT therapists in training. Results: Inter-rater reliability was found to be low on both instruments. Conclusions: It is argued that the results represent a realistic appraisal of the accuracy of rating scales, and that the figures often cited for inter-rater reliability are unlikely to be generalizable outside the specific context in which they were achieved. The findings raise concerns about the use of these scales for making summative judgements of clinical competence in both educational and research contexts.
9
Supporting our supervisors: a summary and discussion of the Special Issue on CBT supervision. Cognitive Behaviour Therapist 2016. [DOI: 10.1017/s1754470x16000106]
Abstract
Contributors to this Special Issue of the Cognitive Behaviour Therapist have considered the kind of infrastructure that should be in place to best support and guide CBT supervisors, providing practical advice and extensive procedural guidance. Here we briefly summarize and discuss in turn the 10 papers within this Special Issue, including suggestions for further enhancements. The first paper, by Milne and Reiser, conceptualized this infrastructure in terms of an 'SOS' (supporting our supervisors) framework, from identifying supervision competencies to training, evaluation and feedback strategies. The next nine papers illustrate this framework with specific technical innovations, educational enhancements and procedural issues, or through comprehensive quality improvement systems, all designed to support supervisors. These papers suggest an assortment of workable infrastructure developments: two large-scale and comprehensive initiatives, some promising proposals and technologies, and a series of local, exploratory work. Collectively, they provide us with models for further developing evidence-based cognitive-behavioural supervision, and offer practical suggestions for giving supervisors the tools and support to maximize their supervisees' learning and to improve the associated client outcomes. Much research and development work remains to be done, and successful implementation will require institutional and political support, as well as cross-cultural adaptations. We conclude with an optimistic assessment of progress toward addressing some of the infrastructure improvements required to adequately support supervisors.