Olsen KD, Sukhodolsky D, Bikic A. Executive functioning in children with ADHD: Investigating the cross-method correlations between performance tests and rating scales. Scand J Child Adolesc Psychiatr Psychol 2024;12:1-9. [PMID: 38645570; PMCID: PMC11027034; DOI: 10.2478/sjcapp-2024-0001]
Abstract
Objective
Replicated evidence shows a weak or non-significant correlation between different methods of evaluating executive functions (EF). The current study investigates the association between rating scales and cognitive tests of EF in a sample of children with ADHD and executive dysfunction.
Method
The sample included 139 children (aged 6-13 years) diagnosed with ADHD and executive dysfunction. The children completed subtests of the Cambridge Neuropsychological Test Automated Battery (CANTAB). Parents completed the Behavior Rating Inventory of Executive Function (BRIEF) and the Children's Organizational Skills Scale (COSS).
Analysis
Pairwise Spearman correlations were calculated between the composite scales and the individual subscales of the cognitive tests and rating scales. In secondary analyses, pairwise Spearman correlations between all composite scales and subscales were computed, stratified by child sex and ADHD subtype.
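For illustration only, a minimal sketch of this type of cross-method Spearman correlation analysis (in Python with pandas and SciPy) is shown below; the data frame and column names (cantab_to, brief_ge, coss_to, adhd_subtype) are hypothetical placeholders, not the study's actual variables or code.

# Minimal sketch of a cross-method Spearman correlation analysis.
# Column names (cantab_to, brief_ge, coss_to, adhd_subtype) are hypothetical
# placeholders, not the study's actual variable names.
import pandas as pd
from scipy.stats import spearmanr

def cross_method_correlations(df, test_cols, rating_cols):
    """Pairwise Spearman correlations between cognitive-test scores
    and rating-scale scores."""
    rows = []
    for t in test_cols:
        for r in rating_cols:
            rho, p = spearmanr(df[t], df[r], nan_policy="omit")
            rows.append({"test": t, "rating": r, "rho": rho, "p": p})
    return pd.DataFrame(rows)

# Full-sample analysis:
# results = cross_method_correlations(df, ["cantab_to"], ["brief_ge", "coss_to"])
# Stratified analysis (e.g., by ADHD subtype):
# for subtype, sub in df.groupby("adhd_subtype"):
#     print(subtype, cross_method_correlations(sub, ["cantab_to"], ["brief_ge"]))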
Results
The correlation analyses between composite scores yielded no significant correlations: r=-.095, p=.289 for CANTAB TO versus BRIEF GE, and r=.042, p=.643 for CANTAB TO versus COSS TO. The analyses between all composite scales and subscales found one significant negative correlation (r=-.25, p<.01). When stratified by ADHD subtype, the ADHD-Inattentive group showed significant moderate negative cross-method correlations between the CANTAB and BRIEF composite scores (r=-.355, p=.014) and between subscales.
Discussion
It is possible that the different methods measure different underlying EF constructs. Respondent bias and differences in the ecological validity of the two measurement methods may also contribute to the weak cross-method agreement.
Conclusion
The analyses found no significant correlations between the composite scores of cognitive tests and rating scales. In research and clinical settings, cognitive tests and rating scales should not be expected to yield the same results. Future research might explore novel approaches to EF testing with greater ecological validity and design EF rating scales that capture EF behavior rather than EF cognition.