1. Rabi R, Chow R, Grange JA, Hasher L, Alain C, Anderson ND. Computational modeling of selective attention differentiates subtypes of amnestic mild cognitive impairment. Aging, Neuropsychology, and Cognition 2024:1-28. PMID: 39726302. DOI: 10.1080/13825585.2024.2442786.
Abstract
Individuals with amnestic mild cognitive impairment (aMCI), a prodromal stage of Alzheimer's disease and other dementias, show inhibition deficits in addition to episodic memory deficits. How the latent processes of selective attention (i.e., from perception to motor response) contribute to these inhibition deficits remains unclear. The present study therefore examined contributions of selective attention to aMCI-related inhibition deficits using computational modeling of attentional dynamics. Two models of selective attention, the dual-stage two-phase model and the shrinking spotlight model, were fitted to individual participant data from a flanker task completed by 34 individuals with single-domain aMCI (sdaMCI, 66-86 years), 20 individuals with multiple-domain aMCI (mdaMCI, 68-88 years), and 52 healthy controls (64-88 years). The mdaMCI group made more commission errors than controls. Best-fitting model parameters indicated inhibitory and early perceptual deficits in mdaMCI, and impaired spatial allocation of attention in both aMCI groups. Model parameters differentiated mdaMCI from sdaMCI and controls with moderate-to-high sensitivity and specificity. Impairments in perception and selective attention may contribute to inhibition deficits in both aMCI subtypes.
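The shrinking spotlight model mentioned above has a compact generative form: a Gaussian attentional spotlight over the display narrows over time, so flanker evidence dominates the drift rate early in a trial and target evidence dominates late. Below is a minimal simulation sketch of that idea (after White, Ratcliff, & Starns, 2011), not the authors' fitting code; the parameter values, position layout, and symmetric boundaries are illustrative assumptions.

```python
# Minimal shrinking-spotlight simulation for one incongruent flanker trial.
# Illustrative sketch only; parameter values are assumptions, not fitted.
import numpy as np
from scipy.stats import norm

def ssp_trial(p=1.0, sd_0=1.8, r_d=10.0, a=0.1, ter=0.3,
              sigma=0.1, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < a:
        sd = max(sd_0 - r_d * t, 0.01)            # spotlight narrows over time
        # attention on the 1-unit-wide target centered at position 0
        w_target = norm.cdf(0.5, 0, sd) - norm.cdf(-0.5, 0, sd)
        w_flank = 1.0 - w_target                  # remaining mass on flankers
        drift = p * w_target - p * w_flank        # flankers push the wrong way
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + ter, x > 0                         # (RT in s, correct response?)
```

Early in the trial the drift is flanker-driven (producing fast errors on incongruent trials); as sd shrinks, the drift turns toward the target, which is the signature behavior the model is fitted to.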
Affiliations
- Rahel Rabi: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada; Department of Psychology, York University, Ontario, Canada
- Ricky Chow: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada; Department of Psychology, York University, Ontario, Canada
- James A Grange: School of Psychology, Keele University, Staffordshire, UK
- Lynn Hasher: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Claude Alain: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Nicole D Anderson: Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
2. Rappaport BI, Shankman SA, Glazer JE, Buchanan SN, Weinberg A, Letkiewicz AM. Psychometrics of drift-diffusion model parameters derived from the Eriksen flanker task: Reliability and validity in two independent samples. Cognitive, Affective, & Behavioral Neuroscience 2024. PMID: 39443415. DOI: 10.3758/s13415-024-01222-8.
Abstract
The flanker task is a widely used measure of cognitive control abilities. Drift-diffusion modeling of flanker task behavior can yield separable parameters of cognitive control-related subprocesses, but the psychometric properties of these parameters are not well established. We examined the reliability and validity of four behavioral measures: (1) raw accuracy, (2) reaction time (RT) interference, (3) NIH Toolbox flanker score, and (4) two drift-diffusion model (DDM) parameters, drift rate and boundary separation, capturing evidence-accumulation efficiency and the speed-accuracy trade-off, respectively. Participants from two independent studies, one cross-sectional (N = 381) and one with three timepoints (N = 83), completed the flanker task while electroencephalography data were collected. Across both studies, drift rate and boundary separation demonstrated split-half and test-retest reliability comparable to accuracy, RT interference, and the NIH Toolbox flanker score, but better incremental convergent validity with psychophysiological measures (i.e., the error-related negativity; ERN) and neuropsychological measures of cognitive control than the other behavioral indices. Greater drift rate (i.e., faster and more accurate responses) to congruent and incongruent stimuli, and smaller boundary separation to incongruent stimuli, were related to (1) larger ERN amplitudes (in both studies) and (2) faster and more accurate inhibition and set-shifting over and above raw accuracy, reaction time, and NIH Toolbox flanker scores (in Study 1). Computational models such as the DDM can parse behavioral performance into subprocesses that exhibit reliability comparable to other scoring approaches but more meaningful relationships with other measures of cognitive control. These models may be applied to existing data to enhance the identification of cognitive control deficits in psychiatric disorders.
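For a concrete sense of what recovering drift rate and boundary separation involves, here is a hedged sketch using the closed-form EZ-diffusion equations (Wagenmakers, van der Maas, & Grasman, 2007). This is a simplified stand-in for illustration, not the authors' fitting pipeline, which may have been trial-level or hierarchical.

```python
# EZ-diffusion: map summary statistics of one condition to DDM parameters.
import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """pc: proportion correct (must be strictly between 0.5 and 1; apply an
    edge correction for 0, 0.5, or 1 before calling). vrt: variance of
    correct RTs (s^2). mrt: mean correct RT (s). Returns (v, a, ter)."""
    L = np.log(pc / (1 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25          # drift rate
    a = s**2 * L / v                             # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, mrt - mdt                       # ter = mrt - mean decision time
```

Applied separately to congruent and incongruent trials, this yields the per-condition drift rates and boundary separations whose reliability and validity the abstract evaluates.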
Affiliations
- Brent Ian Rappaport: Department of Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Stewart A Shankman: Department of Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- James E Glazer: Department of Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Savannah N Buchanan: Department of Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Anna Weinberg: Department of Psychology, McGill University, Montreal, Canada
- Allison M Letkiewicz: Department of Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
3. Chen CS, Vinogradov S. Personalized Cognitive Health in Psychiatry: Current State and the Promise of Computational Methods. Schizophr Bull 2024;50:1028-1038. PMID: 38934792. PMCID: PMC11349010. DOI: 10.1093/schbul/sbae108.
Abstract
BACKGROUND: Decades of research have firmly established that cognitive health and cognitive treatment services are a key need for people living with psychosis. However, many current clinical programs do not address this need, despite the essential role that an individual's cognitive and social cognitive capacities play in determining their real-world functioning. Preliminary practice-based research in the Early Psychosis Intervention Network shows that it is possible to develop and implement tools that delineate an individual's cognitive health profile and that help engage the client and the clinician in shared decision-making and treatment planning that includes cognitive treatments. These findings signify a promising shift toward personalized cognitive health.
STUDY DESIGN: Extending this early progress, we review the concept of interindividual variability in cognitive domains and processes in psychosis as the basis for offering personalized treatment plans. We present evidence from studies that have used traditional neuropsychological measures, as well as findings from emerging computational studies that leverage trial-by-trial behavioral data to illuminate the different latent strategies that individuals employ.
STUDY RESULTS: We posit that these computational techniques, when combined with traditional cognitive assessments, can enrich our understanding of individual differences in treatment needs, which in turn can guide ever more personalized interventions.
CONCLUSION: As we find clinically relevant ways to decompose maladaptive behaviors into separate latent cognitive elements captured by model parameters, the ultimate goal is to develop and implement approaches that empower clients and their clinical providers to leverage individuals' existing learning capacities to improve their cognitive health and well-being.
Affiliations
- Cathy S Chen: Department of Psychiatry & Behavioral Sciences, University of Minnesota Medical School, Minneapolis, MN, USA
- Sophia Vinogradov: Department of Psychiatry & Behavioral Sciences, University of Minnesota Medical School, Minneapolis, MN, USA
4. Löffler C, Frischkorn GT, Hagemann D, Sadus K, Schubert AL. The common factor of executive functions measures nothing but speed of information uptake. Psychological Research 2024;88:1092-1114. PMID: 38372769. PMCID: PMC11143038. DOI: 10.1007/s00426-023-01924-7.
Abstract
There is an ongoing debate about the unity and diversity of executive functions and their relationship with other cognitive abilities such as processing speed, working memory capacity, and intelligence. Specifically, the initially proposed unity and diversity of executive functions is challenged by discussions about (1) the factorial structure of executive functions and (2) the unfavorable psychometric properties of measures of executive functions. The present study addressed two methodological limitations of previous work that may explain conflicting results: the inconsistent use of (a) accuracy-based vs. reaction-time-based indicators and (b) average performance vs. difference scores. In a sample of 148 participants who completed a battery of executive function tasks, we tried to replicate the three-factor model of the three commonly distinguished executive functions (shifting, updating, and inhibition) by adopting the data-analytical choices of previous work. After addressing the identified methodological limitations using drift-diffusion modeling, we found only one common factor of executive functions, which was fully accounted for by individual differences in the speed of information uptake; no variance specific to executive functions remained. Our results suggest that individual differences common to all executive function tasks measure nothing more than individual differences in the speed of information uptake. We therefore suggest refraining from using typical executive function tasks to study substantive research questions, as these tasks are not valid for measuring individual differences in executive functions.
Affiliations
- Christoph Löffler: Institute of Psychology, Heidelberg University, Heidelberg, Germany; Department of Psychology, University of Mainz, Mainz, Germany
- Dirk Hagemann: Institute of Psychology, Heidelberg University, Heidelberg, Germany
- Kathrin Sadus: Institute of Psychology, Heidelberg University, Heidelberg, Germany
5. Viviani G, Visalli A, Finos L, Vallesi A, Ambrosini E. A comparison between different variants of the spatial Stroop task: The influence of analytic flexibility on Stroop effect estimates and reliability. Behav Res Methods 2024;56:934-951. PMID: 36894759. PMCID: PMC10830653. DOI: 10.3758/s13428-023-02091-8.
Abstract
The spatial Stroop task measures the ability to resolve interference between relevant and irrelevant spatial information. We recently proposed a four-choice spatial Stroop task that offers methodological advantages over the original color-word verbal Stroop task, requiring participants to indicate the direction of an arrow while ignoring its position in one of the screen corners. However, its peripheral spatial arrangement might represent a methodological weakness and could introduce experimental confounds. Thus, aiming to improve our "Peripheral" spatial Stroop, we designed and made available five novel spatial Stroop tasks (Perifoveal, Navon, Figure-Ground, Flanker, and Saliency) in which the stimuli appeared at the center of the screen. In a within-subjects online study, we compared the six versions to identify which task produced the largest but also the most reliable and robust Stroop effect. Although internal reliability is frequently overlooked, its estimation is fundamental, also in light of the recently proposed reliability paradox. Data analyses were performed using both the classical general linear model approach and two multilevel modeling approaches (linear mixed models and random coefficient analysis), which served to estimate the Stroop effect more accurately by explaining intra-subject, trial-by-trial variability. We then assessed our results based on their robustness to such analytic flexibility. Overall, our results indicate that the Perifoveal spatial Stroop is the best alternative task for its statistical properties and methodological advantages. Interestingly, the Peripheral and Perifoveal Stroop effects were not only the largest, but also those with the highest and most robust internal reliability.
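As a concrete illustration of the internal-reliability estimation the abstract emphasizes, one common recipe is a permutation-based split-half correlation with Spearman-Brown correction. The sketch below is a generic version under assumed input structures, not the authors' analysis code.

```python
# Permutation-based split-half reliability of an interference effect.
import numpy as np

def split_half_reliability(data, n_splits=1000, rng=None):
    """data: list of (congruent_rts, incongruent_rts) arrays, one pair per
    participant. Returns the mean Spearman-Brown-corrected split-half
    correlation of the interference effect over random splits."""
    rng = rng or np.random.default_rng()
    rs = np.empty(n_splits)
    for s in range(n_splits):
        eff1, eff2 = [], []
        for con_rt, inc_rt in data:
            c = rng.permutation(con_rt)           # random half-split per
            i = rng.permutation(inc_rt)           # condition and participant
            hc, hi = len(c) // 2, len(i) // 2
            eff1.append(i[:hi].mean() - c[:hc].mean())  # effect, half 1
            eff2.append(i[hi:].mean() - c[hc:].mean())  # effect, half 2
        r = np.corrcoef(eff1, eff2)[0, 1]
        rs[s] = 2 * r / (1 + r)                   # Spearman-Brown correction
    return rs.mean()
```

Averaging over many random splits avoids the arbitrariness of a single odd-even split, which is one of the analytic-flexibility choices the paper examines.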
Affiliations
- Giada Viviani: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center, University of Padova, Padova, Italy
- Antonino Visalli: Department of Neuroscience, University of Padova, Padova, Italy
- Livio Finos: Padova Neuroscience Center, University of Padova, Padova, Italy; Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- Antonino Vallesi: Department of Neuroscience, University of Padova, Padova, Italy; Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- Ettore Ambrosini: Department of Neuroscience, University of Padova, Padova, Italy; Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy; Department of General Psychology, University of Padova, Padova, Italy
6. Grange JA, Schuch S. A spurious correlation between difference scores in evidence-accumulation model parameters. Behav Res Methods 2023;55:3348-3369. PMID: 36138317. PMCID: PMC10615941. DOI: 10.3758/s13428-022-01956-8.
Abstract
Evidence-accumulation models are a useful tool for investigating the cognitive processes that give rise to behavioural data patterns in reaction times (RTs) and error rates. In their simplest form, evidence-accumulation models include three parameters: the average rate of evidence accumulation over time (drift rate) and the amount of evidence that needs to be accumulated before a response is selected (boundary) both characterise the response-selection process; a third parameter summarises all processes before and after response selection (non-decision time). Researchers often compute experimental effects as simple difference scores between two within-subject conditions, and such difference scores can also be computed on model parameters. In the present paper, we report spurious correlations between such model-parameter difference scores, both in empirical data and in computer simulations. The most pronounced spurious effect is a negative correlation between the boundary difference and the non-decision difference, which amounts to r = -.70 or larger. In the simulations, we observed this spurious negative correlation only when either (a) there was no true difference in model parameters between simulated experimental conditions, or (b) only drift rate was manipulated between simulated experimental conditions; when a true difference existed in boundary separation, non-decision time, or all three main parameters, the correlation disappeared. We suggest that care should be taken when using evidence-accumulation model difference scores for correlational approaches, because the parameter difference scores can correlate in the absence of any true inter-individual differences at the population level.
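The core demonstration can be reproduced in miniature: simulate two conditions from identical true parameters for every subject, fit each condition separately, and correlate the parameter difference scores. The sketch below is an assumption-laden illustration, not the paper's simulation code; it uses a plain Euler random walk and reuses the ez_diffusion() helper sketched under entry 2 (assumed to be in scope).

```python
# Spurious difference-score correlation from estimation noise alone.
# Pure-Python Euler loop: transparent but slow; sizes are illustrative.
import numpy as np

def simulate_ddm(v=0.3, a=0.12, ter=0.3, n=200, s=0.1, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    rts, correct = [], []
    for _ in range(n):
        x, t = a / 2, 0.0                          # unbiased start point
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ter)
        correct.append(x >= a)
    return np.array(rts), np.array(correct)

rng = np.random.default_rng(2022)
d_bound, d_ndt = [], []
for _subject in range(100):
    fits = []
    for _condition in range(2):                    # identical true parameters
        rt, acc = simulate_ddm(rng=rng)
        pc = min(acc.mean(), 1 - 0.5 / len(acc))   # edge-correct perfect accuracy
        v, a_hat, ter_hat = ez_diffusion(pc, rt[acc].var(), rt[acc].mean())
        fits.append((a_hat, ter_hat))
    d_bound.append(fits[1][0] - fits[0][0])
    d_ndt.append(fits[1][1] - fits[0][1])
print(np.corrcoef(d_bound, d_ndt)[0, 1])           # expected: clearly negative
```

The mechanism is visible in the estimator itself: sampling noise that inflates the estimated boundary also inflates the implied decision time, which deflates estimated non-decision time, coupling the two difference scores without any true individual differences.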
7. Kucina T, Wells L, Lewis I, de Salas K, Kohl A, Palmer MA, Sauer JD, Matzke D, Aidman E, Heathcote A. Calibration of cognitive tests to address the reliability paradox for decision-conflict tasks. Nat Commun 2023;14:2234. PMID: 37076456. PMCID: PMC10115879. DOI: 10.1038/s41467-023-37777-2. Open access.
Abstract
Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Over five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out.
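To make the trial-count claim concrete: the Spearman-Brown prophecy formula predicts how reliability scales when a test is lengthened or shortened, and it is the usual yardstick behind statements like "reliable in under 100 trials." A generic one-liner (an illustration, not the authors' calibration procedure):

```python
def spearman_brown(r, k):
    """Predicted reliability when a test is lengthened by factor k."""
    return k * r / (1 + (k - 1) * r)

# A measure with r = .50 at 100 trials would be predicted to need about
# 300 trials (k = 3) to reach r = .75:
print(spearman_brown(0.50, 3))   # -> 0.75
```

Achieving good reliability within 100 trials therefore means the calibrated tasks deliver per-trial information that standard versions only approach with several times as many trials.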
Affiliations
- Talira Kucina: School of Psychological Sciences, University of Tasmania, Hobart, TAS, Australia
- Lindsay Wells: Games and Creative Technologies Research Group, University of Tasmania, Hobart, TAS, Australia
- Ian Lewis: Games and Creative Technologies Research Group, University of Tasmania, Hobart, TAS, Australia
- Kristy de Salas: Games and Creative Technologies Research Group, University of Tasmania, Hobart, TAS, Australia
- Amelia Kohl: School of Psychological Sciences, University of Tasmania, Hobart, TAS, Australia
- Matthew A Palmer: School of Psychological Sciences, University of Tasmania, Hobart, TAS, Australia
- James D Sauer: School of Psychological Sciences, University of Tasmania, Hobart, TAS, Australia
- Dora Matzke: Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Eugene Aidman: Defence Science Technology Group, Canberra, NSW, Australia; School of Biomedical Sciences & Pharmacy, University of Newcastle, Newcastle, NSW, Australia
- Andrew Heathcote: Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands; School of Psychology, University of Newcastle, Newcastle, NSW, Australia
8. Liesefeld HR, Janczyk M. Same same but different: Subtle but consequential differences between two measures to linearly integrate speed and accuracy (LISAS vs. BIS). Behav Res Methods 2023;55:1175-1192. PMID: 35595937. PMCID: PMC10125931. DOI: 10.3758/s13428-022-01843-2.
Abstract
Condition-specific speed-accuracy trade-offs (SATs) are a pervasive issue in experimental psychology, because they sometimes render impossible an unambiguous interpretation of experimental effects on either mean response times (mean RT) or percentage of correct responses (PC). For between-participants designs, we have recently validated a measure (Balanced Integration Score, BIS) that integrates standardized mean RT and standardized PC and thereby controls for cross-group variation in SAT. Another related measure (Linear Integrated Speed-Accuracy Score, LISAS) did not fulfill this specific purpose in our previous simulation study. Given the widespread and seemingly interchangeable use of the two measures, we here illustrate the crucial differences between LISAS and BIS related to their respective choice of standardization variance. We also disconfirm the recently articulated hypothesis that the differences in the behavior of the two combined performance measures observed in our previous simulation study were due to our choice of a between-participants design. In addition, we demonstrate why a previous attempt to validate BIS (and LISAS) for within-participants designs has failed, pointing out several consequential issues in the respective simulations and analyses. In sum, the present study clarifies the differences between LISAS and BIS, demonstrates that the choice of the variance used for standardization is crucial, provides further guidance on the calculation and use of BIS, and refutes the claim that BIS is not useful for attenuating condition-specific SATs in within-participants designs.
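The crux, the choice of standardization variance, is easiest to see side by side: LISAS standardizes within each participant, BIS across the sample. A hedged sketch following Vandierendonck (2017) for LISAS and Liesefeld and Janczyk (2019) for BIS, simplified and with assumed input shapes:

```python
import numpy as np

def lisas(rt, err, cond):
    """LISAS for one participant and one condition: condition mean RT plus
    the condition error rate weighted by this participant's *own*
    trial-level SD ratio. rt: per-trial RTs; err: per-trial 0/1 errors;
    cond: boolean mask selecting the condition's trials."""
    s_rt, s_pe = rt.std(ddof=1), err.std(ddof=1)
    return rt[cond].mean() + (s_rt / s_pe) * err[cond].mean()

def bis(mean_rt, pc):
    """BIS for one condition: z-scores computed across the *sample*.
    mean_rt, pc: arrays with one value per participant.
    Higher BIS = better performance."""
    z = lambda x: (x - x.mean()) / x.std(ddof=1)
    return z(pc) - z(mean_rt)
```

Because BIS borrows its scaling from between-participant variability while LISAS scales within each person, the two measures respond differently when groups or conditions differ in how they trade speed against accuracy, which is the paper's central point.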
Affiliations
- Heinrich R Liesefeld: Department of Psychology, University of Bremen, Hochschulring 18, D-28359 Bremen, Germany
- Markus Janczyk: Department of Psychology, University of Bremen, Hochschulring 18, D-28359 Bremen, Germany
9. Robinson MM, Steyvers M. Linking computational models of two core tasks of cognitive control. Psychol Rev 2023;130:71-101. PMID: 36227284. PMCID: PMC10257386. DOI: 10.1037/rev0000395.
Abstract
Cognitive control refers to the ability to maintain goal-relevant information in the face of distraction, making it a core construct for understanding human thought and behavior. There is great theoretical and practical value in building theories that can explain or predict variations in cognitive control as a function of experimental manipulations or individual differences. A critical step toward building such theories is determining which latent constructs are shared between laboratory tasks designed to measure cognitive control. In the current work, we examine this question in a novel way by formally linking computational models of two canonical cognitive control tasks: the Eriksen flanker task and the task-switching task. Specifically, we examine whether model parameters that capture cognitive control processes in one task can be swapped across models to make predictions about individual differences in performance on another task. We apply our modeling and analysis to a large-scale data set from an online cognitive training platform, which optimizes our ability to detect individual differences in the data. Our results suggest that the flanker and task-switching tasks probe common control processes. This finding supports the view that higher-level cognitive control processes (as opposed to solely speed-accuracy trade-off strategies, perceptual processing, or motor response speed) are shared across the two tasks. We discuss how our computational-modeling substitution approach addresses limitations of prior efforts to relate performance across different cognitive control tasks, and how our findings inform current theories of cognitive control.
Affiliation(s)
| | - Mark Steyvers
- Department of Cognitive Sciences, University of California, Irvine
| |
10. Draheim C, Pak R, Draheim AA, Engle RW. The role of attention control in complex real-world tasks. Psychon Bull Rev 2022;29:1143-1197. PMID: 35167106. PMCID: PMC8853083. DOI: 10.3758/s13423-021-02052-2.
Abstract
Working memory capacity is an important psychological construct, and many real-world phenomena are strongly associated with individual differences in working memory functioning. Although working memory and attention are intertwined, several studies have recently shown that individual differences in the general ability to control attention are more strongly predictive of human behavior than working memory capacity. In this review, we argue that researchers would therefore generally be better served by studying the role of attention control, rather than memory-based abilities, in explaining real-world behavior and performance in humans. The review begins with a discussion of relevant literature on the nature and measurement of both working memory capacity and attention control, including recent developments in the study of individual differences in attention control. We then selectively review existing literature on the role of both working memory and attention in various applied settings and explain, in each case, why a switch in emphasis to attention control is warranted. Topics covered include psychological testing, cognitive training, education, sports, police decision-making, human factors, and disorders within clinical psychology. The review concludes with general recommendations and best practices for researchers interested in conducting studies of individual differences in attention control.
Affiliations
- Christopher Draheim: Department of Psychology, Lawrence University, Appleton, WI, USA; School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Richard Pak: Department of Psychology, Clemson University, Clemson, SC, USA
- Amanda A Draheim: Department of Psychology, Lawrence University, Appleton, WI, USA
- Randall W Engle: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
11. Schuch S, Philipp AM, Maulitz L, Koch I. On the reliability of behavioral measures of cognitive control: retest reliability of task-inhibition effect, task-preparation effect, Stroop-like interference, and conflict adaptation effect. Psychological Research 2021;86:2158-2184. PMID: 34921344. PMCID: PMC8683338. DOI: 10.1007/s00426-021-01627-x.
Abstract
This study examined the retest and split-half reliability of four common behavioral measures of cognitive control. In Experiment 1 (N = 96), we examined N-2 task repetition costs as a marker of task-level inhibition, and the cue-stimulus interval (CSI) effect as a marker of time-based task preparation. In Experiment 2 (N = 48), we examined a Stroop-like face-name interference effect as a measure of distractor interference control, and the sequential congruency effect ("conflict adaptation effect") as a measure of conflict-triggered adaptation of cognitive control. In both experiments, the measures were assessed in two sessions on the same day, separated by a 10-minute unrelated filler task. We observed substantial experimental effects with medium to large effect sizes. At the same time, split-half reliabilities were moderate, and retest reliabilities were poor, for most measures except the CSI effect. Retest reliability of the Stroop-like effect improved when considering only trials preceded by congruent trials. Together, the data suggest that these cognitive control measures are well suited for assessing group-level effects of cognitive control. Yet, except for the CSI effect, they do not seem suitable for reliably assessing interindividual differences in the strength of cognitive control, and they are therefore not suited for correlational approaches. We discuss possible reasons for the discrepancy between robustness at the group level and reliability at the level of interindividual differences.
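For concreteness, the sequential congruency effect mentioned above is typically computed from a trial-ordered table as the congruency effect following congruent trials minus the effect following incongruent trials. The sketch below is a generic illustration with assumed column names, not the authors' analysis code.

```python
import pandas as pd

def conflict_adaptation(df):
    """Sequential congruency effect for one participant.
    df: trial-ordered DataFrame with columns 'rt' (seconds) and
    'congruent' (bool). Returns (I - C after congruent trials) minus
    (I - C after incongruent trials); positive = conflict adaptation."""
    d = df.copy()
    d["prev_congruent"] = d["congruent"].shift(1)   # previous trial's type
    d = d.dropna(subset=["prev_congruent"])         # drop the first trial
    d["prev_congruent"] = d["prev_congruent"].astype(bool)
    m = d.groupby(["prev_congruent", "congruent"])["rt"].mean()
    effect_after_c = m.loc[(True, False)] - m.loc[(True, True)]
    effect_after_i = m.loc[(False, False)] - m.loc[(False, True)]
    return effect_after_c - effect_after_i
```

Because this score is a difference of differences, its trial-level noise is compounded twice, which is one intuition for why such measures show the poor retest reliability the abstract reports.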
Affiliations
- Stefanie Schuch: Institute of Psychology, RWTH Aachen University, Jaegerstrasse 17/19, 52066 Aachen, Germany
- Andrea M Philipp: Institute of Psychology, RWTH Aachen University, Jaegerstrasse 17/19, 52066 Aachen, Germany
- Luisa Maulitz: Institute of Psychology, RWTH Aachen University, Jaegerstrasse 17/19, 52066 Aachen, Germany
- Iring Koch: Institute of Psychology, RWTH Aachen University, Jaegerstrasse 17/19, 52066 Aachen, Germany