1. Schneider S, Hernandez R, Junghaenel DU, Jin H, Lee PJ, Gao H, Maupin D, Orriens B, Meijer E, Stone AA. Can you tell people's cognitive ability level from their response patterns in questionnaires? Behav Res Methods 2024. PMID: 38528247; DOI: 10.3758/s13428-024-02388-2.
Abstract
Questionnaires are ever-present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance for each item based on the difference between the observed and model-predicted response ("response error" scores). Analyzing data from 102 items (21 questionnaires) collected from a large-scale nationally representative sample of people aged 50+ years, we found robust associations of cognitive ability with a person's largest but not with their smallest response error scores (supporting the WPR), and stronger associations of cognitive ability with response errors for more complex than for less complex questions (supporting the TCH). Results replicated across two independent samples and six assessment waves. A latent variable of response errors estimated for the most complex items correlated .50 with a latent cognitive ability factor, suggesting that response patterns can be utilized to extract a rough indicator of general cognitive ability in survey research.
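The worst-performance-rule logic in this abstract can be illustrated with a short simulation (a simplified sketch, not the study's graded-response-model pipeline; the sample size and the exp(-0.5 · ability) noise link are assumptions for demonstration only):

```python
import numpy as np

# Sketch: persons with lower ability produce noisier questionnaire responses.
# "Response error" here is just |observed - predicted| noise; we check the
# WPR pattern that a person's LARGEST error tracks ability more closely than
# their smallest.
rng = np.random.default_rng(0)
n_persons, n_items = 500, 102          # 102 items, as in the study

ability = rng.standard_normal(n_persons)
noise_sd = np.exp(-0.5 * ability)      # assumed link: lower ability -> noisier responding
errors = np.abs(rng.standard_normal((n_persons, n_items)) * noise_sd[:, None])

worst = errors.max(axis=1)             # each person's largest response error
best = errors.min(axis=1)              # each person's smallest response error

r_worst = np.corrcoef(ability, worst)[0, 1]
r_best = np.corrcoef(ability, best)[0, 1]
print(f"r(ability, largest error)  = {r_worst:.2f}")
print(f"r(ability, smallest error) = {r_best:.2f}")
```

The asymmetry falls out of the simulation design: the maximum over 102 errors is a low-variance readout of each person's noise scale, while the minimum is near zero for everyone, so the largest errors carry most of the ability signal.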
Affiliation(s)
- Stefan Schneider
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA.
- Department of Psychology, University of Southern California, Los Angeles, CA, USA.
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA.
- Raymond Hernandez
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Doerte U Junghaenel
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA
- Haomiao Jin
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Pey-Jiuan Lee
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Hongxin Gao
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Danny Maupin
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Bart Orriens
- Center for Economic and Social Research, University of Southern California, Los Angeles, CA, USA
- Erik Meijer
- Center for Economic and Social Research, University of Southern California, Los Angeles, CA, USA
- Arthur A Stone
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
2. Welhaf MS, Kane MJ. A Nomothetic Span Approach to the Construct Validation of Sustained Attention Consistency: Re-Analyzing Two Latent-Variable Studies of Performance Variability and Mind-Wandering Self-Reports. Psychol Res 2024;88:39-80. PMID: 37314574; PMCID: PMC10805875; DOI: 10.1007/s00426-023-01820-0.
Abstract
The ability to sustain attention consistency is frequently assessed using either objective behavioral measures, such as reaction time (RT) variability, or subjective self-report measures, such as rates of task-unrelated thought (TUT). The current studies examined whether the individual-difference covariation in these measures provides a more construct valid assessment of attention consistency than does either alone. We argue that performance and self-report measures mutually validate each other; each measurement approach has its own sources of error, so their shared variance should best reflect the attention consistency construct. We reanalyzed two latent-variable studies where RT variability and TUTs were measured in multiple tasks (Kane et al. in J Exp Psychol Gen 145:1017-1048, 2016; Unsworth et al. in J Exp Psychol Gen 150:1303-1331, 2021), along with several nomological network constructs to test the convergent and discriminant validity of a general attention consistency factor. Confirmatory factor analyses assessing bifactor (preregistered) and hierarchical (non-preregistered) models suggested that attention consistency can be modeled as the shared variance among objective and subjective measures. This attention consistency factor was related to working memory capacity, attention (interference) control, processing speed, state motivation and alertness, and self-reported cognitive failures and positive schizotypy. Although bifactor models of general attention consistency provide the most compelling construct validity evidence for a specific ability to sustain attention, multiverse analyses of outlier decisions suggested they are less robust than hierarchical models. The results provide evidence for the general ability to sustain attention consistency and suggestions for improving its measurement.
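The shared-variance idea above can be illustrated with a toy two-indicator factor model (a sketch under assumed loadings, not the authors' preregistered bifactor CFA): when an objective RT-variability score and a self-reported TUT rate each load on one common "attention consistency" factor plus method-specific error, their observed correlation approximates the product of the two loadings.

```python
import numpy as np

# Toy common-factor demo: two indicators sharing one latent factor.
# Loadings l_rt and l_tut are hypothetical values chosen for illustration.
rng = np.random.default_rng(1)
n = 20000
consistency = rng.standard_normal(n)          # latent attention-consistency factor

l_rt, l_tut = 0.7, 0.6                        # assumed factor loadings
rt_var = l_rt * consistency + np.sqrt(1 - l_rt**2) * rng.standard_normal(n)
tut = l_tut * consistency + np.sqrt(1 - l_tut**2) * rng.standard_normal(n)

r = np.corrcoef(rt_var, tut)[0, 1]
print(f"observed r = {r:.2f}; implied by loadings = {l_rt * l_tut:.2f}")
```

This is why each measurement approach's idiosyncratic error drops out of the covariation: only the variance routed through the common factor contributes to the cross-method correlation.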
Affiliation(s)
- Matthew S Welhaf
- Department of Psychological and Brain Sciences, Washington University in St. Louis, CB 1125 One Brookings Drive, St. Louis, MO, 63130-4899, USA.
- Michael J Kane
- Department of Psychology, University of North Carolina at Greensboro, P.O. Box 26170, Greensboro, NC, 27402-6170, USA
3. Welhaf MS, Kane MJ. A combined experimental-correlational approach to the construct validity of performance-based and self-report-based measures of sustained attention. Atten Percept Psychophys 2024;86:109-145. PMID: 38012476; PMCID: PMC11134599; DOI: 10.3758/s13414-023-02786-2.
Abstract
The ability to sustain attention is often measured with either objective performance indicators, like within-person RT variability, or subjective self-reports, like mind wandering propensity. A more construct-valid approach, however, may be to assess the covariation in these performance and self-report measures, given that each is influenced by different sources of measurement error. If the correlation between performance-variability and self-report measures reflects the sustained attention construct, then task manipulations aimed at reducing the sustained attention demands of tasks should reduce the correlation between them (in addition to reducing mean levels of variability and mind wandering). The current study investigated this claim with a combined experimental-correlational approach. In two experiments (Ns ~ 1,500 each), participants completed tasks that either maximized or minimized the demand for sustained attention. Our demand manipulations successfully reduced the mean levels of sustained attention failures in both the objective and subjective measures, in both experiments. In neither experiment, however, did the covariation between these measures change as a function of the sustained attention demands of the tasks. We can therefore claim only minimal support for the construct validity of our measurement approach to sustained attention.
Affiliation(s)
- Matthew S. Welhaf
- Department of Psychology, University of North Carolina at Greensboro, P.O. Box 26170, Greensboro, NC 27402-6170, USA
- Michael J. Kane
- Department of Psychology, University of North Carolina at Greensboro, P.O. Box 26170, Greensboro, NC 27402-6170, USA
4. Draheim C, Pak R, Draheim AA, Engle RW. The role of attention control in complex real-world tasks. Psychon Bull Rev 2022;29:1143-1197. PMID: 35167106; PMCID: PMC8853083; DOI: 10.3758/s13423-021-02052-2.
Abstract
Working memory capacity is an important psychological construct, and many real-world phenomena are strongly associated with individual differences in working memory functioning. Although working memory and attention are intertwined, several studies have recently shown that individual differences in the general ability to control attention are more strongly predictive of human behavior than working memory capacity. In this review, we argue that researchers would therefore generally do better to study the role of attention control, rather than memory-based abilities, in explaining real-world behavior and performance in humans. The review begins with a discussion of relevant literature on the nature and measurement of both working memory capacity and attention control, including recent developments in the study of individual differences in attention control. We then selectively review existing literature on the role of both working memory and attention in various applied settings and explain, in each case, why a switch in emphasis to attention control is warranted. Topics covered include psychological testing, cognitive training, education, sports, police decision-making, human factors, and disorders within clinical psychology. The review concludes with general recommendations and best practices for researchers interested in conducting studies of individual differences in attention control.
Affiliation(s)
- Christopher Draheim
- Department of Psychology, Lawrence University, Appleton, WI, USA.
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA.
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Amanda A Draheim
- Department of Psychology, Lawrence University, Appleton, WI, USA
- Randall W Engle
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
5. Do Attentional Lapses Account for the Worst Performance Rule? J Intell 2021;10:2. PMID: 35076568; PMCID: PMC8788519; DOI: 10.3390/jintelligence10010002.
Abstract
The worst performance rule (WPR) describes the phenomenon that individuals’ slowest responses in a task are often more predictive of their intelligence than their fastest or average responses. To explain this phenomenon, it was previously suggested that occasional lapses of attention during task completion might be associated with particularly slow reaction times. Because less intelligent individuals should experience lapses of attention more frequently, their reaction time distributions should be more heavily skewed than those of more intelligent people. Consequently, the correlation between intelligence and reaction times should increase from the lowest to the highest quantile of the response time distribution. This attentional-lapses account has some intuitive appeal but had not yet been tested empirically. Using a hierarchical modeling approach, we investigated whether the WPR pattern would disappear when including different behavioral, self-report, and neural measurements of attentional lapses as predictors. In a sample of N = 85, we found that attentional lapses accounted for the WPR, but effect sizes of single covariates were mostly small to very small. We replicated these results in a reanalysis of a much larger previously published data set. Our findings lend empirical support to the attentional lapses account of the WPR.
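A minimal simulation of the lapse account sketched above (all parameter choices are hypothetical illustrations; the paper itself fits hierarchical models to real data): lapse trials add an exponential delay, lapse probability falls with a simulated "intelligence" score, and intelligence–RT correlations are then computed quantile by quantile.

```python
import numpy as np

# WPR via attentional lapses: correlations between intelligence and RT
# should grow in magnitude toward the slow end of the RT distribution,
# because only the slow quantiles are contaminated by lapse trials.
rng = np.random.default_rng(42)
n_persons, n_trials = 400, 200

iq = rng.standard_normal(n_persons)
lapse_p = 0.25 / (1 + np.exp(iq))                # assumed: lower iq -> more lapses
base = 450 + 30 * rng.standard_normal((n_persons, n_trials))   # normal RT core (ms)
lapse = rng.random((n_persons, n_trials)) < lapse_p[:, None]
rts = base + lapse * rng.exponential(300, (n_persons, n_trials))

qs = [0.1, 0.3, 0.5, 0.7, 0.9]
per_person_q = np.quantile(rts, qs, axis=1)      # shape (5, n_persons)
rs = [np.corrcoef(iq, q)[0, 1] for q in per_person_q]
for p, r in zip(qs, rs):
    print(f"RT quantile {p:.1f}: r with iq = {r:+.2f}")
```

Fast quantiles are dominated by lapse-free trials and barely depend on lapse rate, while slow quantiles reflect the lapse process directly, reproducing the WPR's increasing-correlation pattern.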
6. Silvia PJ, Beaty RE. When Figurative Language Goes off the Rails and under the Bus: Fluid Intelligence, Openness to Experience, and the Production of Poor Metaphors. J Intell 2021;9:2. PMID: 33466268; PMCID: PMC7838778; DOI: 10.3390/jintelligence9010002.
Abstract
The present research examined the varieties of poor metaphors to gain insight into the cognitive processes involved in generating creative ones. Drawing upon data from two published studies as well as a new sample, adults’ open-ended responses to different metaphor prompts were categorized. Poor metaphors fell into two broad clusters. Non-metaphors—responses that failed to meet the basic task requirements—consisted of “adjective slips” (describing the topic adjectivally instead of figuratively), “wayward attributes” (attributing the wrong property to the topic), and “off-topic idioms” (describing the wrong topic). Bad metaphors—real metaphors that were unanimously judged as uncreative—consisted of “exemplary exemplars” (vehicles that lacked semantic distance and thus seemed trite) and “retrieved clichés” (pulling a dead metaphor from memory). Overall, people higher in fluid intelligence (Gf) were more likely to generate a real metaphor, and their metaphor was less likely to be a bad one. People higher in Openness to Experience, in contrast, were more likely to generate real metaphors but not more or less likely to generate bad ones. Scraping the bottom of the response barrel suggests that creative metaphor production is a particularly complex form of creative thought.
Affiliation(s)
- Paul J. Silvia
- Department of Psychology, University of North Carolina at Greensboro, Greensboro, NC 27402, USA
- Roger E. Beaty
- Department of Psychology, Pennsylvania State University, 140 Moore Building, University Park, PA 16801, USA
7. Silvia PJ, Eddington KM, Harper KL, Burgin CJ, Kwapil TR. Reward-Seeking Deficits in Major Depression: Unpacking Appetitive Task Performance with Ex-Gaussian Response Time Variability Analysis. Motiv Sci 2020;7:219-224. PMID: 34504900; DOI: 10.1037/mot0000208.
Abstract
Major depressive disorder (MDD) has extensive ties to motivation, including impaired response time (RT) performance. Average RT, however, conflates response speed and variability, so RT differences can be complex. Because recent studies have shown inconsistent effects of MDD on RT variability, the present research sought to unpack RT performance with several key improvements: (1) a sample of adults (n = 78; 18 MDD, 60 Control) free of antidepressant medication; (2) an unambiguously appetitive task with appealing incentives at stake; and (3) ex-Gaussian RT modeling, which can unconfound speed and variability by estimating parameters for the mean (Mu) and standard deviation (Sigma) of the normal component and the mean of the exponential component (Tau). The groups had comparable Mu and Sigma parameters, but the MDD group had a significantly larger Tau, reflecting greater intraindividual RT variability. The findings suggest that MDD's effect on average RT can stem from greater intraindividual variability, not from overall slowness. Possible mechanisms, such as impaired executive processes in MDD and difficulties maintaining stable mental representations of incentives, are considered.
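The ex-Gaussian decomposition described above can be sketched with simple method-of-moments estimators (an illustration only — this is not the authors' fitting procedure, and the RT parameters below are invented): for Normal(mu, sigma) plus Exponential(tau), tau = (m3 / 2)^(1/3), mu = mean − tau, and sigma² = variance − tau², where m3 is the third central moment.

```python
import numpy as np

# Simulate a "control-like" and an "MDD-like" RT series that differ ONLY in
# the exponential tail (tau), then recover (mu, sigma, tau) from moments.
rng = np.random.default_rng(7)

def simulate_rts(mu, sigma, tau, n=4000):
    """Ex-Gaussian RTs: Normal(mu, sigma) plus an Exponential(tau) tail."""
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

def fit_ex_gaussian(rts):
    """Method-of-moments estimates of (mu, sigma, tau)."""
    m3 = np.mean((rts - rts.mean()) ** 3)        # third central moment
    tau = (m3 / 2) ** (1 / 3)
    sigma = np.sqrt(max(rts.var() - tau**2, 0.0))
    return rts.mean() - tau, sigma, tau

mu_c, sd_c, tau_c = fit_ex_gaussian(simulate_rts(400, 40, 60))    # control-like
mu_m, sd_m, tau_m = fit_ex_gaussian(simulate_rts(400, 40, 140))   # MDD-like tail
print(f"control-like: mu={mu_c:.0f}  sigma={sd_c:.0f}  tau={tau_c:.0f}")
print(f"MDD-like:     mu={mu_m:.0f}  sigma={sd_m:.0f}  tau={tau_m:.0f}")
```

Note how the two series share Mu and Sigma but differ sharply in Tau — the same pattern the abstract reports, where a group difference in average RT is driven entirely by the slow exponential tail rather than overall slowness.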
Affiliation(s)
- Paul J Silvia
- Department of Psychology, University of North Carolina at Greensboro
- Kari M Eddington
- Department of Psychology, University of North Carolina at Greensboro
- Kelly L Harper
- National Center for PTSD, Behavioral Science Division, VA Boston Healthcare System
- Chris J Burgin
- Department of Counseling and Psychology, Tennessee Tech University
- Thomas R Kwapil
- Department of Psychology, University of North Carolina at Greensboro