1. Goldstein HW, Yusko KP, Scherbaum CA, Larson EC. Reducing Black–White Racial Differences on Intelligence Tests Used in Hiring for Public Safety Jobs. J Intell 2023; 11:62. [PMID: 37103247] [PMCID: PMC10143281] [DOI: 10.3390/jintelligence11040062]
Abstract
This paper explores whether a diversity and inclusion strategy focused on using modern intelligence tests can assist public safety organizations in hiring a talented diverse workforce. Doing so may offer strategies for mitigating the issues of systematic racism with which these occupations have historically struggled. Past meta-analytic research shows that traditional forms of intelligence tests, which are often used in this sector, have not consistently demonstrated predictive validity but have negatively impacted Black candidates. As an alternative, we examine a modern intelligence test that consists of novel unfamiliar cognitive problems that test takers must solve without relying on their prior experience. Across six studies of varying public safety jobs (e.g., police, firefighter) in different organizations, we found a pattern of results that supports the criterion-related validity of the modern intelligence test. In addition to consistently predicting job performance and training success, the modern intelligence test also substantially mitigated the observed Black–White group differences. The implications of these findings are discussed in terms of how to alter the legacy of I/O psychology and human resource fields when it comes to our impact on facilitating employment opportunities for Black citizens, particularly in public safety positions.
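For readers outside selection research, the "group differences" discussed here are conventionally summarized as a standardized mean difference; a generic formulation (not the specific statistic the authors report) is
\[
d = \frac{\bar{X}_{\text{White}} - \bar{X}_{\text{Black}}}{SD_{\text{pooled}}},
\]
where traditional cognitive ability tests have often been reported to show differences near one standard deviation, and "mitigation" here means a markedly smaller d at comparable criterion-related validity.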

2. Perfect is the enemy of good enough: Putting the side effects of intelligence testing in perspective. Industrial and Organizational Psychology: Perspectives on Science and Practice 2022. [DOI: 10.1017/iop.2021.126]

3.

4. Kell HJ. Noncognitive proponents' conflation of “cognitive skills” and “cognition” and its implications. Personality and Individual Differences 2018. [DOI: 10.1016/j.paid.2018.05.025]

5. Nye CD, Chernyshenko OS, Stark S, Drasgow F, Phillips HL, Phillips JB, Campbell JS. More than g: Evidence for the Incremental Validity of Performance-Based Assessments for Predicting Training Performance. Applied Psychology: An International Review 2018. [DOI: 10.1111/apps.12171]

6. Eid M, Krumm S, Koch T, Schulze J. Bifactor Models for Predicting Criteria by General and Specific Factors: Problems of Nonidentifiability and Alternative Solutions. J Intell 2018; 6:E42. [PMID: 31162469] [PMCID: PMC6480823] [DOI: 10.3390/jintelligence6030042]
Abstract
The bifactor model is a widely applied model to analyze general and specific abilities. Extensions of bifactor models additionally include criterion variables. In such extended bifactor models, the general and specific factors can be correlated with criterion variables. Moreover, the influence of general and specific factors on criterion variables can be scrutinized in latent multiple regression models that are built on bifactor measurement models. This study employs an extended bifactor model to predict mathematics and English grades by three facets of intelligence (number series, verbal analogies, and unfolding). We show that, if the observed variables do not differ in their loadings, extended bifactor models are not identified and not applicable. Moreover, we reveal that standard errors of regression weights in extended bifactor models can be very large and, thus, lead to invalid conclusions. A formal proof of the nonidentification is presented. Subsequently, we suggest alternative approaches for predicting criterion variables by general and specific factors. In particular, we illustrate how (1) composite ability factors can be defined in extended first-order factor models and (2) how bifactor(S-1) models can be applied. The differences between first-order factor models and bifactor(S-1) models for predicting criterion variables are discussed in detail and illustrated with the empirical example.
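As a compact sketch of the model class under discussion (generic notation, not the authors' exact specification), a bifactor measurement model extended by a criterion regression can be written as
\[
Y_{ij} = \lambda^{G}_{ij}\, G + \lambda^{S}_{ij}\, S_j + \varepsilon_{ij}, \qquad
C = \beta_0 + \beta_G\, G + \sum_{j} \beta_{S_j}\, S_j + \zeta,
\]
where $Y_{ij}$ is indicator $i$ of facet $j$, the general factor $G$ and the specific factors $S_j$ are mutually uncorrelated, and $C$ is the criterion. The identification problem described in the abstract concerns recovering $\beta_G$ and the $\beta_{S_j}$ when the loadings do not differ across indicators.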
Affiliation(s)
- Michael Eid
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany.
- Stefan Krumm
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany.
- Tobias Koch
- Methodology Center, Leuphana Universität Lüneburg, 21335 Lüneburg, Germany.
- Julian Schulze
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany.

7. Truninger M, Fernández-i-Marín X, Batista-Foguet JM, Boyatzis RE, Serlavós R. The Power of EI Competencies Over Intelligence and Individual Performance: A Task-Dependent Model. Front Psychol 2018; 9:1532. [PMID: 30245651] [PMCID: PMC6137254] [DOI: 10.3389/fpsyg.2018.01532]
Abstract
Prior research on emotional intelligence (EI) has highlighted the use of incremental models that assume EI and general intelligence (or g) make independent contributions to performance. Questioning this assumption, we study EI's moderating power over the relationship between g and individual performance by designing and testing a task-dependent interaction model. Reconciling divergent findings in previous studies, we propose that whenever social tasks are at stake, g has a greater effect on performance as EI increases. By contrast, in analytic tasks, a compensatory (or negative) interaction is expected, whereby at higher levels of EI, g contributes to performance to a lesser extent. Based on a behavioral approach to EI, using 360-degree assessments of EI competencies, our findings show that EI moderates the effect of g on the classroom performance of 864 MBA business executives. Whilst in analytic tasks g has a stronger effect on performance at lower levels of EI competencies, our data fall short of showing a positive interaction of EI and g in affecting performance on social tasks. Contributions and implications for research and practice are discussed.
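The interaction logic tested here can be restated as a moderated regression (an illustrative formulation, not the authors' exact model):
\[
\text{Performance} = \beta_0 + \beta_1\, g + \beta_2\, \text{EI} + \beta_3\, (g \times \text{EI}) + \varepsilon,
\]
with the prediction that $\beta_3 > 0$ for social tasks (g pays off more at higher EI) and $\beta_3 < 0$ for analytic tasks (the compensatory pattern).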
Affiliation(s)
- Margarida Truninger
- ESADE Business School, Barcelona, Spain
- Center for Creative Leadership, Greensboro, NC, United States
- Richard E. Boyatzis
- Department of Organizational Behavior, Case Western Reserve University, Cleveland, OH, United States

8. More Than g-Factors: Second-Stratum Factors Should Not Be Ignored. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. [DOI: 10.1017/iop.2015.66]
Abstract
Ree, Carretta, and Teachout (2015) outlined a compelling argument for the pervasiveness of dominant general factors (DGFs) in psychological measurement. We agree that DGFs are important and that they are found for various constructs (e.g., cognitive abilities, work withdrawal), especially when an “unrotated principal components” analysis is conducted (Ree et al., p. 8). When studying hierarchical constructs, however, a narrow emphasis on uncovering DGFs would be incomplete at best and detrimental at worst. This commentary largely echoes the arguments made by Wee, Newman, and Joseph (2014), and Schneider and Newman (2015), who provided reasons for considering second-stratum cognitive abilities. We believe these same arguments in favor of second-stratum factors in the ability domain can be applied to hierarchical constructs more generally.

9. Poropat AE. The validity of Performance Environment Perception Scales: Environmental predictors of citizenship performance. Journal of Management & Organization 2015. [DOI: 10.5172/jmo.16.1.180]
Abstract
This research examined the validity of Performance Environment Perception Scales (PEPS), a new instrument designed to assess performance-relevant aspects of the work environment. A sample of 156 employees of an Australian university completed PEPS and their supervisors rated their task and citizenship performance. Confirmatory Factor Analysis showed PEPS to have a valid factor structure, and PEPS were found to be significantly correlated with citizenship performance, but not with task performance. Although this finding is consistent with theoretical predictions, PEPS are apparently the first measures of work environment perceptions that have confirmed this. Thus PEPS show promise as measures for use in future research and organizational development projects that focus on relationships between the work environment and performance. Limitations of the research and implications for the validity of PEPS, as well as for future research and practice, are discussed.

10. Scherbaum CA, Goldstein HW, Yusko KP, Ryan R, Hanges PJ. Intelligence 2.0: Reestablishing a Research Program on g in I–O Psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. [DOI: 10.1111/j.1754-9434.2012.01419.x]
Abstract
Intelligence (i.e., g, general mental ability) is an individual difference that is arguably more important than ever for success in the constantly changing, ever more complex world of business (Boal, 2004; Gatewood, Field, & Barrick, 2011). Although the field of industrial–organizational (I–O) psychology initially made substantial contributions to the study of intelligence and its use in applied settings (e.g., Hunter, 1980; Schmidt & Hunter, 1981), we have done relatively little in recent times to study the nature of the intelligence construct and its measurement. Instead, we have focused predominantly on using intelligence to predict performance outcomes and examine racial subgroup differences on intelligence test scores. Although the field of I–O psychology continues to approach intelligence at a surface level, other fields (e.g., clinical psychology, developmental and educational research, and neuropsychology) have continued to study this construct with greater depth and have consequently made more substantial progress in understanding this critical and complex construct. The purpose of this article is to note this lack of progress in I–O psychology and to challenge our field to mount new research initiatives on this critical construct.

11. Murphy KR. Content Validation Is Useful for Many Things, but Validity Isn't One of Them. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. [DOI: 10.1111/j.1754-9434.2009.01173.x]
Abstract
Content-oriented validation strategies attempt to establish the validity of selection tests as predictors of performance by comparing the content of the tests with the content of the job. These comparisons turn out to have little if any bearing on the predictive validity of selection tests. There is little empirical support for the hypothesis that the match between job content and test content influences validity, and there are often structural factors in selection (e.g., positive correlations among selection tests) that strongly limit the possible influence of test content on validity. Comparisons between test content and job content have important implications for the acceptability of testing, the defensibility of tests in legal proceedings, and the transparency of test development and validation, but these comparisons have little if any bearing on validity.

12. Lang JWB, Kersting M, Hülsheger UR, Lang J. General Mental Ability, Narrower Cognitive Abilities, and Job Performance: The Perspective of the Nested-Factors Model of Cognitive Abilities. Personnel Psychology 2010. [DOI: 10.1111/j.1744-6570.2010.01182.x]

13. Poropat AE. The validity of Performance Environment Perception Scales: Environmental predictors of citizenship performance. Journal of Management & Organization 2010. [DOI: 10.1017/s1833367200002352]

14. Feredoes E, Tononi G, Postle BR. Direct evidence for a prefrontal contribution to the control of proactive interference in verbal working memory. Proc Natl Acad Sci U S A 2006; 103:19530-4. [PMID: 17151200] [PMCID: PMC1748259] [DOI: 10.1073/pnas.0604509103]
Abstract
Controlling the effects of proactive interference (PI), the deleterious effect of prior mental activity on current memory representations, is believed to be a key function of the prefrontal cortex. This view is supported by neuroimaging evidence for a correlation between the longer reaction times caused by high PI conditions of a working memory task and increased activity in left inferior frontal gyrus (IFG) of the prefrontal cortex. An alternative that has never been ruled out, however, is that this left IFG effect may merely reflect sensitivity to such nonspecific factors as difficulty and/or time on task. To resolve this confound, we applied the interference methodology of repetitive transcranial magnetic stimulation (rTMS) to the left IFG and two control regions while subjects performed delayed letter recognition. rTMS was guided with high-resolution magnetic resonance images and was time-locked to the onset of the memory probe. The effect of rTMS, a disruption of accuracy restricted to high-PI probes, was specific to the left IFG. These results demonstrate that unpredictable, phasic disruption of the left IFG selectively disrupts control of responses to high-conflict verbal working memory probes, and they conclusively reject nonspecific alternative accounts.
Affiliation(s)
- Eva Feredoes
- Department of Psychology, University of Wisconsin, Madison, WI 53703, USA.

15. Kuncel NR, Hezlett SA, Ones DS. Academic performance, career potential, creativity, and job performance: can one construct predict them all? J Pers Soc Psychol 2004; 86:148-61. [PMID: 14717633] [DOI: 10.1037/0022-3514.86.1.148]
Abstract
This meta-analysis addresses the question of whether 1 general cognitive ability measure developed for predicting academic performance is valid for predicting performance in both educational and work domains. The validity of the Miller Analogies Test (MAT; W. S. Miller, 1960) for predicting 18 academic and work-related criteria was examined. MAT correlations with other cognitive tests (e.g., Raven's Matrices [J. C. Raven, 1965]; Graduate Record Examinations) also were meta-analyzed. The results indicate that the abilities measured by the MAT are shared with other cognitive ability instruments and that these abilities are generalizably valid predictors of academic and vocational criteria, as well as evaluations of career potential and creativity. These findings contradict the notion that intelligence at work is wholly different from intelligence at school, extending the voluminous literature that supports the broad importance of general cognitive ability (g).
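As background on the validity-generalization machinery referenced here (a standard psychometric meta-analysis correction, not necessarily the authors' exact procedure), observed validity coefficients are typically averaged with sample-size weights and corrected for criterion unreliability,
\[
\hat{\rho} = \frac{\bar{r}}{\sqrt{\overline{r_{yy}}}},
\]
where $\bar{r}$ is the mean observed correlation and $\overline{r_{yy}}$ the mean criterion reliability, with analogous corrections for range restriction, before generalizability across settings is assessed.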
Affiliation(s)
- Nathan R Kuncel
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA.