1. Hambrick DZ, Burgoyne AP, Altmann EM, Matteson TJ. Explaining the Validity of the ASVAB for Job-Relevant Multitasking Performance: The Role of Placekeeping Ability. J Intell 2023;11:225. PMID: 38132843; PMCID: PMC10744611; DOI: 10.3390/jintelligence11120225.
Abstract
Scores on the Armed Services Vocational Aptitude Battery (ASVAB) predict military job (and training) performance better than any single variable so far identified. However, it remains unclear what factors explain this predictive relationship. Here, we investigated the contributions of fluid intelligence (Gf) and two executive functions, placekeeping ability and attention control, to the relationship between the Armed Forces Qualification Test (AFQT) score from the ASVAB and job-relevant multitasking performance. Psychometric network analyses revealed that Gf and placekeeping ability independently contributed to and largely explained the AFQT-multitasking performance relationship. The contribution of attention control to this relationship was negligible. However, attention control did relate positively and significantly to Gf and placekeeping ability, consistent with the hypothesis that it is a cognitive "primitive" underlying individual differences in higher-level cognition. Finally, hierarchical regression analyses revealed stronger evidence for the incremental validity of Gf and placekeeping ability in the prediction of multitasking performance than for the incremental validity of attention control. The results shed light on factors that may underlie the predictive validity of global measures of cognitive ability and suggest how the ASVAB might be augmented to improve its predictive validity.
Affiliation(s)
- David Z. Hambrick
- Department of Psychology, Michigan State University, East Lansing, MI 48824, USA;
- Erik M. Altmann
- Department of Psychology, Michigan State University, East Lansing, MI 48824, USA;
- Tyler J. Matteson
- Department of Psychology, Stanford University, Stanford, CA 94305, USA;
2. Sparfeldt JR, Becker N, Greiff S, Kersting M, König CJ, Lang JWB, Beauducel A. Intelligenz(tests) verstehen und missverstehen [Understanding and misunderstanding intelligence (tests)]. Psychologische Rundschau 2022. DOI: 10.1026/0033-3042/a000597.
Abstract
Summary. This position paper demonstrates the high scientific quality of intelligence research and intelligence tests. It also addresses, however, possible misunderstandings and one-sided receptions and interpretations of the findings. Specifically, it examines in detail (1) the high predictive and criterion-related validity of intelligence tests alongside reservations such as their sometimes low acceptance and face validity, (2) the presentation of empirical findings from the perspective of selected theories, and (3) the role of environmental influences and high heritability coefficients. For each of these areas, the paper makes clear that precision in the reception and presentation of research findings is essential above all to avoid one-sidedness, misunderstandings, and instrumentalization. It shows that much of what is criticized as a problem of intelligence research and intelligence tests ultimately traces back to the misunderstandings described. Against this background, the paper draws out the difference between high-quality intelligence research and intelligence testing on the one hand and the misunderstandings and one-sided receptions on the other. Finally, legitimate criticisms of intelligence research and intelligence tests, as well as research desiderata, are identified.
3. Hambrick DZ, Macnamara BN, Oswald FL. Is the Deliberate Practice View Defensible? A Review of Evidence and Discussion of Issues. Front Psychol 2020;11:1134. PMID: 33013494; PMCID: PMC7461852; DOI: 10.3389/fpsyg.2020.01134.
Abstract
The question of what explains individual differences in expertise within complex domains such as music, games, sports, science, and medicine is currently a major topic of interest in a diverse range of fields, including psychology, education, and sports science, to name just a few. Ericsson and colleagues' deliberate practice view is a highly influential perspective in the literature on expertise and expert performance, but is it viable as a testable scientific theory? Here, reviewing more than 25 years of Ericsson and colleagues' writings, we document critical inconsistencies in the definition of deliberate practice, along with apparent shifts in the standard for evidence concerning deliberate practice. We also consider the impact of these issues on progress in the field of expertise, focusing on the empirical testability and falsifiability of the deliberate practice view. We then discuss a multifactorial perspective on expertise, and how open science practices can accelerate progress in research guided by this perspective.
Affiliation(s)
- David Z. Hambrick
- Department of Psychology, Michigan State University, East Lansing, MI, United States
- Brooke N. Macnamara
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH, United States
- Frederick L. Oswald
- Department of Psychological Sciences, Rice University, Houston, TX, United States
4. Preckel F, Golle J, Grabner R, Jarvin L, Kozbelt A, Müllensiefen D, Olszewski-Kubilius P, Schneider W, Subotnik R, Vock M, Worrell FC. Talent Development in Achievement Domains: A Psychological Framework for Within- and Cross-Domain Research. Perspect Psychol Sci 2020;15:691-722. DOI: 10.1177/1745691619895030.
Abstract
Achievement in different domains, such as academics, music, or visual arts, plays a central role in all modern societies. Different psychological models aim to describe and explain achievement and its development in different domains. However, there remains a need for a framework that guides empirical research within and across different domains. With the talent-development-in-achievement-domains (TAD) framework, we provide a general talent-development framework applicable to a wide range of achievement domains. The overarching aim of this framework is to support empirical research by focusing on measurable psychological constructs and their meaning at different levels of talent development. Furthermore, the TAD framework can be used to construct domain-specific talent-development models. With examples of the application of the TAD framework to the domains of mathematics, music, and visual arts, the review supports the suitability of the TAD framework for domain-specific model construction and identifies numerous research gaps and open questions to be addressed in future research.
Affiliation(s)
- Jessika Golle
- Hector Research Institute of Education Sciences and Psychology, University of Tuebingen
- Aaron Kozbelt
- Department of Psychology, Brooklyn College, City University of New York
- Paula Olszewski-Kubilius
- Center for Talent Development and School of Education and Social Policy, Northwestern University
- Rena Subotnik
- Center for Psychology in Schools and Education, American Psychological Association, Washington, DC
- Miriam Vock
- Department of Educational Sciences, University of Potsdam
- Frank C. Worrell
- Graduate School of Education, University of California, Berkeley
5. Dalal RS, Alaybek B, Lievens F. Within-Person Job Performance Variability Over Short Timeframes: Theory, Empirical Research, and Practice. Annu Rev Organ Psychol Organ Behav 2020. DOI: 10.1146/annurev-orgpsych-012119-045350.
Abstract
We begin by charting the evolution of the dominant perspective on job performance from one that viewed performance as static to one that viewed it as dynamic over long timeframes (e.g., months, years, decades) to one that views it as dynamic over not just long but also short timeframes (e.g., minutes, hours, days, weeks)—and that accordingly emphasizes the within-person level of analysis. The remainder of the article is devoted to the newer, short-timeframe research on within-person variability in job performance. We emphasize personality states and affective states as motivational antecedents. We provide accessible reviews of relevant theories and highlight the convergence of theorizing across the personality and affect antecedent domains. We then focus on several major avenues for future research. Finally, we discuss the implications of these perspectives for personnel selection and performance management in organizations as well as for employees aiming to optimize their job performance.
Affiliation(s)
- Reeshad S. Dalal
- Department of Psychology, George Mason University, Fairfax, Virginia 22030, USA
- Balca Alaybek
- Department of Psychology, George Mason University, Fairfax, Virginia 22030, USA
- Filip Lievens
- Lee Kong Chian School of Business, Singapore Management University, Singapore 178899
6. Salgado JF, Moscoso S. Meta-Analysis of the Validity of General Mental Ability for Five Performance Criteria: Hunter and Hunter (1984) Revisited. Front Psychol 2019;10:2227. PMID: 31681072; PMCID: PMC6811658; DOI: 10.3389/fpsyg.2019.02227.
Abstract
This paper presents a series of meta-analyses of the validity of general mental ability (GMA) for predicting five occupational criteria: supervisory ratings of job performance, production records, work sample tests, instructor ratings, and grades. The meta-analyses were conducted with a large database of 467 technical reports on the validity of the General Aptitude Test Battery (GATB), comprising 630 independent samples. GMA proved to be a consistent predictor of all five criteria, although the magnitude of the operational validity was not the same across them. Results also showed that job complexity moderates GMA validity for the performance criteria. We also found that the GMA validity estimates are slightly smaller than those obtained previously by Hunter and Hunter (1984). Finally, we discuss the implications of these findings for the research and practice of personnel selection.
Affiliation(s)
- Jesús F Salgado
- Faculty of Labor Relations, University of Santiago de Compostela, Santiago de Compostela, Spain
- Silvia Moscoso
- Faculty of Labor Relations, University of Santiago de Compostela, Santiago de Compostela, Spain
7. Hambrick DZ, Burgoyne AP, Macnamara BN, Ullén F. Toward a multifactorial model of expertise: beyond born versus made. Ann N Y Acad Sci 2018;1423:284-295. PMID: 29446457; DOI: 10.1111/nyas.13586.
Abstract
The debate over the origins of individual differences in expertise has raged for over a century in psychology. The "nature" view holds that expertise reflects "innate talent", that is, genetically determined abilities. The "nurture" view counters that, if talent even exists, its effects on ultimate performance are negligible. While no scientist takes seriously a strict nature-only view of expertise, the nurture view has gained tremendous popularity over the past several decades. This environmentalist view holds that individual differences in expertise reflect training history, with no important contribution to ultimate performance by innate ability ("talent"). Here, we argue that, despite its popularity, this view is inadequate to account for the evidence concerning the origins of expertise that has accumulated since the view was first proposed. More generally, we argue that the nature versus nurture debate in research on expertise is over, or certainly should be, as it has been in other areas of psychological research for decades. We describe a multifactorial model for research on the nature and nurture of expertise, which we believe will provide a progressive direction for future research on expertise.
Affiliation(s)
- David Z Hambrick
- Department of Psychology, Michigan State University, East Lansing, Michigan
- Brooke N Macnamara
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio
- Fredrik Ullén
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
9. Finkel D, Davis DW, Turkheimer E, Dickens WT. Applying Biometric Growth Curve Models to Developmental Synchronies in Cognitive Development: The Louisville Twin Study. Behav Genet 2015;45:600-9. PMID: 26392369; PMCID: PMC4641789; DOI: 10.1007/s10519-015-9747-1.
Abstract
Biometric latent growth curve models were applied to data from the Louisville Twin Study (LTS) in order to replicate and extend Wilson's (Child Dev 54:298-316, 1983) findings. Assessments of cognitive development were available from 8 measurement occasions covering the period 4-15 years for 1032 individuals. Latent growth curve models were fit to percent correct for 7 subscales: information, similarities, arithmetic, vocabulary, comprehension, picture completion, and block design. Models were fit separately to the WPPSI (ages 4-6 years) and WISC-R (ages 7-15). Results indicated the expected increases in heritability in younger childhood, and plateaus in heritability as children reached age 10 years. Heritability of change, per se (slope estimates), varied dramatically across domains. Significant genetic influences on slope parameters that were independent of initial levels of performance were found only for the information and picture completion subscales. Thus, evidence for both genetic continuity and genetic innovation in the development of cognitive abilities in childhood was found.
Affiliation(s)
- Deborah Finkel
- Department of Psychology, Indiana University Southeast, 4201 Grant Line Road, New Albany, IN, USA.
- Eric Turkheimer
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
10.
Abstract
Construct proliferation—the accumulation of ostensibly different but potentially identical constructs representing organizational phenomena—is a salient problem in contemporary research. While a number of construct validation procedures exist, relatively few validation studies conduct comprehensive assessments of the discriminant validity of theoretically distinct constructs. In this article, we outline the key considerations a researcher must take into account when attempting to establish the empirical distinctness of new or existing constructs and provide a step-by-step guide on how to assess the discriminant validity of constructs while accounting for three major sources of measurement error: random error, specific factor error, and transient error. Using a number of popular measures from the leadership literature, we provide an illustrative example of how to conduct a study of discriminant validity. We include several analytic strategies in our study and discuss the similarities and differences between the results they yield. We also discuss several additional issues related to this type of research and make recommendations for conducting discriminant validity analyses.
Affiliation(s)
- Jonathan A. Shaffer
- Department of Management, Marketing, and General Business, West Texas A&M University, Canyon, TX, USA
- David DeGeest
- Department of HRM & OB, University of Groningen, Groningen, The Netherlands
- Andrew Li
- Department of Management, Marketing, and General Business, West Texas A&M University, Canyon, TX, USA
11. Kieng S, Rossier J, Favez N, Geistlich S, Lecerf T. Stabilité à long terme des scores du WISC-IV : forces et faiblesses personnelles [Long-term stability of WISC-IV scores: personal strengths and weaknesses]. Prat Psychol 2015. DOI: 10.1016/j.prps.2015.03.002.
12. Longitudinal Invariance of the Wechsler Intelligence Scale for Children–Fourth Edition in a Referral Sample. J Psychoeduc Assess 2014. DOI: 10.1177/0734282914538802.
Abstract
Measurement invariance of the Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV) was investigated with a group of 352 students eligible for psychoeducational evaluations who were tested, on average, 2.8 years apart. Configural, metric, and scalar invariance were found. However, the error variance of the Coding subtest was not constant across time, allowing only partial strict invariance. This indicates that the WISC-IV (a) was measuring similar constructs on both test occasions, (b) constructs had the same meaning across time, (c) scores that changed across time can be attributed to change in the constructs being measured and not to changes in the structure of the test itself, and (d) measures the same constructs equally well across time, with the possible exception of Processing Speed due to the noninvariance of the Coding subtest's residual variance. This investigation provided support for intelligence as an enduring trait and for the validity of the WISC-IV.
13. Is health literacy an example of construct proliferation? A conceptual and empirical evaluation of its redundancy with general cognitive ability. Intelligence 2014. DOI: 10.1016/j.intell.2014.03.004.