1. Sackett PR, Walmsley PT, Koch AJ, Beatty AS, Kuncel NR. Predictor content matters for knowledge testing: Evidence supporting content validation. Human Performance. 2016. DOI: 10.1080/08959285.2015.1120307
2. Murphy KR. Is content-related evidence useful in validating selection tests? Industrial and Organizational Psychology: Perspectives on Science and Practice. 2015. DOI: 10.1111/j.1754-9434.2009.01186.x

Abstract
The 12 papers commenting on K. R. Murphy (2009a) raise a number of important issues, most of which can be subsumed under one of four themes. First, papers examining content-oriented validation strategies are still necessary and useful, in part because these strategies are frequently used in the practice of industrial–organizational (I–O) psychology. Second, the term “content validity” means many different things both within and beyond the field of I–O psychology, and it is useful to understand what sorts of inferences examinations of test content do and do not support. Third, these 12 papers present very little evidence that content validation, as typically carried out by I–O psychologists, actually provides information about the likelihood that people who do well on the test will do well on the job. Finally, I believe that the best use of content-related evidence in validating selection tests is in developing hypotheses about relationships between test scores and criteria rather than in testing these hypotheses.