1. Tenbrink AP, Speer AB. Accountability during Performance Appraisals: The Development and Validation of the Rater Accountability Scale. HUMAN PERFORMANCE 2022. DOI: 10.1080/08959285.2021.2023876
Affiliation(s)
- Andrew P. Tenbrink, Department of Psychology, Wayne State University, 5057 Woodward Avenue, Detroit, Michigan 48202, United States
- Andrew B. Speer, Department of Psychology, Wayne State University, 5057 Woodward Avenue, Detroit, Michigan 48202, United States
2. Speer AB, Tenbrink AP, Schwendeman MG. Creation and Validation of the Performance Appraisal Motivation Scale (PAMS). HUMAN PERFORMANCE 2020. DOI: 10.1080/08959285.2020.1776713
3. Speer AB. Scoring Dimension-Level Job Performance From Narrative Comments: Validity and Generalizability When Using Natural Language Processing. ORGANIZATIONAL RESEARCH METHODS 2020. DOI: 10.1177/1094428120930815
Abstract
Performance appraisal narratives are qualitative descriptions of employee job performance. This data source has seen increased research attention because insights can be derived from it efficiently using natural language processing (NLP). The current study details the development of NLP scoring for performance dimensions from narrative text and then investigates validity and generalizability evidence for those scores. Specifically, narrative valence scores were created to measure a priori performance dimensions. These scores were derived using bag-of-words and word-embedding features and then modeled using modern prediction algorithms. Construct validity evidence was investigated across three samples, revealing that the scores converged with independent human ratings of the text, aligned with numerical performance ratings made during the appraisal, and demonstrated some degree of discriminant validity. However, construct validity evidence differed based on which NLP algorithm was used to derive scores. In addition, valence scores generalized to both downward and upward rating contexts. Finally, the performance valence algorithms generalized better in contexts where the same qualitative survey design was used compared with contexts where different instructions were given to elicit narrative text.
4. Speer AB, Tenbrink AP, Schwendeman MG. Let’s Talk it Out: The Effects of Calibration Meetings on Performance Ratings. HUMAN PERFORMANCE 2019. DOI: 10.1080/08959285.2019.1609477
5. Stone DL, Rosopa PJ. The Advantages and Limitations of Using Meta-analysis in Human Resource Management Research. HUMAN RESOURCE MANAGEMENT REVIEW 2017. DOI: 10.1016/j.hrmr.2016.09.001