1. Galura SJ, Horan KA, Parchment J, Penoyer D, Schlotzhauer A, Dye K, Hill E. Frame of reference training for content analysis with structured teams (FORT-CAST): A framework for content analysis of open-ended survey questions using multidisciplinary coders. Res Nurs Health 2022;45:477-487. doi:10.1002/nur.22227
Affiliation(s)
- Sandra J. Galura, College of Nursing, University of Central Florida, Orlando, Florida, USA
- Kristin A. Horan, Psychology Department, University of Central Florida, Orlando, Florida, USA
- Joy Parchment, College of Nursing, University of Central Florida, Orlando, Florida, USA
- Daleen Penoyer, Center for Nursing Research and Advanced Nursing Practice, Orlando Health, Orlando, Florida, USA
- Ann Schlotzhauer, Psychology Department, University of Central Florida, Orlando, Florida, USA
- Kenzie Dye, Psychology Department, University of Central Florida, Orlando, Florida, USA
- Emily Hill, Psychology Department, University of Central Florida, Orlando, Florida, USA
2. Breil SM, Lievens F, Forthmann B, Back MD. Interpersonal behavior in assessment center role-play exercises: Investigating structure, consistency, and effectiveness. Personnel Psychology 2022. doi:10.1111/peps.12507
3. Rosales Sánchez C, Díaz-Cabrera D, Hernández-Fernaud E. Does effectiveness in performance appraisal improve with rater training? PLoS One 2019;14:e0222694. doi:10.1371/journal.pone.0222694. PMID: 31536562; PMCID: PMC6752840
Abstract
Performance appraisal is a complex process by which an organization determines the extent to which employees are performing their work effectively. However, this appraisal may not be accurate unless the impact of problems caused by subjective rater judgements is reduced. The main objective of this work is to test the effectiveness, separately and jointly, of four training programmes from the extant literature aimed at improving the accuracy of performance assessment: (1) Performance Dimension Training, (2) Frame-of-Reference Training, (3) Rater Error Training, and (4) Behavioural Observation Training. Based on these training strategies, three programmes were designed and applied separately; a fourth programme combined the other three. In two studies with different samples (85 students and 42 employees), we analysed whether levels of knowledge of performance and its dimensions, rater errors, observational accuracy, and accuracy of task and citizenship performance appraisal differed according to the type of training raters received. First, the main results show that training based on performance dimensions and the creation of a common frame of reference, as well as the combined programme (Training_4_programmes), increases knowledge of performance and its dimensions. Second, groups that receive rater error training score higher on knowledge of biases than the other groups, whether or not those groups received training. Third, participants' observational accuracy improves at each measurement moment (post-training and follow-up), though not as a function of the type of training received. Fourth, participants who received the combined programme gave task performance appraisals closer to those of the expert judges than the other groups did. Finally, students' citizenship performance appraisal did not vary with type of training or measurement moment, whereas the group of employees who received all four types of training gave a more accurate citizenship performance assessment.
4. Kleinmann M, Ingold PV. Toward a better understanding of assessment centers: A conceptual review. Annual Review of Organizational Psychology and Organizational Behavior 2019. doi:10.1146/annurev-orgpsych-012218-014955
Abstract
Assessment centers (ACs) are employed for selecting and developing employees and leaders. They are interpersonal at their core because they consist of interactive exercises. With this perspective in mind, the review focuses on the role of the assessee, the assessor, and the AC design, as well as their interplay in the interpersonal situation of the AC. It addresses which conceptual perspectives have increased our understanding of ACs in this context and, building on this, reviews the relevant empirical findings. On this basis, the review contributes to an empirically driven understanding of the interpersonal nature of ACs and provides directions for practice and future research on this topic, as well as on technology in ACs and cross-cultural applications.
Affiliation(s)
- Martin Kleinmann, Department of Psychology, University of Zurich, CH-8050 Zurich, Switzerland
- Pia V. Ingold, Department of Psychology, University of Zurich, CH-8050 Zurich, Switzerland
5. Hauenstein NMA, McCusker ME. Rater training: Understanding effects of training content, practice ratings, and feedback. International Journal of Selection and Assessment 2017. doi:10.1111/ijsa.12177
6. O'Neill G, Travaglione A, McShane S, Hancock J, Chang J. Converting values awareness to values enactment through frame-of-reference training. International Journal of Organizational Analysis 2017. doi:10.1108/ijoa-02-2016-0975
Abstract
Purpose: This paper aims to investigate whether values enactment could be increased through frame-of-reference (FOR) training configured around values prototyping and behavioural domain training for managers within an Australian public sector organisation.
Design/methodology/approach: Employees from an Australian public sector organisation were studied to ascertain the effect of values training and development via a three-way longitudinal design with a control group.
Findings: The findings indicate that FOR training can increase employee values enactment clarity and, thereby, have a positive impact on organisational values enactment.
Practical implications: The application of FOR training constitutes a new approach to supporting the development of employee values clarity, which, in turn, can support the achievement of organisational values enactment. Through FOR training, employees can learn to apply organisational values in their decision-making and other behaviours, irrespective of whether those values are highly congruent with their personal values.
Originality/value: Empirical research into values management is limited, and there is a lack of consensus about what is needed to create a values-driven organisation. The article shows that FOR training can be a beneficial component of a broader human resource strategy aimed at increasing organisational values enactment. With reference to the resource-based view of the firm, it is argued that values enactment constitutes a distinctive capability that may confer sustained organisational advantage.
7.
Abstract
The purpose of this research was to examine frame-of-reference (FOR) training retention in an assessment center (AC) rater training context. In this study, we extended Gorman and Rentsch's (2009) research showing FOR training effects on performance schemas by examining the effects immediately after training and again after a two-week nonuse period. We examined the retention effects of FOR training on performance ratings and on performance schema accuracy. The results indicated that the FOR training condition, compared to a control condition, yielded performance ratings and performance schemas more similar to expert ratings and to an expert schema, respectively. FOR training also had positive effects on ratings and performance schema accuracy assessed two weeks after training. These results support and extend the theory of FOR training, which posits that the instructed theory of performance replaces the preexisting rater schemas (Lievens, 2001), and they contribute to the research on FOR training within AC contexts.
Affiliation(s)
- C. Allen Gorman, Department of Management and Marketing, East Tennessee State University, Johnson City, TN, USA
- Joan R. Rentsch, School of Communication Studies, University of Tennessee, Knoxville, TN, USA
8. Vanhove AJ, Gibbons AM, Kedharnath U. Rater agreement, accuracy, and experienced cognitive load: Comparison of distributional and traditional assessment approaches to rating performance. Human Performance 2016. doi:10.1080/08959285.2016.1192632
9. Leugnerova M, Vaculik M, Prochazka J. The influence of candidate social effectiveness on assessment center performance ratings: A field study. International Journal of Selection and Assessment 2016. doi:10.1111/ijsa.12137
Affiliation(s)
- Marcela Leugnerova, Department of Psychology, Faculty of Social Studies, Masaryk University, Brno, Czech Republic
- Martin Vaculik, Department of Psychology, Faculty of Social Studies, Masaryk University, Brno, Czech Republic
- Jakub Prochazka, Department of Psychology, Faculty of Social Studies, Masaryk University, Brno, Czech Republic
10. Borteyrou X, Lievens F, Bruchon-Schweitzer M, Congard A, Rascle N. Incremental validity of leaderless group discussion ratings over and above general mental ability and personality in predicting promotion. International Journal of Selection and Assessment 2015. doi:10.1111/ijsa.12121
Affiliation(s)
- Filip Lievens, Department of Personnel Management and Work and Organizational Psychology, Ghent University, Belgium
11. Campbell JP, Wiernik BM. The modeling and assessment of work performance. Annual Review of Organizational Psychology and Organizational Behavior 2015. doi:10.1146/annurev-orgpsych-032414-111427
Abstract
Individual work role performance drives the entire economy. It is organizational psychology and organizational behavior's (OP/OB's) most crucial dependent variable. In this review, alternative specifications for the definition and latent structure of individual performance are reviewed and summarized. Setting aside differences in terminology, the alternatives are remarkably similar. The Campbell (2012) model is offered as a synthesized description of the content of the latent structure. Issues pertaining to performance dynamics are then reviewed, along with the role played by individual adaptability to changing performance requirements. Using the synthesized model of the latent content structure and dynamics of performance as a backdrop, issues pertaining to the assessment of performance are summarized. The alternative goals of performance assessment, general measurement issues, and the construct validity of specific methods (e.g., ratings, simulations) are reviewed and described. Cross-cultural issues and future research needs are noted.
Affiliation(s)
- John P. Campbell, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Brenton M. Wiernik, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
12. Lance CE. Why assessment centers do not work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. doi:10.1111/j.1754-9434.2007.00017.x
Abstract
Assessment centers (ACs) are often designed with the intent of measuring a number of dimensions as they are assessed in various exercises. After 25 years of research, however, it is now clear that AC ratings completed at the end of each exercise (commonly known as postexercise dimension ratings) substantially reflect the effects of the exercises in which they were completed and not the dimensions they were designed to reflect. This is the crux of the long-standing "construct validity problem" for AC ratings. I review the existing research on AC construct validity and conclude that (a) contrary to previous notions, AC candidate behavior is inherently cross-situationally (i.e., cross-exercise) specific, not cross-situationally consistent as was once thought, (b) assessors rather accurately assess candidate behavior, and (c) these facts should be recognized in the redesign of ACs toward task- or role-based ACs and away from traditional dimension-based ACs.
13. Day DV. Executive selection is a process not a decision. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. doi:10.1111/j.1754-9434.2009.01126.x
14. Melchers KG, König CJ. It is not yet time to dismiss dimensions in assessment centers. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. doi:10.1111/j.1754-9434.2007.00023.x
15. Lance CE. Where have we been, how did we get there, and where shall we go? Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. doi:10.1111/j.1754-9434.2007.00028.x
Abstract
Commentators expressed a wide variety of views on my evaluation of the state of assessment center (AC) research and practice. In this response, I first trace the evolution of the construct validity paradox "urban legend." Next, I consider the commentators' comments as they relate to (a) my recommendation to abandon dimension-based ACs in lieu of task- or role-based structures, (b) my recommendation to discontinue design fix attempts toward making ACs conform to multitrait-multimethod construct validity criteria, and (c) considerations of construct validity and validation evidence. Finally, I offer some directions for future AC research and practice.
16. Wirz A, Melchers KG, Schultheiss S, Kleinmann M. Are improvements in assessment center construct-related validity paralleled by improvements in criterion-related validity? Journal of Personnel Psychology 2014. doi:10.1027/1866-5888/a000115
Abstract
Previous studies have found that factors that improved assessment center (AC) construct-related validity also had beneficial effects on criterion-related validity. However, some factors might have diverging effects on construct- and criterion-related validity. Accordingly, we followed recent calls to evaluate construct- and criterion-related validity of ACs simultaneously by examining the effects of exercise similarity on both aspects of validity within a single study. Data were collected in an AC (N = 92) that consisted of two different types of exercises. Convergent validity was better for similar exercises than for dissimilar exercises. However, regarding criterion-related validity, we did not find differences between similar and dissimilar exercises. Hence, this study revealed that improvements in AC construct-related validity are not necessarily paralleled by improvements in criterion-related validity.
Affiliation(s)
- Andreja Wirz, Department of Psychology, University of Zurich, Switzerland
17. Sutton A, Watson S. Can competencies at selection predict performance and development needs? Journal of Management Development 2013. doi:10.1108/jmd-02-2012-0032
18. Monahan EL, Hoffman BJ, Lance CE, Jackson DJR, Foster MR. Now you see them, now you do not: The influence of indicator-factor ratio on support for assessment center dimensions. Personnel Psychology 2013. doi:10.1111/peps.12049
19. Yeates P, O'Neill P, Mann K, Eva K. Seeing the same thing differently: Mechanisms that contribute to assessor differences in directly-observed performance assessments. Advances in Health Sciences Education: Theory and Practice 2013;18:325-341. doi:10.1007/s10459-012-9372-1. PMID: 22581567
Abstract
Assessors' scores in performance assessments are known to be highly variable. Attempted improvements through training or rating format have achieved minimal gains, and the mechanisms that contribute to variability in assessors' scoring remain unclear. This study investigated these mechanisms. We used a qualitative approach to study assessors' judgements whilst they observed common simulated videoed performances of junior doctors obtaining clinical histories. Assessors commented concurrently and retrospectively on performances, provided scores, and gave follow-up interviews. Data were analysed using principles of grounded theory. We developed three themes that help to explain how variability arises: Differential Salience (assessors paid attention to, or valued, different aspects of the performances to different degrees); Criterion Uncertainty (assessors' criteria were differently constructed, uncertain, and influenced by recent exemplars); and Information Integration (assessors described the valence of their comments in their own unique narrative terms, usually forming global impressions). Our results, whilst not precluding the operation of established biases, describe mechanisms by which assessors' judgements become meaningfully different or unique. They have theoretical relevance to understanding the formative educational messages that performance assessments provide, and they give insight relevant to assessor training, assessors' ability to be observationally "objective," and the educational value of narrative comments (in contrast to numerical ratings).
Affiliation(s)
- Peter Yeates, School of Translational Medicine, University of Manchester, Manchester, UK
20. Mulder G, Jorgensen LI, Nel JA, Meiring D. The evaluation of a frame-of-reference training programme for intern psychometrists. South African Journal of Human Resource Management 2013. doi:10.4102/sajhrm.v11i1.506
Abstract
Orientation: The use of assessment centres (ACs) has increased drastically over the past decade. However, ACs are constantly confronted with concerns about their construct validity. One aspect of ACs that could improve construct validity significantly is assessor training. Unfortunately, untrained or poorly trained assessors are often used in AC processes.
Research purpose: The purpose of this research was to evaluate a frame-of-reference (FOR) programme to train intern psychometrists as assessors at an assessment centre.
Motivation for the study: The role of an assessor is important in an AC; it is therefore vital for an assessor to be able to observe and evaluate candidates' behaviour adequately. Commencing with this training in a graduate psychometrist programme gives the added benefit of sending skilled psychometrists into the workplace.
Research design, approach and method: A quantitative research approach was implemented, utilising a randomised pre-test-post-test comparison group design. Industrial Psychology postgraduate students (N = 22) at a South African university were divided into an experimental group (n = 11) and a control group (n = 11). Three typical AC simulations were utilised as pre- and post-tests, and the ratings obtained from both groups were statistically analysed to determine the effect of the FOR training programme.
Main findings: The data indicated a significant increase in participants' familiarity with the one-on-one simulation and the group discussion simulation.
Practical/managerial implications: Training intern psychometrists in a FOR programme could assist organisations in appointing more competent assessors.
Contribution/value-add: The study contributes an assessor training programme using FOR training for intern psychometrists in the South African context, specifically by incorporating this programme into the training of Honours students at universities.
21. Hoffman BJ, Gorman CA, Blair CA, Meriac JP, Overstreet B, Atchley EK. Evidence for the effectiveness of an alternative multisource performance rating methodology. Personnel Psychology 2012. doi:10.1111/j.1744-6570.2012.01252.x
23. Roch SG, Woehr DJ, Mishra V, Kieszczynska U. Rater training revisited: An updated meta-analytic review of frame-of-reference training. Journal of Occupational and Organizational Psychology 2011. doi:10.1111/j.2044-8325.2011.02045.x
24. Lanik M, Mitchell Gibbons A. Guidelines for cross-cultural assessor training in multicultural assessment centers. Psychologist-Manager Journal 2011. doi:10.1080/10887156.2011.595970
25. Hirschfeld RR, Thomas CH. Age- and gender-based role incongruence: Implications for knowledge mastery and observed leadership potential among personnel in a leadership development program. Personnel Psychology 2011. doi:10.1111/j.1744-6570.2011.01222.x
26. Consequences of autonomous and team-oriented forms of dispositional proactivity for demonstrating advancement potential. Journal of Vocational Behavior 2011. doi:10.1016/j.jvb.2010.09.001
27. Melchers KG, Lienhardt N, von Aarburg M, Kleinmann M. Is more structure really better? A comparison of frame-of-reference training and descriptively anchored rating scales to improve interviewers' rating quality. Personnel Psychology 2011. doi:10.1111/j.1744-6570.2010.01202.x
28. Stillman JA, Jackson DJR. A detection theory approach to the evaluation of assessors in assessment centres. Journal of Occupational and Organizational Psychology 2011. doi:10.1348/096317905x26147
29. Melchers KG, Kleinmann M, Prinz MA. Do assessors have too much on their plates? The effects of simultaneously rating multiple assessment center candidates on rating quality. International Journal of Selection and Assessment 2010. doi:10.1111/j.1468-2389.2010.00516.x
30. Jackson DJR, Stillman JA, Englert P. Task-based assessment centers: Empirical support for a systems model. International Journal of Selection and Assessment 2010. doi:10.1111/j.1468-2389.2010.00496.x
31. Krause DE, Thornton GC III. A cross-cultural look at assessment center practices: Survey results from Western Europe and North America. Applied Psychology: An International Review 2009. doi:10.1111/j.1464-0597.2008.00371.x
33. Brummel BJ, Rupp DE, Spain SM. Constructing parallel simulation exercises for assessment centers and other forms of behavioral assessment. Personnel Psychology 2009. doi:10.1111/j.1744-6570.2008.01132.x
34. Lievens F. Assessment centres: A tale about dimensions, exercises, and dancing bears. European Journal of Work and Organizational Psychology 2009. doi:10.1080/13594320802058997
35. Thornton GC, Krause DE. Selection versus development assessment centers: An international survey of design, execution, and evaluation. International Journal of Human Resource Management 2009. doi:10.1080/09585190802673536
36. Woo SE, Sims CS, Rupp DE, Gibbons AM. Development engagement within and following developmental assessment centers: Considering feedback favorability and self-assessor agreement. Personnel Psychology 2008. doi:10.1111/j.1744-6570.2008.00129.x
37. Hirschfeld RR, Jordan MH, Thomas CH, Feild HS. Observed leadership potential of personnel in a team setting: Big Five traits and proximal factors as predictors. International Journal of Selection and Assessment 2008. doi:10.1111/j.1468-2389.2008.00443.x
38. Klehe UC, König CJ, Richter GM, Kleinmann M, Melchers KG. Transparency in structured interviews: Consequences for construct and criterion-related validity. Human Performance 2008. doi:10.1080/08959280801917636
39.
Abstract
We review developments in personnel selection since the previous review by Hough & Oswald (2000) in the Annual Review of Psychology. We organize the review around a taxonomic structure of possible bases for improved selection, which includes (a) better understanding of the criterion domain and criterion measurement, (b) improved measurement of existing predictor methods or constructs, (c) identification and measurement of new predictor methods or constructs, (d) improved identification of features that moderate or mediate predictor-criterion relationships, (e) clearer understanding of the relationship between predictors or between predictors and criteria (e.g., via meta-analytic synthesis), (f) identification and prediction of new outcome variables, (g) improved ability to determine how well we predict the outcomes of interest, (h) improved understanding of subgroup differences, fairness, bias, and legal defensibility, (i) improved administrative ease with which selection systems can be used, (j) improved insight into applicant reactions, and (k) improved decision-maker acceptance of selection systems.
Affiliation(s)
- Paul R. Sackett, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
40
|
Bowler MC, Woehr DJ. A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings. ACTA ACUST UNITED AC 2006; 91:1114-24. [PMID: 16953772 DOI: 10.1037/0021-9010.91.5.1114] [Citation(s) in RCA: 72] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Two recent reviews have attempted to summarize findings quantitatively regarding assessment center (AC) construct-related validity (i.e., Lance, Lambert, Gewin, Lievens, & Conway, 2004; Lievens & Conway, 2001). Unlike these previous studies, which reanalyzed individual multitrait-multimethod (MTMM) matrices from previously published research, the authors recoded and combined past matrices into a single MTMM matrix. This matrix, comprised of 6 dimensions each measured by 6 exercises, was then analyzed, providing a more generalizable set of results. Both dimensions and exercises were found to contribute substantially to AC ratings. Specific dimensions (i.e., communication, influencing others, organizing and planning, and problem solving) appeared more construct valid than others (i.e., consideration/awareness of others and drive). Implications for AC design and practice are discussed.
Affiliation(s)
- Mark C Bowler
- Industrial/Organizational Psychology Program, Department of Management, The University of Tennessee, Knoxville, TN, USA
41
Context effects on group-based employee selection decisions. Organ Behav Hum Decis Process 2006. [DOI: 10.1016/j.obhdp.2006.01.003]
42
Taggar S, Haines VY. I need you, you need me: a model of initiated task interdependence. J Manag Psychol 2006. [DOI: 10.1108/02683940610659560]
43
Hagan CM, Konopaske R, Bernardin HJ, Tyler CL. Predicting assessment center performance with 360-degree, top-down, and customer-based competency assessments. Hum Resour Manage 2006. [DOI: 10.1002/hrm.20117]
44
Lance CE, Lambert TA, Gewin AG, Lievens F, Conway JM. Revised estimates of dimension and exercise variance components in assessment center postexercise dimension ratings. J Appl Psychol 2004; 89:377-85. [PMID: 15065983] [DOI: 10.1037/0021-9010.89.2.377]
Abstract
The authors reanalyzed assessment center (AC) multitrait-multimethod (MTMM) matrices containing correlations among postexercise dimension ratings (PEDRs) reported by F. Lievens and J. M. Conway (2001). Unlike F. Lievens and J. M. Conway, who used a correlated dimension-correlated uniqueness model, the authors used a different set of confirmatory-factor-analysis-based models (1-dimension-correlated exercise and 1-dimension-correlated uniqueness models) to estimate dimension and exercise variance components in AC PEDRs. Results of the reanalyses suggest that, consistent with previous narrative reviews, exercise variance components dominate over dimension variance components after all. Implications for AC construct validity and possible redirections of research on the validity of ACs are discussed.
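The dimension-versus-exercise variance decomposition this entry debates can be illustrated with a minimal simulation. This sketch uses hypothetical variance values and a plain two-way ANOVA estimator, not the paper's CFA models, purely to show what "exercise variance dominates dimension variance" means for a candidate-by-dimension-by-exercise rating array:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cand, n_dim, n_ex = 500, 6, 6
var_dim, var_ex, var_err = 0.2, 0.5, 0.3  # hypothetical: exercise variance dominates

# Simulated postexercise dimension ratings (PEDRs):
# candidate-specific dimension effects + exercise effects + error
dim_eff = rng.normal(0.0, np.sqrt(var_dim), (n_cand, n_dim, 1))
ex_eff = rng.normal(0.0, np.sqrt(var_ex), (n_cand, 1, n_ex))
err = rng.normal(0.0, np.sqrt(var_err), (n_cand, n_dim, n_ex))
pedr = dim_eff + ex_eff + err

# Two-way ANOVA mean squares per candidate, then variance-component estimates
grand = pedr.mean(axis=(1, 2), keepdims=True)
dim_means = pedr.mean(axis=2, keepdims=True)
ex_means = pedr.mean(axis=1, keepdims=True)
ms_dim = n_ex * ((dim_means - grand) ** 2).sum(axis=(1, 2)) / (n_dim - 1)
ms_ex = n_dim * ((ex_means - grand) ** 2).sum(axis=(1, 2)) / (n_ex - 1)
resid = pedr - dim_means - ex_means + grand
ms_err = (resid ** 2).sum(axis=(1, 2)) / ((n_dim - 1) * (n_ex - 1))
est_dim = ((ms_dim - ms_err) / n_ex).mean()
est_ex = ((ms_ex - ms_err) / n_dim).mean()
print(f"dimension component ~ {est_dim:.2f}, exercise component ~ {est_ex:.2f}")
```

With ratings generated this way, the recovered exercise component exceeds the dimension component, mirroring the pattern the reanalysis reports for real AC data.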
Affiliation(s)
- Charles E Lance
- Department of Psychology, University of Georgia, Athens, GA 30602-3013, USA.
45
Lance CE, Foster MR, Gentry WA, Thoresen JD. Assessor cognitive processes in an operational assessment center. J Appl Psychol 2004; 89:22-35. [PMID: 14769118] [DOI: 10.1037/0021-9010.89.1.22]
Abstract
The purpose of this study was (a) to provide additional tests of C. E. Lance, Newbolt, et al.'s (2000) situational specificity (vs. method bias) interpretation of exercise effects on assessment center postexercise dimension ratings and (b) to provide competitive tests of salient dimension versus general impression models of assessor within-exercise evaluations of candidate performance. Results strongly support the situational specificity hypothesis and the general impression model of assessor cognitive processes in which assessors first form overall evaluations of candidate performance that then drive more specific dimensional ratings.
Affiliation(s)
- Charles E Lance
- Department of Psychology, University of Georgia, Athens, GA 30602-3013, USA.