1. Li M, Zhang B, Mou Y. Though Forced, Still Valid: Examining the Psychometric Performance of Forced-Choice Measurement of Personality in Children and Adolescents. Assessment 2024:10731911241255841. [PMID: 38867477] [DOI: 10.1177/10731911241255841]
Abstract
Unveiling the roles personality plays during childhood and adolescence necessitates its accurate measurement, commonly using traditional Likert-type (LK) scales. However, this format is susceptible to various response biases, which can be particularly prevalent in children and adolescents, thus likely undermining measurement accuracy. Forced-choice (FC) scales appear to be a promising alternative because they are largely free from these biases by design. However, some argue that the FC format may not perform satisfactorily with children and adolescents due to its complexity, and little empirical evidence exists regarding its suitability for these age groups. As such, the current study examined the psychometric performance of an FC measure of the Big Five personality factors in three samples of children and adolescents: 5th to 6th graders (N = 428), 7th to 8th graders (N = 449), and 10th to 11th graders (N = 555). Across the three age groups, the FC scale demonstrated a better fit to the Big Five model and better discriminant validity than its LK counterpart. Personality scores from the FC scale also converged well with those from the LK scale and demonstrated high reliability as well as sizable criterion-related validity. Furthermore, the FC scale had more invariant statements than its LK counterpart across age groups. Overall, we found good evidence that FC measurement of personality is suitable for children and adolescents.
Affiliation(s)
- Mengtong Li, University of Illinois Urbana-Champaign, IL, USA
- Bo Zhang, University of Illinois Urbana-Champaign, IL, USA
- Yi Mou, Sun Yat-Sen University, Guangzhou, China
2. Roemer L, Steinmayr R, Ziegler M. Disentangling Stable and Malleable Components: A Latent State-Trait Analysis of Vocational Interests. Journal of Research in Personality 2023. [DOI: 10.1016/j.jrp.2023.104353]
3. Boskovic I, Akca AYE, Giromini L. Symptom coaching and symptom validity tests: An analog study using the Structured Inventory of Malingered Symptomatology, Self-Report Symptom Inventory, and Inventory of Problems-29. Applied Neuropsychology: Adult 2022:1-13. [PMID: 35414324] [DOI: 10.1080/23279095.2022.2057856]
Abstract
In this pilot and exploratory study, we tested the robustness of three self-report symptom validity tests (SVTs) to symptom coaching for depression, with and without additional information available on the Internet. Specifically, we divided our sample (N = 193) so that each subject received either the Structured Inventory of Malingered Symptomatology (SIMS; n = 64), the Self-Report Symptom Inventory (SRSI; n = 66), or the Inventory of Problems-29 (IOP-29; n = 63). Within each of the three subgroups, approximately one-third of participants were instructed to respond honestly (Genuine Condition: SIMS n = 21, SRSI n = 24, IOP-29 n = 26) and approximately two-thirds were instructed to feign depression. One half of the feigners were presented with a vignette to increase their compliance with instructions and were given information about symptoms of depression (Coached Feigning: SIMS n = 25, SRSI n = 18, IOP-29 n = 21); the other half were given the same vignette and information, plus two Internet links to review before completing the test (Internet-Coached Feigning: SIMS n = 18, SRSI n = 24, IOP-29 n = 16). Overall, the genuine conditions yielded the lowest total scores on all three measures, while the two feigning conditions did not significantly differ from each other. Looking at detection rates for all feigning participants, all three measures showed satisfactory results, with the IOP-29 performing slightly better than the SIMS and the SIMS slightly better than the SRSI. Internet-coached feigners scored slightly lower on all three measures than feigners coached without the Internet links. Taken together, the results of this preliminary and exploratory study suggest that all three SVTs examined are sensitive to feigned depression even in the presence of symptom coaching, both with and without additional Internet-based information.
Affiliation(s)
- Irena Boskovic, Forensic Psychology Section, Clinical Psychology Department, Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, Rotterdam, The Netherlands; Forensic Psychology Section, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
4. Torres van Grinsven V. Sources of measurement error in pediatric intelligence testing. Methodological Innovations 2022. [DOI: 10.1177/20597991221077919]
Abstract
Pediatric intelligence tests, such as the Wechsler Intelligence Scale for Children, are diagnostic tools commonly used in diagnosing learning and behavior disabilities. Treatment decisions are based on the results of these tests, and they are frequently used in educational and vocational contexts for important decisions that affect people's academic or professional lives. However, research has shown that substantial errors may occur despite the application of validation processes and adherence to quality criteria for psychometric tests, and this evidence does not appear to be widely acknowledged in psychological practice and research. In this article, I showcase research that draws attention to sources of measurement error in pediatric intelligence testing, discuss a process-performance approach to measurement in intelligence testing, and propose "pretest methods," stemming from the field of survey methodology and commonly used in questionnaire construction, as a way to address sources of measurement error in pediatric intelligence testing and improve the development of these tests.
Affiliation(s)
- Vanessa Torres van Grinsven, Department of Special Education and Rehabilitation, Faculty of Human Sciences, University of Cologne, Cologne, Germany
5. “All or nothing”. Psico 2021. [DOI: 10.15448/1980-8623.2021.1.35441]
Abstract
In the current study, we investigated the unique associations between dichotomous thinking, extreme response style (ERS), and the Dark Triad of personality, namely Machiavellianism, narcissism, and psychopathy. We hypothesized that dichotomous thinking would exhibit a positive effect on ERS and that dichotomous thinking would be positively associated with the Dark Triad even after accounting for ERS. Participants were 488 adults with a mean age of 29.54 years (SD = 10.38). Results confirmed that dichotomous thinking positively predicts ERS and that the relationship between dichotomous thinking and the Dark Triad domains remains positive and significant even after accounting for ERS. Moreover, ERS manifested weak negative associations with the Dark Triad, with a significant relationship only with psychopathy. These findings help expand the understanding of both the substantive nature of response styles and the cognitive processes underlying the Dark Triad of personality.
6. Nieto MD, Garrido LE, Martínez-Molina A, Abad FJ. Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis. Front Psychol 2021; 12:685326. [PMID: 34149573] [PMCID: PMC8206482] [DOI: 10.3389/fpsyg.2021.685326]
Abstract
The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polarly opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model, as evidenced by improvements in model fit over models that contain only substantive factors. However, little is known about the capability of this model to recover the uncontaminated person scores. To address this issue, the study analyzes the performance of the RIIFA approach across three types of wording effects proposed in the literature: carelessness, item verification difficulty, and acquiescence. In the context of unidimensional substantive models, four independent variables were manipulated using Monte Carlo methods: type of wording effect, amount of wording effect, sample size, and test length. The results corroborated previous findings by showing that the RIIFA models were consistently able to account for the variance in the data, attaining an excellent fit regardless of the amount of bias. Conversely, the models without the RIIFA factor produced an increasingly poor fit with greater amounts of wording effects. Surprisingly, however, the RIIFA models were not able to estimate the uncontaminated person scores better than the substantive unidimensional models for any type of wording effect. The simulation results were then corroborated with an empirical dataset examining the relationship of learning strategies and personality with grade point average in undergraduate studies. The apparently paradoxical findings regarding model fit and the recovery of person scores are explained in light of the properties of the factor models examined.
Affiliation(s)
- María Dolores Nieto, Department of Psychology, Faculty of Life and Nature Sciences, Universidad Antonio de Nebrija, Madrid, Spain
- Luis Eduardo Garrido, Department of Psychology, Pontificia Universidad Católica Madre y Maestra, Santiago de los Caballeros, Dominican Republic
- Agustín Martínez-Molina, Department of Social Psychology and Methodology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Francisco José Abad, Department of Social Psychology and Methodology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
7. Goecke B, Weiss S, Steger D, Schroeders U, Wilhelm O. Testing competing claims about overclaiming. Intelligence 2020. [DOI: 10.1016/j.intell.2020.101470]
8.
9. Watrin L, Geiger M, Spengler M, Wilhelm O. Forced-Choice Versus Likert Responses on an Occupational Big Five Questionnaire. Journal of Individual Differences 2019. [DOI: 10.1027/1614-0001/a000285]
Abstract
Conventional self-report measures are prone to response biases, which distort measurement in any applied assessment. The forced-choice (FC) format has been proposed as a potential remedy for these biases. The purpose of these studies was to develop and evaluate an FC questionnaire for the occupational context based on the five-factor model of personality. A single-stimulus Likert questionnaire was contextualized for occupational settings and psychometrically optimized in Study 1 (N = 401). Considering optimal design strategies, we subsequently used this questionnaire to construct and validate an FC questionnaire in Study 2 (N = 517). Methodological add-ons to established approaches were applied to achieve good confirmatory model fit. The new questionnaire shows good psychometric qualities and strong validity. We make suggestions for further applications and studies.
Affiliation(s)
- Mattis Geiger, Institute of Psychology and Education, Ulm University, Germany
- Oliver Wilhelm, Institute of Psychology and Education, Ulm University, Germany
10. de Leng WE, Stegers-Jager KM, Born MP, Themmen APN. Faking on a situational judgment test in a medical school selection setting: Effect of different scoring methods? International Journal of Selection and Assessment 2019. [DOI: 10.1111/ijsa.12251]
Affiliation(s)
- W. E. de Leng, Institute of Medical Education Research Rotterdam, Erasmus MC, Rotterdam, the Netherlands
- K. M. Stegers-Jager, Institute of Medical Education Research Rotterdam, Erasmus MC, Rotterdam, the Netherlands
- M. Ph. Born, Institute of Psychology, Erasmus University Rotterdam, Rotterdam, the Netherlands; Optentia and Faculty of Economic and Management Sciences, North-West University, Potchefstroom, South Africa
- A. P. N. Themmen, Institute of Medical Education Research Rotterdam, Erasmus MC, Rotterdam, the Netherlands
11. Geiger M, Olderbak S, Sauter R, Wilhelm O. The "g" in Faking: Doublethink the Validity of Personality Self-Report Measures for Applicant Selection. Front Psychol 2018; 9:2153. [PMID: 30483179] [PMCID: PMC6244170] [DOI: 10.3389/fpsyg.2018.02153]
Abstract
The meta-analytic finding that faking does not affect the criterion validity of self-report measures in applicant selection suggests that cognitive abilities are crucial to faking personality toward an expected optimal profile in self-report measures. Previous studies in this field typically focus on how the extent of faking changes self-report measurement; the effect of faking ability is rarely considered. In Study 1 (n = 151), we link two questionnaires, the WSQ and the NEO-PI-R, to use them for later faking ability tasks. With O∗NET expert ratings and the linked questionnaires, we establish veridical responses of optimal personality profiles for both questionnaires. Based on this, in Study 2, we develop six faking ability tasks employing both questionnaires and three common jobs to fake for. To score the tasks, we introduce profile similarity metrics that compare faked response vectors to optimal profile vectors. The faking ability tasks were administered to a community sample (n = 210) that additionally completed measures of cognitive abilities, namely general mental ability, crystallized intelligence, and interpersonal abilities; based on previous research, each of these should predict individual differences in faking ability. We establish a measurement model of faking ability and its relation to the other cognitive abilities. Using structural equation modeling, we find the strongest effect for crystallized intelligence and weaker effects for general mental ability and interpersonal abilities, all positively predicting faking ability. We show for the first time that faking ability can be measured with psychometrically sound techniques, establish a confirmatory factor model of faking ability, and show that it is largely explained by other cognitive abilities. We conclude that research supporting a positive link between self-reported personality and job performance is presumably confounded by cognitive abilities, because they are predictive of both faking self-reported personality and job performance. We recommend that researchers broaden their measurements with assessments of faking ability or other cognitive abilities (besides general mental ability) in research on applicant selection.
Affiliation(s)
- Mattis Geiger, Department of Individual Differences and Psychological Assessment, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Sally Olderbak, Department of Individual Differences and Psychological Assessment, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Ramona Sauter, Department of Individual Differences and Psychological Assessment, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Oliver Wilhelm, Department of Individual Differences and Psychological Assessment, Institute of Psychology and Education, Ulm University, Ulm, Germany
12. Merckelbach H, Prins C, Boskovic I, Niesten I, À Campo J. Alexithymia as a potential source of symptom over-reporting: An exploratory study in forensic patients and non-forensic participants. Scand J Psychol 2018; 59:192-197. [DOI: 10.1111/sjop.12427]
Affiliation(s)
- Harald Merckelbach, Forensic Psychology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Chinouk Prins, Forensic Psychology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Irena Boskovic, Forensic Psychology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Isabella Niesten, Forensic Psychology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Joost À Campo, Radix Forensic Outpatient Clinic, Mondriaan, Heerlen, the Netherlands
13.
Affiliation(s)
- Eunike Wetzel, Department of Psychology, University of Mannheim, Germany
- Samuel Greiff, Cognitive Science and Assessment, University of Luxembourg, Luxembourg
14. Students’ multiple state goals as a function of appraisals, trait goals, and their interactions. Contemporary Educational Psychology 2017. [DOI: 10.1016/j.cedpsych.2017.09.006]
15. Ziegler M, Greiff S. A Look Back and a Glimpse Forward. European Journal of Psychological Assessment 2016. [DOI: 10.1027/1015-5759/a000387]
Affiliation(s)
- Samuel Greiff, Cognitive Science & Assessment, University of Luxembourg, Luxembourg
16. Danner D, Blasius J, Breyer B, Eifler S, Menold N, Paulhus DL, Rammstedt B, Roberts RD, Schmitt M, Ziegler M. Current Challenges, New Developments, and Future Directions in Scale Construction. European Journal of Psychological Assessment 2016. [DOI: 10.1027/1015-5759/a000375]
Affiliation(s)
- Daniel Danner, GESIS – Leibniz Institute for the Social Sciences, Survey Design and Methodology, Mannheim, Germany
- Jörg Blasius, University of Bonn, Institut für Politische Wissenschaft und Soziologie, Germany
- Bianka Breyer, GESIS – Leibniz Institute for the Social Sciences, Survey Design and Methodology, Mannheim, Germany
- Stefanie Eifler, Catholic University of Eichstätt-Ingolstadt, Sociology Institute, Germany
- Natalja Menold, GESIS – Leibniz Institute for the Social Sciences, Survey Design and Methodology, Mannheim, Germany
- Delroy L. Paulhus, University of British Columbia, Department of Psychology, Vancouver, Canada
- Beatrice Rammstedt, GESIS – Leibniz Institute for the Social Sciences, Survey Design and Methodology, Mannheim, Germany
- Richard D. Roberts, Center for Innovative Assessments, Professional Examination Service, New York, NY, USA
- Manfred Schmitt, University of Koblenz-Landau, Fachbereich Psychologie, Germany
- Matthias Ziegler, Institut für Psychologie, Humboldt-Universität zu Berlin, Germany
17
|
Ziegler M, Hagemann D. Testing the Unidimensionality of Items. EUROPEAN JOURNAL OF PSYCHOLOGICAL ASSESSMENT 2015. [DOI: 10.1027/1015-5759/a000309] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|