1
Merhof V, Böhm CM, Meiser T. Separation of Traits and Extreme Response Style in IRTree Models: The Role of Mimicry Effects for the Meaningful Interpretation of Estimates. Educational and Psychological Measurement 2024; 84:927-956. PMID: 39318484; PMCID: PMC11418598; DOI: 10.1177/00131644231213319.
Abstract
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person parameters can be separated from each other. Here we investigate conditions under which the substantive meanings of estimated extreme response style parameters are potentially invalid and do not correspond to the meanings attributed to them, that is, content-unrelated category preferences. Rather, the response style factor may mimic the trait and capture part of the trait-induced variance in item responding, thus impairing the meaningful separation of the person parameters. Such a mimicry effect is manifested in a biased estimation of the covariance of response style and trait, as well as in an overestimation of the response style variance. Both can lead to severely misleading conclusions drawn from IRTree analyses. A series of simulation studies reveals that mimicry effects depend on the distribution of observed responses and that the estimation biases are stronger the more asymmetrically the responses are distributed across the rating scale. It is further demonstrated that extending the commonly used IRTree model with unidimensional sub-decisions by multidimensional parameterizations counteracts mimicry effects and facilitates the meaningful separation of parameters. An empirical example of the Program for International Student Assessment (PISA) background questionnaire illustrates the threat of mimicry effects in real data. The implications of applying IRTree models for empirical research questions are discussed.
Affiliation(s)
- Caroline M. Böhm
- Rhineland-Palatinate Technical University of Kaiserslautern-Landau, Germany
2
Hasselhorn K, Ottenstein C, Meiser T, Lischetzke T. The Effects of Questionnaire Length on the Relative Impact of Response Styles in Ambulatory Assessment. Multivariate Behavioral Research 2024; 59:1043-1057. PMID: 38779850; DOI: 10.1080/00273171.2024.2354233.
Abstract
Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA data quality is still surprisingly limited. Additionally, response styles (RS), which threaten data quality, have hardly been analyzed in the context of AA. The aim of the current research was to experimentally manipulate questionnaire length and investigate the association between questionnaire length and RS in an AA study. We expected that the group with the longer (82-item) questionnaire would show greater reliance on RS relative to the substantive traits than the group with the shorter (33-item) questionnaire. Students (n = 284) received questionnaires three times a day for 14 days. We used a multigroup two-dimensional item response tree model in a multilevel structural equation modeling framework to estimate midpoint and extreme RS in our AA study. We found that the long questionnaire group showed a greater reliance on RS relative to trait-based processes than the short questionnaire group. Although further validation of our findings is necessary, we hope that researchers consider our findings when planning an AA study in the future.
Affiliation(s)
- Kilian Hasselhorn
- Department of Psychology, RPTU Kaiserslautern-Landau, Landau, Germany
- Tanja Lischetzke
- Department of Psychology, RPTU Kaiserslautern-Landau, Landau, Germany
3
Li Z, Li L, Zhang B, Cao M, Tay L. Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models. Multivariate Behavioral Research 2024:1-23. PMID: 39215711; DOI: 10.1080/00273171.2024.2394607.
Abstract
Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model for Likert-type responses compared to three other models: Samejima's Graded Response Model (GRM), the Generalized Graded Unfolding Model (GGUM), and the Dominance Item Response Tree (DIRTree) model. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying the item response process or ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates compared to competing models. A strong presence of RS(s) was also revealed by the UIRTree model. Finally, we provided examples with R code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.
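The AIC/BIC model-selection behavior reported in this abstract rests on the standard information-criterion formulas. A minimal sketch (the formulas are standard; the log-likelihoods, parameter counts, and sample size below are invented toy values, not numbers from the study) shows how BIC's sample-size-dependent penalty can favor a simpler model even when AIC favors the more complex one:

```python
import math

def aic(log_lik: float, k: int) -> float:
    """Akaike information criterion: 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*lnL (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# Toy comparison: the "complex" model has more parameters but a better
# log-likelihood.  BIC's ln(n) penalty grows with sample size, so it can
# prefer the simpler model even when AIC prefers the complex one.
n = 500
simple_ll, simple_k = -1000.0, 10
complex_ll, complex_k = -985.0, 20

print(aic(simple_ll, simple_k), aic(complex_ll, complex_k))         # 2020.0 2010.0
print(bic(simple_ll, simple_k, n) < bic(complex_ll, complex_k, n))  # True
```

With these toy values AIC picks the complex model (2010 < 2020) while BIC picks the simple one, mirroring the kind of disagreement between the criteria that the simulation study documents.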
Affiliation(s)
- Zhaojun Li
- Department of Psychology, The Ohio State University, Columbus, OH, USA
- Lingyue Li
- Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Bo Zhang
- Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- School of Labor and Employment Relations, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Louis Tay
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
4
Merhof V, Meiser T. Dynamic Response Strategies: Accounting for Response Process Heterogeneity in IRTree Decision Nodes. Psychometrika 2023; 88:1354-1380. PMID: 36746887; PMCID: PMC10656330; DOI: 10.1007/s11336-023-09901-0.
Abstract
It is essential to control self-reported trait measurements for response style effects to ensure a valid interpretation of estimates. Traditional psychometric models facilitating such control consider item responses as the result of two kinds of response processes (based on the substantive trait, or based on response styles), and they assume that both of these processes have a constant influence across the items of a questionnaire. However, this homogeneity over items is not always given, for instance, if the respondents' motivation declines throughout the questionnaire so that heuristic responding driven by response styles may gradually take over from cognitively effortful trait-based responding. The present study proposes two dynamic IRTree models, which account for systematic continuous changes and additional random fluctuations of response strategies, by defining item position-dependent trait and response style effects. Simulation analyses demonstrate that the proposed models accurately capture dynamic trajectories of response processes, as well as reliably detect the absence of dynamics, that is, identify constant response strategies. The continuous version of the dynamic model formalizes the underlying response strategies in a parsimonious way and is highly suitable as a cognitive model for investigating response strategy changes over items. The extended model with random fluctuations of strategies can adapt more closely to the item-specific effects of different response processes and thus is a well-fitting model with high flexibility. By using an empirical data set, the benefits of the proposed dynamic approaches over traditional IRTree models are illustrated under realistic conditions.
Affiliation(s)
- Viola Merhof
- Department of Psychology, University of Mannheim, L 13 15, 68161, Mannheim, Germany.
- Thorsten Meiser
- Department of Psychology, University of Mannheim, L 13 15, 68161, Mannheim, Germany
5
Kosgolla JV, Smith DC, Begum S, Reinhart CA. Assessing the self-reported honesty threshold in adolescent epidemiological research: comparing supervised machine learning and inferential statistical techniques. BMC Med Res Methodol 2023; 23:210. PMID: 37735353; PMCID: PMC10512612; DOI: 10.1186/s12874-023-02035-y. Open access.
Abstract
BACKGROUND Epidemiological surveys offer essential data on adolescent substance use. Nevertheless, the precision of these self-report-based surveys often faces mistrust from researchers and the public. We evaluate the efficacy of a direct method to assess data quality by asking adolescents if they were honest. The main goal of our study was to assess the accuracy of a self-report honesty item and designate an optimal threshold for it, allowing us to better account for its impact on point estimates. METHODS The participants were from the 2020 Illinois Youth Survey, a self-report school-based survey. We divided the primary dataset into subsets based on responses to an honesty item. Then, for each dataset, we examined two distinct data analysis methodologies: supervised machine learning, using the random forest algorithm, and a conventional inferential statistical method, logistic regression. We evaluated item thresholds from both analyses, investigating probable relationships with reported fake drug use, social desirability biases, and missingness in the datasets. RESULTS The study results corroborate the appropriateness and reliability of the honesty item and its corresponding threshold. These include the agreement between the honesty thresholds identified by the two analyses, the association between reported fake drug use and lower honesty scores, the link between increased missingness and lower honesty, and the connection between social desirability bias and the honesty threshold. CONCLUSIONS Confirming the honesty threshold through missing-data analysis further strengthens these collective findings, underscoring the robustness of our methodology and results. Researchers are encouraged to use self-report honesty items in epidemiological research. This will permit the modeling of accurate point estimates by addressing questionable reporting.
Affiliation(s)
- Janaka V Kosgolla
- School of Social Work, University of Illinois Urbana-Champaign, 1010 W. Nevada St, Urbana, IL, 61801, USA.
- Douglas C Smith
- School of Social Work, University of Illinois Urbana-Champaign, 1010 W. Nevada St, Urbana, IL, 61801, USA
- Shahana Begum
- School of Social Work, University of Illinois Urbana-Champaign, 1010 W. Nevada St, Urbana, IL, 61801, USA
- Crystal A Reinhart
- School of Social Work, University of Illinois Urbana-Champaign, 1010 W. Nevada St, Urbana, IL, 61801, USA
6
Lee P, Joo S, Jia Z. Cross-cultural differences in the use of the "?" response category of the Job Descriptive Index: An application of the item response tree model. International Journal of Selection and Assessment 2022. DOI: 10.1111/ijsa.12414.
Affiliation(s)
- Philseok Lee
- Department of Psychology George Mason University Fairfax Virginia USA
- Sean Joo
- Department of Educational Psychology University of Kansas Lawrence Kansas USA
- Zihao Jia
- Department of Psychology George Mason University Fairfax Virginia USA
7
Davis DE, Bowes S, McLaughlin A, Hsu W, Gazaway S, McElroy-Heltzel S, Van Tongeren DR, Hook JN. In search of convergent creativity: content analysis of research on intellectual humility. The Journal of Positive Psychology 2022. DOI: 10.1080/17439760.2022.2154706.
Affiliation(s)
- Don E. Davis
- Department of Counseling and Psychological Services, Georgia State University, Atlanta, Georgia, USA
- Shauna Bowes
- Department of Psychology, Emory University, Atlanta, Georgia, USA
- Aaron McLaughlin
- Department of Counseling and Psychological Services, Georgia State University, Atlanta, Georgia, USA
- Wendy Hsu
- Department of Counseling and Psychological Services, Georgia State University, Atlanta, Georgia, USA
- Sarah Gazaway
- Department of Counseling and Psychological Services, Georgia State University, Atlanta, Georgia, USA
- Joshua N. Hook
- Department of Psychology, University of North Texas, Denton, Texas, USA
8
Klar A, Christopher Costello S, Sadusky A, Kraska J. Personality, culture and extreme response style: A multilevel modelling analysis. Journal of Research in Personality 2022. DOI: 10.1016/j.jrp.2022.104301.
9
Clarifying personality measurement in industrial-organizational psychology: The utility of item response tree models. Personality and Individual Differences 2022. DOI: 10.1016/j.paid.2021.111410.
10
Misiuro T, Gorbaniuk O, Macheta K, McNeill M, Kubicka-Jakuczun J, Kuźmik M, Wontor P, Zajkowska M, Rykowska K, Świątek P, Zygnerska M. Validation and Psychometric Properties of the HEXACO Personality Inventory Observer Report Form in the Polish Language. J Pers Assess 2021; 104:844-854. PMID: 34748445; DOI: 10.1080/00223891.2021.1998081.
Abstract
The aim of this study was to verify the structure and psychometric properties of the Polish adaptation of the HEXACO-PI-R observer report form based on a heterogeneous target sample (liked, neutral and disliked peers). The vast majority of research has focused on the validity and reliability of the self-report form. The psychometric properties of the observer report version have been verified in only two languages. Previous Polish lexical studies based on a heterogeneous target sample have shown that the structure differs from the typical six-factor structure found in self-rating studies. Since this phenomenon was not observed in English, we decided to verify the psychometric properties of the observer report form in Polish. Additionally, the NEO-FFI and Polish Personality Markers for observer report were used. All HEXACO-PI-R scales achieved satisfactory internal consistency and showed high stability. The results indicated that the structure of the Polish adaptation of the HEXACO-PI-R observer report form could be considered similar to the theoretical construct, except when the target of the description is neutral for the respondent. This suggests the need to verify the structure of the HEXACO-PI-R observer report form based on liked and disliked peers in other languages as well.
Affiliation(s)
- Oleg Gorbaniuk
- Institute of Psychology, The John Paul II Catholic University of Lublin
- Paweł Wontor
- Institute of Psychology, University of Zielona Góra
11
Thielmann I, Moshagen M, Hilbig B, Zettler I. On the Comparability of Basic Personality Models: Meta-Analytic Correspondence, Scope, and Orthogonality of the Big Five and HEXACO Dimensions. European Journal of Personality 2021. DOI: 10.1177/08902070211026793.
Abstract
Models of basic personality structure are among the most widely used frameworks in psychology and beyond, and they have considerably advanced the understanding of individual differences in a plethora of consequential outcomes. Over the past decades, two such models have become most widely used: the Five Factor Model (FFM) or Big Five, respectively, and the HEXACO Model of Personality. However, there is no large-scale empirical evidence on the general comparability of these models. Here, we provide the first comprehensive meta-analysis on (a) the correspondence of the FFM/Big Five and HEXACO dimensions, (b) the scope of trait content the models cover, and (c) the orthogonality (i.e., degree of independence) of dimensions within the models. Results based on 152 (published and unpublished) samples and 6,828 unique effects showed that the HEXACO dimensions incorporate notable conceptual differences compared to the FFM/Big Five dimensions, resulting in a broader coverage of the personality space and less redundancy between dimensions. Moreover, moderator analyses revealed substantial differences between operationalizations of the FFM/Big Five. Taken together, these findings have important theoretical and practical implications for the understanding of basic personality dimensions and their assessment.
Affiliation(s)
- Isabel Thielmann
- Department of Psychology, University of Koblenz-Landau, Landau, Germany
- Benjamin E. Hilbig
- Department of Psychology, University of Koblenz-Landau, Landau, Germany
- Ingo Zettler
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
12
Sun T, Zhang B, Cao M, Drasgow F. Faking Detection Improved: Adopting a Likert Item Response Process Tree Model. Organizational Research Methods 2021. DOI: 10.1177/10944281211002904.
Abstract
With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
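The classification step described at the end of this abstract, cutoffs on the share of extreme responses, can be sketched in a few lines. This is a hedged illustration, not the authors' procedure: the respondent data and the 0.6 cutoff below are invented for demonstration, whereas the article derives its cutoffs empirically.

```python
def extreme_proportion(responses, low=1, high=5):
    """Share of responses falling in the two endpoint categories of a Likert scale."""
    return sum(r in (low, high) for r in responses) / len(responses)

def flag_fakers(data, cutoff=0.6):
    """Flag respondents whose endpoint use meets or exceeds the cutoff.
    The cutoff of 0.6 is purely illustrative; the article derives its
    cutoffs empirically from the observed response distributions."""
    return {pid: extreme_proportion(resp) >= cutoff for pid, resp in data.items()}

sample = {
    "respondent_a": [2, 3, 4, 3, 2, 4, 3, 3],   # mostly mid-scale responding
    "respondent_b": [5, 5, 1, 5, 5, 1, 5, 5],   # heavy endpoint use
}
print(flag_fakers(sample))  # {'respondent_a': False, 'respondent_b': True}
```

The design choice mirrors the abstract's finding: fakers are identified not by their item content but by a response-process signature (elevated extreme responding).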
Affiliation(s)
- Tianjun Sun
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Bo Zhang
- Department of Psychological & Brain Sciences, Texas A&M University, Champaign, IL, USA
- Mengyang Cao
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Fritz Drasgow
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- School of Labor and Employment Relations, University of Illinois at Urbana-Champaign, Champaign, IL, USA
13
Lang JW, Tay L. The Science and Practice of Item Response Theory in Organizations. Annual Review of Organizational Psychology and Organizational Behavior 2021. DOI: 10.1146/annurev-orgpsych-012420-061705.
Abstract
Item response theory (IRT) is a modeling approach that links responses to test items with underlying latent constructs through formalized statistical models. This article focuses on how IRT can be used to advance science and practice in organizations. We describe established applications of IRT as a scale development tool and new applications of IRT as a research and theory testing tool that enables organizational researchers to improve their understanding of workers and organizations. We focus on IRT models and their application in four key research and practice areas: testing, questionnaire responding, construct validation, and measurement equivalence of scores. In so doing, we highlight how novel developments in IRT such as explanatory IRT, multidimensional IRT, random item models, and more complex models of response processes such as ideal point models and tree models can potentially advance existing science and practice in these areas. As a starting point for readers interested in learning IRT and applying recent developments in IRT in their research, we provide concrete examples with data and R code.
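As a small, self-contained example of the kind of formalized model this review covers, the two-parameter logistic (2PL) item response function can be computed directly. The formula is the standard one from the IRT literature, not code from the article:

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic item response function: probability of a
    positive response given trait level theta, discrimination a, and
    item location b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the endorsement probability is exactly 0.5; a larger
# discrimination a steepens the curve, so the same trait advantage over
# the item location yields a higher endorsement probability.
print(p_2pl(0.0, 1.5, 0.0))                         # 0.5
print(p_2pl(1.0, 1.5, 0.0) > p_2pl(1.0, 0.5, 0.0))  # True
```

The tree models discussed throughout this listing chain several such functions together, one per decision node.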
Affiliation(s)
- Jonas W.B. Lang
- Department of Human Resource Management and Organizational Psychology, Ghent University, B-9000 Gent, Belgium
- Business School, University of Exeter, EX4 4PU Exeter, United Kingdom
- Louis Tay
- Department of Psychological Sciences, Purdue University, West Lafayette, Indiana 47907, USA
14
Plieninger H. Developing and Applying IR-Tree Models: Guidelines, Caveats, and an Extension to Multiple Groups. Organizational Research Methods 2020. DOI: 10.1177/1094428120911096.
Abstract
IR-tree models assume that categorical item responses can best be explained by multiple response processes. In the present article, guidelines are provided for the development and interpretation of IR-tree models. In more detail, the relationship between a tree diagram, the model equations, and the analysis on the basis of pseudo-items is described. Moreover, it is shown that IR-tree models do not allow conclusions about the sequential order of the processes, and that mistakes in the model specification can have serious consequences. Furthermore, multiple-group IR-tree models are presented as a novel extension of IR-tree models to data from heterogeneous units. This makes it possible, for example, to investigate differences across countries or organizations with respect to core parameters of the IR-tree model. Finally, an empirical example on organizational commitment and response styles is presented.
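The pseudo-item analysis mentioned in this abstract can be illustrated with a minimal recoding sketch. The two-node tree for a 4-point scale (direction first, then extremity) is one common IR-tree specification; other trees imply other recodings, and this is not the article's own code:

```python
def pseudo_items(response: int) -> dict:
    """Recode one 4-point rating (1..4) into the two binary pseudo-items
    of a common two-node IR-tree: a direction node (disagree vs. agree
    side) and an extremity node (endpoint vs. adjacent category)."""
    assert response in (1, 2, 3, 4), "expects a 4-point rating"
    return {
        "direction": int(response >= 3),       # 0 = disagree side, 1 = agree side
        "extremity": int(response in (1, 4)),  # 1 = an endpoint was chosen
    }

# The full recoding table for the 4-point scale:
for r in (1, 2, 3, 4):
    print(r, pseudo_items(r))
```

Each pseudo-item is then modeled with its own item response function, which is how the tree diagram and the model equations connect to an ordinary binary-IRT analysis.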
Affiliation(s)
- Hansjörg Plieninger
- School of Social Sciences, Department of Psychology, University of Mannheim, Mannheim, Germany
15
Kholin M, Kückelhaus B, Blickle G. Why dark personalities can get ahead: Extending the toxic career model. Personality and Individual Differences 2020. DOI: 10.1016/j.paid.2019.109792.
16
Böckenholt U. Contextual Responses to Affirmative and/or Reversed-Worded Items. Psychometrika 2019; 84:986-999. PMID: 31512026; DOI: 10.1007/s11336-019-09680-7.
Abstract
This paper presents a systematic investigation of how affirmative and polar-opposite items presented either jointly or separately affect yea-saying tendencies. We measure these yea-saying tendencies with item response models that estimate a respondent's tendency to give a "yea"-response that may be unrelated to the target trait. In a re-analysis of the Zhang et al. (PLoS ONE, 11:1-15, 2016) data, we find that yea-saying tendencies depend on whether items are presented as part of a scale that contains affirmative and/or polar-opposite items. Yea-saying tendencies are stronger for affirmative than for polar-opposite items. Moreover, presenting polar-opposite items together with affirmative items creates lower yea-saying tendencies for polar-opposite items than when presented in isolation. IRT models that do not account for these yea-saying effects arrive at a two-dimensional representation of the target trait. These findings demonstrate that the contextual information provided by an item scale can serve as a determinant of differential item functioning.
Affiliation(s)
- Ulf Böckenholt
- Northwestern University, Kellogg School of Management, 2211 Campus Drive, Evanston, IL, 60208, USA.
17
Meiser T, Plieninger H, Henninger M. IRTree models with ordinal and multidimensional decision nodes for response styles and trait-based rating responses. The British Journal of Mathematical and Statistical Psychology 2019; 72:501-516. PMID: 30756379; DOI: 10.1111/bmsp.12158.
Abstract
IRTree models decompose observed rating responses into sequences of theory-based decision nodes, and they provide a flexible framework for analysing trait-related judgements and response styles. However, most previous applications of IRTree models have been limited to binary decision nodes that reflect qualitatively distinct and unidimensional judgement processes. The present research extends the family of IRTree models for the analysis of response styles to ordinal judgement processes for polytomous decisions and to multidimensional parametrizations of decision nodes. The integration of ordinal judgement processes overcomes the limitation to binary nodes, and it allows researchers to test whether decisions reflect qualitatively distinct response processes or gradual steps on a joint latent continuum. The extension to multidimensional node models enables researchers to specify multiple judgement processes that simultaneously affect the decision between competing response options. Empirical applications highlight the roles of extreme and midpoint response style in rating judgements and show that judgement processes are moderated by different response formats. Model applications with multidimensional decision nodes reveal that decisions among rating categories are jointly informed by trait-related processes and response styles.
18
Jeon M, De Boeck P. Evaluation on types of invariance in studying extreme response bias with an IRTree approach. The British Journal of Mathematical and Statistical Psychology 2019; 72:517-537. PMID: 31292952; DOI: 10.1111/bmsp.12182.
Abstract
In recent years, item response tree (IRTree) approaches have received increasing attention in the response style literature for their ability to partial out response style latent variables as well as associated item parameters. When an IRTree approach is adopted to measure extreme response styles, directional and content invariance could be assumed at the latent variable and item parameter levels. In this study, we propose to evaluate the empirical validity of these invariance assumptions by employing a general IRTree model with relaxed invariance assumptions. This would allow us to examine extreme response biases, beyond extreme response styles. With three empirical applications of the proposed evaluation, we find that relaxing some of the invariance assumptions improves the model fit, which suggests that not all assumed invariances are empirically supported. Specifically, at the latent variable level, we find reasonable evidence for directional invariance but mixed evidence for content invariance, although we also find that estimated correlations between content-specific extreme response latent variables are high, hinting at the potential presence of a general extreme response tendency. At the item parameter level, we find no directional or content invariance for thresholds and no content invariance for slopes. We discuss how the variant item parameter estimates obtained from a general IRTree model can offer useful insight to help us understand response bias related to extreme responding measured within the IRTree framework.
Affiliation(s)
- Minjeong Jeon
- Department of Education, University of California, Los Angeles, California, USA
- Paul De Boeck
- Department of Psychology, Ohio State University, Columbus, Ohio, USA
19
Park M, Wu AD. Item Response Tree Models to Investigate Acquiescence and Extreme Response Styles in Likert-Type Rating Scales. Educational and Psychological Measurement 2019; 79:911-930. PMID: 31488919; PMCID: PMC6713983; DOI: 10.1177/0013164419829855.
Abstract
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing a variety of individuals' behaviors involved in item responding. This study employed IRTree models to investigate response styles, which are individuals' tendencies to prefer or avoid certain response categories in a rating scale. Specifically, we introduced two types of IRTree models, descriptive and explanatory models, conceived within a larger modeling framework, called explanatory item response models, proposed by De Boeck and Wilson. This extends the typical application of IRTree models for studying response styles. As a demonstration, we applied the descriptive and explanatory IRTree models to examine acquiescence and extreme response styles in Rosenberg's Self-Esteem Scale. Our findings suggested the presence of two distinct extreme response styles and an acquiescence response style in the scale.
Affiliation(s)
- Minjeong Park
- University of British Columbia, Vancouver, British Columbia, Canada
- Minjeong Park, Department of Education and Counselling Psychology, and Special Education, University of British Columbia, 2125 Main Mall, Vancouver, British Columbia, V6T 1Z4, Canada.
- Amery D. Wu
- University of British Columbia, Vancouver, British Columbia, Canada

20
Dekkers LMS, Bexkens A, Hofman AD, Boeck PD, Collot d'Escury AL, Huizenga HM. Formal Modeling of the Resistance to Peer Influence Questionnaire: A Comparison of Adolescent Boys and Girls With and Without Mild-to-Borderline Intellectual Disability. Assessment 2019; 26:1070-1083. [PMID: 31409142 PMCID: PMC6696739 DOI: 10.1177/1073191117698754] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Indexed: 11/16/2022]
Abstract
Items of the Resistance to Peer Influence Questionnaire (RPIQ) have a tree-based structure. On each item, individuals first choose whether a less versus more peer-resistant group best describes them; they then indicate whether it is "Really true" versus "Sort of true" that they belong to the chosen group. Using tree-based item response theory, we show that RPIQ items tap three dimensions: a Resistance to Peer Influence (RPI) dimension and two Response Polarization dimensions. We then reveal subgroup differences on these dimensions: adolescents with mild-to-borderline intellectual disability, compared with typically developing adolescents, show lower RPI and are more polarized in their responses, and girls, compared with boys, show higher RPI and, when high in RPI, are more polarized in their responses. Together, these results indicate that a tree-based modeling approach yields a more sensitive measure of individuals' RPI as well as of their tendency to respond more or less extremely.
Affiliation(s)
- Laura M S Dekkers
- University of Amsterdam, Amsterdam, Netherlands; Yield, Research Institute of Child Development and Education, Amsterdam, Netherlands
- Anika Bexkens
- University of Amsterdam, Amsterdam, Netherlands; Leiden University, Leiden, Netherlands
- Abe D Hofman
- University of Amsterdam, Amsterdam, Netherlands
- Annematt L Collot d'Escury
- University of Amsterdam, Amsterdam, Netherlands; Yield, Research Institute of Child Development and Education, Amsterdam, Netherlands
- Hilde M Huizenga
- University of Amsterdam, Amsterdam, Netherlands; Yield, Research Institute of Child Development and Education, Amsterdam, Netherlands; ABC, Amsterdam Brain and Cognition Centre, Amsterdam, Netherlands

21
The two faces of fearless dominance and their relations to vocational success. JOURNAL OF RESEARCH IN PERSONALITY 2019. [DOI: 10.1016/j.jrp.2019.05.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Indexed: 11/15/2022]
22

23
Wundrack R, Prager J, Asselmann E, O'Connell G, Specht J. Does Intraindividual Variability of Personality States Improve Perspective Taking? An Ecological Approach Integrating Personality and Social Cognition. J Intell 2018; 6:E50. [PMID: 31162477 PMCID: PMC6480758 DOI: 10.3390/jintelligence6040050] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 08/01/2018] [Revised: 11/10/2018] [Accepted: 11/22/2018] [Indexed: 01/26/2023] Open
Abstract
Research integrating cognitive abilities and personality has focused on the role of personality traits. We propose a theory on the role of intraindividual variability of personality states (hereafter state variability) in perspective taking, in particular, the ability to infer other people's mental states. First, we review the relevant research on personality psychology and social cognition. Second, we propose two complementary routes by which state variability relates to anchoring and adjustment in perspective taking. The first route, termed ego-dispersion, suggests that increased state variability decreases egocentric bias, which reduces anchoring. The second route, termed perspective-pooling, suggests that increased state variability facilitates efficient adjustment. We also discuss how our theory can be investigated empirically. The theory is rooted in an ecological interpretation of personality and social cognition, and flags new ways of integrating these fields of research.
Affiliation(s)
- Richard Wundrack
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Julia Prager
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Eva Asselmann
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Garret O'Connell
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Jule Specht
- Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany

24
Plieninger H, Heck DW. A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology. MULTIVARIATE BEHAVIORAL RESEARCH 2018; 53:633-654. [PMID: 29843531 DOI: 10.1080/00273171.2018.1469966] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Indexed: 06/08/2023]
Abstract
When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012a) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.
Affiliation(s)
- Daniel W Heck
- Department of Psychology, University of Mannheim, Mannheim, Germany

25
LaHuis DM, Blackmore CE, Bryant-Lees KB, Delgado K. Applying Item Response Trees to Personality Data in the Selection Context. ORGANIZATIONAL RESEARCH METHODS 2018. [DOI: 10.1177/1094428118780310] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Indexed: 11/16/2022]
Abstract
Self-report personality scales are used frequently in personnel selection. Traditionally, researchers have assumed that individuals respond to items within these scales using a single-decision process. More recently, a flexible set of item response (IR) tree models have been developed that allow researchers to investigate multiple-decision processes. In the present research, we found that IR tree models fit the data better than a single-decision IR model when fitted to seven self-report personality scales used in a concurrent criterion-related validity study. In addition, we found evidence that the latent variable underlying the direction of a response (agree or disagree) decision process predicted job performance better than latent variables reflecting the other decision processes for the best fitting IR tree model.
26
Affiliation(s)
- Michael C. Ashton
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Kibeom Lee
- Department of Psychology, University of Calgary, Calgary, AB, Canada

27
Abstract
The current study investigated how self- and other-ratings of vocational interests converge among student–parent dyads. Using the Personal Globe Inventory–Short, we obtained data from a pooled sample of 271 (high school senior and university) student–parent dyads. Participants rated their own vocational interests and those of the other dyad member. First, profile correlations revealed high levels of self-other agreement, moderate levels of assumed similarity, and low levels of similarity and reciprocity in vocational interests. These correlations are highly similar to those found in personality research. Second, profile elevation showed a reversed pattern compared to interest perceptions, with high levels of self-other agreement and moderate levels of assumed similarity, indicating that profile elevation may mostly be an artifact/rater bias and not a substantive factor. Ipsatization of the vocational interest scales somewhat reduced profile elevation bias. Third, same-gender dyads overestimated their similarity in vocational interests more than different-gender dyads.
Affiliation(s)
- Djurre Holtrop
- Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- NOA B.V., Amsterdam, The Netherlands
- University of Western Australia, Perth, Western Australia, Australia
- Marise Ph. Born
- Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Erasmus University Rotterdam, Rotterdam, The Netherlands
- Reinout E. de Vries
- Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- University of Twente, Enschede, The Netherlands

28
Böckenholt U, Meiser T. Response style analysis with threshold and multi-process IRT models: A review and tutorial. THE BRITISH JOURNAL OF MATHEMATICAL AND STATISTICAL PSYCHOLOGY 2017; 70:159-181. [PMID: 28130934 DOI: 10.1111/bmsp.12086] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Received: 05/09/2016] [Revised: 10/27/2016] [Indexed: 05/13/2023]
Abstract
Two different item response theory model frameworks have been proposed for the assessment and control of response styles in rating data. According to one framework, response styles can be assessed by analysing threshold parameters in Rasch models for ordinal data and in mixture-distribution extensions of such models. A different framework is provided by multi-process item response tree models, which can be used to disentangle response processes that are related to the substantive traits and response tendencies elicited by the response scale. In this tutorial, the two approaches are reviewed, illustrated with an empirical data set of the two-dimensional 'Personal Need for Structure' construct, and compared in terms of multiple criteria. Mplus is used as a software framework for (mixed) polytomous Rasch models and item response tree models as well as for demonstrating how parsimonious model variants can be specified to test assumptions on the structure of response styles and attitude strength. Although both frameworks are shown to account for response styles, they differ on the quantitative criteria of model selection, practical aspects of model estimation, and conceptual issues of representing response styles as continuous and multidimensional sources of individual differences in psychological assessment.
29
De Vries RE, Realo A, Allik J. Using Personality Item Characteristics to Predict Single–Item Internal Reliability, Retest Reliability, and Self–Other Agreement. EUROPEAN JOURNAL OF PERSONALITY 2016. [DOI: 10.1002/per.2083] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Indexed: 11/10/2022]
Abstract
The use of reliability estimates is increasingly scrutinized as scholars become more aware that test–retest stability and self–other agreement provide a better approximation of the theoretical and practical usefulness of an instrument than its internal reliability. In this study, we investigate item characteristics that potentially impact single–item internal reliability, retest reliability, and self–other agreement. Across two large samples (N = 6690 and N = 4396), two countries (Estonia and The Netherlands), and two personality inventories (the NEO PI–3 and the HEXACO–PI–R), results show that (i) item variance is a strong predictor of self–other agreement and retest reliability but not of single–item internal reliability; (ii) item variance mediates the relations between evaluativeness and self–other agreement; and (iii) self–other agreement is predicted by observability and item domain. On the whole, weak relations between item length, negations, and item position (indicating effects of questionnaire length) on the one hand, and single–item internal reliability, retest reliability, and self–other agreement on the other, were observed. In order to increase the predictive validity of personality scales, our findings suggest that during the construction of questionnaire items, researchers are advised to pay close attention especially to item variance, but also to evaluativeness and observability. Copyright © 2016 European Association of Personality Psychology
Affiliation(s)
- Reinout E. De Vries
- Vrije Universiteit Amsterdam, The Netherlands
- University of Twente, The Netherlands
- Anu Realo
- University of Warwick, UK
- University of Tartu, Estonia
- Jüri Allik
- University of Tartu, Estonia
- Estonian Academy of Sciences, Estonia

30

31
Dowling NM, Bolt DM, Deng S, Li C. Measurement and control of bias in patient reported outcomes using multidimensional item response theory. BMC Med Res Methodol 2016; 16:63. [PMID: 27229310 PMCID: PMC4882863 DOI: 10.1186/s12874-016-0161-z] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Received: 12/02/2015] [Accepted: 05/11/2016] [Indexed: 11/10/2022] Open
Abstract
Background: Patient-reported outcome (PRO) measures play a key role in the advancement of patient-centered care research. The accuracy of inferences, relevance of predictions, and the true nature of the associations made with PRO data depend on the validity of these measures. Errors inherent to self-report measures can seriously bias the estimation of constructs assessed by the scale. A well-documented disadvantage of self-report measures is their sensitivity to response style (RS) effects such as the respondent's tendency to select the extremes of a rating scale. Although the biasing effect of extreme responding on constructs measured by self-reported tools has been widely acknowledged and studied across disciplines, little attention has been given to the development and systematic application of methodologies to assess and control for this effect in PRO measures.
Methods: We review the methodological approaches that have been proposed to study extreme RS effects (ERS). We applied a multidimensional item response theory model to simultaneously estimate and correct for the impact of ERS on trait estimation in a PRO instrument. Model estimates were used to study the biasing effects of ERS on sum scores for individuals with the same amount of the targeted trait but different levels of ERS. We evaluated the effect of joint estimation of multiple scales and ERS on trait estimates and demonstrated the biasing effects of ERS on these trait estimates when used as explanatory variables.
Results: A four-dimensional model accounting for ERS bias provided a better fit to the response data. Increasing levels of ERS showed bias in total scores as a function of trait estimates. The effect of ERS was greater when the pattern of extreme responding was the same across multiple scales modeled jointly. The estimated item category intercepts provided evidence of content-independent category selection. Uncorrected trait estimates used as explanatory variables in prediction models showed downward bias.
Conclusions: A comprehensive evaluation of the psychometric quality and soundness of PRO assessment measures should incorporate the study of ERS as a potential nuisance dimension affecting the accuracy and validity of scores and the impact of PRO data in clinical research and decision making.
Affiliation(s)
- N Maritza Dowling
- Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI, USA; Wisconsin Alzheimer's Disease Research Center, University of Wisconsin, Madison, WI, USA
- Daniel M Bolt
- Department of Educational Psychology, University of Wisconsin, Madison, WI, USA
- Sien Deng
- Department of Educational Psychology, University of Wisconsin, Madison, WI, USA
- Chenxi Li
- Department of Epidemiology and Biostatistics, Michigan State University, East Lansing, MI, USA

32
Thielmann I, Hilbig BE, Zettler I, Moshagen M. On Measuring the Sixth Basic Personality Dimension: A Comparison Between HEXACO Honesty-Humility and Big Six Honesty-Propriety. Assessment 2016; 24:1024-1036. [DOI: 10.1177/1073191116638411] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Indexed: 11/15/2022]
Abstract
Recent developments in personality research led to the proposition of two alternative six-factor trait models, the HEXACO model and the Big Six model. However, given the lack of direct comparisons, it is unclear whether the HEXACO and Big Six factors are distinct or essentially equivalent, that is, whether corresponding inventories measure similar or distinct personality traits. Using Structural Equation Modeling (Study 1), we found substantial differences between the traits as measured via the HEXACO-60 and the 30-item Questionnaire Big Six (30QB6), particularly for Honesty-Humility and Honesty-Propriety (both models' critical difference from the Big Five approach). This distinction was further supported by Study 2, showing differential capabilities of the HEXACO-60 and the 30QB6 to account for several criteria representing the theoretical core of Honesty-Humility and/or Honesty-Propriety. Specifically, unlike the indicator of Honesty-Humility, the indicator of Honesty-Propriety showed low predictive power for some conceptually relevant criteria, suggesting a limited validity of the 30QB6.
Affiliation(s)
- Benjamin E. Hilbig
- University of Koblenz-Landau, Landau, Germany
- Max Planck Institute for Research on Collective Goods

33
Wetzel E, Lüdtke O, Zettler I, Böhnke JR. The Stability of Extreme Response Style and Acquiescence Over 8 Years. Assessment 2015; 23:279-91. [PMID: 25986062 DOI: 10.1177/1073191115583714] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Indexed: 11/17/2022]
Abstract
This study investigated the stability of extreme response style (ERS) and acquiescence response style (ARS) over a period of 8 years. ERS and ARS were measured with item sets drawn randomly from a large pool of items used in an ongoing German panel study. Latent-trait-state-occasion and latent-state models were applied to test the relationship between time-specific (state) response style behaviors and time-invariant trait components of response styles. The results show that across different random item samples, on average between 49% and 59% of the variance in the state response style factors was explained by the trait response style factors. This indicates that the systematic differences respondents show in their preferences for certain response categories are remarkably stable over a period of 8 years. The stability of ERS and ARS implies that it is important to consider response styles in the analysis of self-report data from polytomous rating scales, especially in longitudinal studies aimed at investigating stability in substantive traits. Furthermore, the stability of response styles raises the question of the extent to which they might be considered trait-like latent variables themselves that could be of substantive interest.
Affiliation(s)
- Eunike Wetzel
- University of Konstanz, Konstanz, Germany; Eberhard Karls University Tübingen, Tübingen, Germany
- Oliver Lüdtke
- Leibniz Institute for Science and Mathematics Education, Kiel, Germany; Center for International Student Assessment, Germany
- Ingo Zettler
- Eberhard Karls University Tübingen, Tübingen, Germany; University of Copenhagen, Copenhagen, Denmark
- Jan R Böhnke
- Mental Health and Addiction Research Group (MHARG), Hull York Medical School and Department of Health Sciences, University of York, York, UK