1. Li Z, Li L, Zhang B, Cao M, Tay L. Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models. Multivariate Behavioral Research 2024:1-23. PMID: 39215711. DOI: 10.1080/00273171.2024.2394607.

Abstract
Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine, for Likert-type responses, the performance of the UIRTree model against three other models: Samejima's Graded Response Model, the Generalized Graded Unfolding Model, and the Dominance Item Response Tree (DIRTree) model. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying the item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates than competing models. A strong presence of RSs was also revealed by the UIRTree model. Finally, we provided examples with R code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.

Affiliations
- Zhaojun Li: Department of Psychology, The Ohio State University, Columbus, OH, USA
- Lingyue Li: Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Bo Zhang: Department of Psychology and School of Labor and Employment Relations, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Louis Tay: Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA

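The decomposition at the heart of IR-tree models can be illustrated with a small recoding example. The sketch below shows a generic midpoint/direction/extremity decomposition of a 5-point Likert response into binary pseudo-items; it illustrates the IR-tree idea only and is not the authors' UIRTree estimation code (the function name and coding scheme are assumptions).

```python
# Generic IR-tree recoding of a 5-point Likert response into three binary
# pseudo-items (midpoint, direction, extremity). Illustrative only; not the
# UIRTree estimation code referenced in the abstract.

def pseudo_items(x: int):
    """Map a response x in {1, ..., 5} to (m, d, e) pseudo-items.

    m: 1 if the midpoint (3) was chosen, else 0
    d: 1 if the agree side (4 or 5), 0 if the disagree side (1 or 2),
       None (structurally missing) when the midpoint was chosen
    e: 1 if an extreme category (1 or 5) was chosen, None when midpoint
    """
    m = 1 if x == 3 else 0
    d = None if m else (1 if x >= 4 else 0)
    e = None if m else (1 if x in (1, 5) else 0)
    return m, d, e

for x in range(1, 6):
    print(x, pseudo_items(x))
```
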
2. Lyu W, Bolt D. A Psychometric Perspective on the Associations between Response Accuracy and Response Time Residuals. Journal of Intelligence 2024; 12:74. PMID: 39195121. DOI: 10.3390/jintelligence12080074.

Abstract
We provide an alternative psychometric perspective on the empirical statistical dependencies observed between response accuracy residuals (RARs) and response time residuals (RTRs) in the context of the van der Linden model. This perspective emphasizes the RAR (or parts of the RAR) as being exogenous and having a directional influence on response time. Our simple and theoretically justifiable perspective adds to previous joint response time/accuracy models and comports with recent generalizations of the D-diffusion IRT model incorporating person-by-item interactions, and thus similarly reproduces many of the recently highlighted empirical findings concerning the associations between RARs and RTRs. Using both empirical and simulation-based results, we show how our psychometric perspective has both applied and interpretational implications. Specifically, it would suggest that (1) studies of item parameter estimate heterogeneity in relation to response times may reflect more of a psychometric artifact (due to the exogenous effects of the RARs) as opposed to providing insights about the response process (e.g., the application of different response strategies) and that (2) efforts to use RTRs as indicators of latent proficiency should attend to the anticipated interactions between the latent proficiency and RAR on response times. The validity of our psychometric perspective against alternatives likely relies on appeals to theory; the best perspective to take may vary depending on the test setting.

Affiliations
- Weicong Lyu: College of Education, University of Washington, Seattle, WA 98105, USA
- Daniel Bolt: Department of Educational Psychology, University of Wisconsin, Madison, WI 53706, USA

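The two residual types discussed in the abstract can be made concrete with a toy computation. This is a minimal sketch, assuming a Rasch model for accuracy and a van der Linden-style lognormal model for log response times; the function and its parameterization are illustrative, not the authors' implementation.

```python
# Toy computation of response accuracy residuals (RARs) and response time
# residuals (RTRs) under assumed Rasch + lognormal-RT models.
import numpy as np

def residuals(correct, log_rt, theta, tau, b, beta):
    """correct, log_rt: (persons, items) arrays; theta/tau: person ability
    and speed; b/beta: item difficulty and time intensity."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    rar = correct - p                              # accuracy residual
    rtr = log_rt - (beta[None, :] - tau[:, None])  # log-RT residual
    return rar, rtr
```
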
3. Kim N, Jeon M, Partchev I. Conditional Dependence across Slow and Fast Item Responses: With a Latent Space Item Response Modeling Approach. Journal of Intelligence 2024; 12:23. PMID: 38392179. PMCID: PMC10889770. DOI: 10.3390/jintelligence12020023.

Abstract
There have recently been many studies examining conditional dependence between response accuracy and response times in cognitive tests. While most previous research has focused on revealing a general pattern of conditional dependence across all respondents and items, it is plausible that the pattern varies across respondents and items. In this paper, we attend to this potential heterogeneity and examine the item and person specificities involved in the conditional dependence between item responses and response times. To this end, we use a latent space item response theory (LSIRT) approach with an interaction map that visualizes conditional dependence in response data in the form of item-respondent interactions. We incorporate response time information into the interaction map by applying LSIRT models to slow and fast item responses. Through empirical illustrations with three cognitive test datasets, we confirm the presence and patterns of conditional dependence between item responses and response times, a result consistent with previous studies. Our results further illustrate the heterogeneity in this conditional dependence across respondents, which provides insights into individuals' underlying item-solving processes in cognitive tests. Some practical implications of the results and the use of interaction maps in cognitive tests are discussed.

Affiliations
- Nana Kim: Department of Educational Psychology, College of Education and Human Development, University of Minnesota Twin Cities, MN 55455, USA
- Minjeong Jeon: Social Research Methodology, Department of Education, School of Education and Information Studies, University of California, Los Angeles, CA 90095, USA

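Fitting separate models to slow and fast responses presupposes splitting the response matrix by response time. A minimal sketch of an item-wise median split follows (the split rule and names are assumptions, not the authors' code):

```python
# Item-wise median split of responses into "fast" and "slow" subsets, the
# kind of recoding used before fitting separate models to each speed class.
import numpy as np

def median_split(resp, rt):
    """resp, rt: (persons, items). Returns fast/slow response matrices with
    np.nan where the response falls in the other speed class."""
    med = np.nanmedian(rt, axis=0)           # per-item median RT
    fast = np.where(rt <= med, resp, np.nan)
    slow = np.where(rt > med, resp, np.nan)
    return fast, slow
```
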
4. Kang I, Jeon M, Partchev I. A Latent Space Diffusion Item Response Theory Model to Explore Conditional Dependence between Responses and Response Times. Psychometrika 2023; 88:830-864. PMID: 37316615. DOI: 10.1007/s11336-023-09920-x.

Abstract
Traditional measurement models assume that all item responses correlate with each other only through their underlying latent variables. This conditional independence assumption has been extended in joint models of responses and response times (RTs), implying that an item has the same item characteristics for all respondents regardless of their levels of latent ability/trait and speed. However, previous studies have shown that this assumption is violated in various types of tests and questionnaires, and that there are substantial interactions between respondents and items that cannot be captured by person- and item-effect parameters in psychometric models with the conditional independence assumption. To study the existence and potential cognitive sources of conditional dependence, and to utilize it to extract diagnostic information for respondents and items, we propose a diffusion item response theory model integrated with a latent space that captures variations in the information processing rate of within-individual measurement processes. Respondents and items are mapped onto the latent space, and their distances represent conditional dependence and unexplained interactions. We provide three empirical applications to illustrate (1) how to use an estimated latent space to inform conditional dependence and its relation to person and item measures, (2) how to derive diagnostic feedback personalized for respondents, and (3) how to validate estimated results with an external measure. We also provide a simulation study showing that the proposed approach can accurately recover its parameters and detect conditional dependence underlying data.

Affiliations
- Inhan Kang: Yonsei University, 403 Widang Hall, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Minjeong Jeon: University of California, Los Angeles, Los Angeles, CA, USA

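The latent space idea can be sketched as a person-item distance term that absorbs interactions left unexplained by the main person and item effects. A minimal illustration follows, assuming a two-dimensional space and a simple drift-rate parameterization (all names and the functional form are assumptions, not the authors' model equations):

```python
# Sketch of a latent-space person-item interaction: the distance between
# person position z_p and item position w_i dampens the drift rate.
import numpy as np

def drift_with_latent_space(theta, b, z, w, gamma=1.0):
    """theta: (P,) abilities; b: (I,) difficulties; z: (P, 2) person
    positions; w: (I, 2) item positions. Returns (P, I) drift rates."""
    dist = np.linalg.norm(z[:, None, :] - w[None, :, :], axis=-1)
    return theta[:, None] - b[None, :] - gamma * dist
```
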
5. Kang I, Molenaar D, Ratcliff R. A Modeling Framework to Examine Psychological Processes Underlying Ordinal Responses and Response Times of Psychometric Data. Psychometrika 2023; 88:940-974. PMID: 37171779. DOI: 10.1007/s11336-023-09902-z.

Abstract
This article presents a joint modeling framework of ordinal responses and response times (RTs) for the measurement of latent traits. We integrate cognitive theories of decision-making and confidence judgments with psychometric theories to model individual-level measurement processes. The model development starts with the sequential sampling framework, which assumes that when an item is presented, a respondent accumulates noisy evidence over time to respond to the item. Several cognitive and psychometric theories are reviewed and integrated, leading to three psychometric process models with different representations of the cognitive processes underlying measurement. We provide simulation studies that examine parameter recovery and show the relationships between latent variables and data distributions. We further test the proposed models with empirical data measuring three traits related to motivation. The results show that all three models provide reasonably good descriptions of observed response proportions and RT distributions. Also, different traits favor different process models, which implies that psychological measurement processes may have heterogeneous structures across traits. Our process of model building and examination illustrates how cognitive theories can be incorporated into psychometric model development to shed light on the measurement process, which has received little attention in traditional psychometric models.

Affiliations
- Inhan Kang: Yonsei University, 403 Widang Hall, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Roger Ratcliff: The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, USA

6. Gonthier C. Should Intelligence Tests Be Speeded or Unspeeded? A Brief Review of the Effects of Time Pressure on Response Processes and an Experimental Study with Raven's Matrices. Journal of Intelligence 2023; 11:120. PMID: 37367521. DOI: 10.3390/jintelligence11060120.

Abstract
Intelligence tests are often performed under time constraints for practical reasons, but the effects of time pressure on reasoning performance are poorly understood. The first part of this work provides a brief review of the major expected effects of time pressure, which include forcing participants to skip items, invoking a mental speed factor, constraining response times, qualitatively altering cognitive processing, affecting anxiety and motivation, and interacting with individual differences. The second part presents data collected with Raven's matrices under three conditions of speededness to provide further insight into the complex effects of time pressure, with three major findings. First, even mild time pressure (with enough time available for all participants to complete the task at a leisurely pace) induced speeding throughout the whole task, starting with the very first item, and participants sped up more than was actually required. Second, time pressure came with lower confidence, poorer strategy use, and a substantial decrease in accuracy (d = 0.35), even when controlling for response time at the item level, indicating a detrimental effect on cognitive processing beyond speeding. Third, time pressure disproportionately reduced response times for difficult items and for participants with high ability, working memory capacity, or need for cognition, although this did not differentially affect ability estimates. Overall, both the review and the empirical sections show that the effects of time pressure go well beyond forcing participants to speed or skip the last few items, and they make even mild time constraints inadvisable when attempting to measure maximal performance, especially for high-performing samples.

Affiliations
- Corentin Gonthier: Nantes Université, Laboratoire de Psychologie des Pays de la Loire (LPPL UR 4638), Chemin de la Censive du Tertre, 44312 Nantes, France

7. Krämer RJ, Koch M, Levacher J, Schmitz F. Testing Replicability and Generalizability of the Time on Task Effect. Journal of Intelligence 2023; 11:82. PMID: 37233332. DOI: 10.3390/jintelligence11050082.

Abstract
The time on task (ToT) effect describes the relationship between the time spent on a cognitive task and the probability of successful task completion. The effect has been shown to vary in size and direction across tests, and even within tests, depending on test-taker and item characteristics. Specifically, investing more time has a positive effect on response accuracy for difficult items and low-ability test-takers, but a negative effect for easy items and high-ability test-takers. The present study sought to test the replicability of this pattern of the ToT effect across samples independently drawn from the same populations of persons and items. Furthermore, its generalizability was tested in terms of differential correlations across ability tests. To this end, ToT effects were estimated for three different reasoning tests and one test measuring natural sciences knowledge in 10 comparable subsamples with a total N = 2640. Results for the subsamples were highly similar, demonstrating that ToT effects can be estimated with sufficient reliability. Generally, faster answers tended to be more accurate, suggesting a relatively effortless processing style. However, with increasing item difficulty and decreasing person ability, the effect flipped in the opposite direction, i.e., higher accuracy with longer processing times. This within-task moderation of the ToT effect can be reconciled with an account of effortful processing or cognitive load. By contrast, the generalizability of the ToT effect across different tests was only moderate. Cross-test relations were stronger if performance in the respective tasks was more strongly related. This suggests that individual differences in the ToT effect depend on test characteristics such as reliability, but also on similarities and differences in processing requirements.

Affiliations
- Raimund J. Krämer: Department of Psychology, University of Duisburg-Essen, Universitätsstraße 2, 45141 Essen, Germany
- Marco Koch: Individual Differences & Psychodiagnostics, Saarland University, Campus A1.3, 66123 Saarbrücken, Germany
- Julie Levacher: Individual Differences & Psychodiagnostics, Saarland University, Campus A1.3, 66123 Saarbrücken, Germany
- Florian Schmitz: Department of Psychology, University of Duisburg-Essen, Universitätsstraße 2, 45141 Essen, Germany

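A ToT effect is, in essence, a within-test regression of accuracy on response time, moderated by item difficulty and person ability. A much-simplified sketch of such an analysis follows (variable names are illustrative; the paper's actual models are more elaborate):

```python
# Simplified ToT-style analysis: logistic regression of accuracy on
# standardized log-RT plus its interactions with item difficulty and
# person ability.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tot_effect(correct, log_rt, difficulty, ability):
    """All inputs are flat arrays over person-item pairs."""
    z = (log_rt - log_rt.mean()) / log_rt.std()
    X = np.column_stack([z, difficulty, ability, z * difficulty, z * ability])
    model = LogisticRegression().fit(X, correct)
    return model.coef_[0]  # interaction signs indicate the flip of the effect
```
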
8. Kang I, De Boeck P, Ratcliff R. Modeling Conditional Dependence of Response Accuracy and Response Time with the Diffusion Item Response Theory Model. Psychometrika 2022; 87:725-748. PMID: 34988775. PMCID: PMC9677523. DOI: 10.1007/s11336-021-09819-5.

Abstract
In this paper, we propose a model-based method to study conditional dependence between response accuracy and response time (RT) with the diffusion IRT model (Tuerlinckx and De Boeck, Psychometrika 70(4):629-650, 2005, https://doi.org/10.1007/s11336-000-0810-3; van der Maas et al., Psychol Rev 118(2):339-356, 2011, https://doi.org/10.1037/a0022749). We extend the earlier diffusion IRT model by introducing variability across persons and items in cognitive capacity (the drift rate of the evidence accumulation process) and variability in the starting point of the decision processes. We show that the extended model can explain the behavioral patterns of conditional dependency found in previous psychometric studies. Variability in cognitive capacity can predict positive and negative conditional dependency and their interaction with item difficulty. Variability in starting point can account for early changes in response accuracy as a function of RT, given the person and item effects. By combining the two variability components, the extended model can produce the curvilinear conditional accuracy functions that have been observed in psychometric data. We also provide a simulation study to validate the parameter recovery of the proposed model, and we present two empirical applications showing how to implement the model to study the conditional dependency underlying response accuracy and RT data.

Affiliations
- Inhan Kang: The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, USA
- Paul De Boeck: The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, USA
- Roger Ratcliff: The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, USA

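The mechanism described here, drift-rate and starting-point variability producing conditional dependence, can be made concrete with a small simulator. Below is a minimal Euler-scheme sketch of a Wiener diffusion process with both variability components (parameter values are arbitrary; this is not the authors' code):

```python
# Euler-scheme simulation of a Wiener diffusion with across-trial drift
# variability (sv) and starting-point variability (sz), the two components
# the abstract links to conditional dependence.
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(v=1.0, a=2.0, sv=0.5, sz=0.3, sigma=1.0, dt=0.001):
    """Returns (correct, rt) for one simulated trial."""
    drift = rng.normal(v, sv)                  # trial-level drift
    x = a / 2 + rng.uniform(-sz / 2, sz / 2)   # trial-level starting point
    t = 0.0
    while 0.0 < x < a:                         # accumulate to a boundary
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return int(x >= a), t

trials = [simulate_trial() for _ in range(200)]
```
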
9. Kang I, De Boeck P, Partchev I. A randomness perspective on intelligence processes. Intelligence 2022. DOI: 10.1016/j.intell.2022.101632.

10. Zhang S, Bergner Y, DiTrapani J, Jeon M. Modeling the interaction between resilience and ability in assessments with allowances for multiple attempts. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106847.

11. Sun T, Zhang B, Cao M, Drasgow F. Faking Detection Improved: Adopting a Likert Item Response Process Tree Model. Organizational Research Methods 2021. DOI: 10.1177/10944281211002904.

Abstract
With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced-faking experimental design. Study 2 examined self-reported personality assessment responses when some respondents were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.

Affiliations
- Tianjun Sun: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Bo Zhang: Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, USA
- Mengyang Cao: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Fritz Drasgow: Department of Psychology and School of Labor and Employment Relations, University of Illinois at Urbana-Champaign, Champaign, IL, USA

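The classification step at the end of the abstract can be illustrated with an extreme-responding index and a percentile cutoff. A minimal sketch follows (the scale width, threshold, and function name are assumptions, not the paper's calibrated cutoffs):

```python
# Flagging potential fakers via the proportion of extreme Likert categories,
# with a percentile cutoff.
import numpy as np

def flag_extreme_responders(resp, k=5, pct=90):
    """resp: (persons, items) responses on a 1..k scale; returns bool flags."""
    extreme = np.mean((resp == 1) | (resp == k), axis=1)
    return extreme >= np.percentile(extreme, pct)
```
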
12. Plieninger H. Developing and Applying IR-Tree Models: Guidelines, Caveats, and an Extension to Multiple Groups. Organizational Research Methods 2020. DOI: 10.1177/1094428120911096.

Abstract
IR-tree models assume that categorical item responses can best be explained by multiple response processes. In the present article, guidelines are provided for the development and interpretation of IR-tree models. In particular, the relationship between a tree diagram, the model equations, and the analysis on the basis of pseudo-items is described. Moreover, it is shown that IR-tree models do not allow conclusions about the sequential order of the processes, and that mistakes in the model specification can have serious consequences. Furthermore, multiple-group IR-tree models are presented as a novel extension of IR-tree models to data from heterogeneous units. This makes it possible, for example, to investigate differences across countries or organizations in core parameters of the IR-tree model. Finally, an empirical example on organizational commitment and response styles is presented.

Affiliations
- Hansjörg Plieninger: School of Social Sciences, Department of Psychology, University of Mannheim, Mannheim, Germany

13.

Abstract
Various mixture modeling approaches have been proposed to identify within-subjects differences in the psychological processes underlying responses to psychometric tests. Although valuable, the existing mixture models are associated with at least one of the following three challenges: (1) A parametric distribution is assumed for the response times that—if violated—may bias the results; (2) the response processes are assumed to result in equal variances (homoscedasticity) in the response times, whereas some processes may produce more variability than others (heteroscedasticity); and (3) the different response processes are modeled as independent latent variables, whereas they may be related. Although each of these challenges has been addressed separately, in practice they may occur simultaneously. Therefore, we propose a heteroscedastic hidden Markov mixture model for responses and categorized response times that addresses all the challenges above in a single model. In a simulation study, we demonstrated that the model is associated with acceptable parameter recovery and acceptable resolution to distinguish between various special cases. In addition, the model was applied to the responses and response times of the WAIS-IV block design subtest, to demonstrate its use in practice.
14. Blacksmith N, Yang Y, Behrend TS, Ruark GA. Assessing the validity of inferences from scores on the cognitive reflection test. Journal of Behavioral Decision Making 2019. DOI: 10.1002/bdm.2133.

Affiliations
- Nikki Blacksmith: Consortium for Research Fellows Program, Consortium of Universities of the Washington Metropolitan Area, Alexandria, VA, USA; Organizational Sciences and Communication Department, The George Washington University, Washington, DC, USA
- Yongwei Yang: Department of Analytics, Google, Inc., Mountain View, CA, USA
- Tara S. Behrend: Organizational Sciences and Communication Department, The George Washington University, Washington, DC, USA
- Gregory A. Ruark: Foundational Science Research Unit, U.S. Army Research Institute for the Behavioral and Social Sciences, Fort Belvoir, VA, USA

15. De Boeck P, Jeon M. An Overview of Models for Response Times and Processes in Cognitive Tests. Frontiers in Psychology 2019; 10:102. PMID: 30787891. PMCID: PMC6372526. DOI: 10.3389/fpsyg.2019.00102.

Abstract
Response times (RTs) are a natural kind of data for investigating the cognitive processes underlying cognitive test performance. We give an overview of modeling approaches and of findings obtained with these approaches. Four types of models are discussed: response time models (RT as the sole dependent variable), joint models (RT together with other variables as dependent variables), local dependency models (with remaining dependencies between RT and accuracy), and response-time-as-covariate models (RT as independent variable). The evidence from these approaches is often not very informative about the specific kind of processes (other than problem solving, information accumulation, and rapid guessing), but the findings do suggest dual processing: automated processing (e.g., knowledge retrieval) vs. controlled processing (e.g., sequential reasoning steps). Alternative explanations for the same results also exist. While it seems quite possible to differentiate rapid guessing from normal problem solving (which can be based on automated or controlled processing), further decompositions of response times are rarely made, although they would be possible with some of the modeling approaches.

Affiliations
- Paul De Boeck: Department of Psychology, Ohio State University, Columbus, OH, USA; KU Leuven, Leuven, Belgium
- Minjeong Jeon: Graduate School of Education and Information Studies, University of California, Los Angeles, Los Angeles, CA, USA

16. Bolsinova M, Molenaar D. Nonlinear Indicator-Level Moderation in Latent Variable Models. Multivariate Behavioral Research 2019; 54:62-84. PMID: 30513219. DOI: 10.1080/00273171.2018.1486174.

Abstract
Linear, nonlinear, and nonparametric moderated latent variable models have been developed to investigate possible interaction effects between a latent variable and an external continuous moderator on the observed indicators in the latent variable model. Most moderation models have focused on moderators that vary across persons but not across indicators (e.g., age and socioeconomic status). However, in many applications the values of the moderator may vary both across persons and across indicators (e.g., response times and confidence ratings). Indicator-level moderation models are available for categorical moderators and for linear interaction effects. However, these approaches require, respectively, categorization of the continuous moderator and the assumption of a linear interaction effect. In this article, parametric nonlinear and nonparametric indicator-level moderation methods are developed. In a simulation study, we demonstrate the viability of these methods. In addition, the methods are applied to a real data set pertaining to arithmetic ability.
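Indicator-level moderation means that an item's intercept and loading are functions of a moderator that varies per person-item pair. A minimal data-generating sketch with quadratic (nonlinear) moderation follows (the functional form and coefficients are illustrative assumptions):

```python
# Data-generating sketch for one indicator whose intercept and loading are
# quadratic in a person-by-item moderator m (e.g., a residual log-RT).
import numpy as np

rng = np.random.default_rng(7)

def generate_indicator(eta, m, nu=0.0, lam=1.0,
                       b1=0.2, b2=-0.1, c1=0.3, c2=-0.05):
    """eta: (P,) latent trait; m: (P,) moderator values for this indicator."""
    intercept = nu + b1 * m + b2 * m**2
    loading = lam + c1 * m + c2 * m**2
    return intercept + loading * eta + rng.normal(0.0, 0.5, size=eta.shape)
```
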
17. Hofman AD, Visser I, Jansen BR, Marsman M, van der Maas HL. Fast and slow strategies in multiplication. Learning and Individual Differences 2018. DOI: 10.1016/j.lindif.2018.09.007.

18. Bolsinova M, Molenaar D. Modeling Nonlinear Conditional Dependence Between Response Time and Accuracy. Frontiers in Psychology 2018; 9:1525. PMID: 30245650. PMCID: PMC6137682. DOI: 10.3389/fpsyg.2018.01525.

Abstract
Response time is the most common process variable available for analysis when tests are presented in computerized form. Psychometric models have been developed for the joint modeling of response accuracy and response time, in which response time is an additional source of information about ability and about the underlying response processes. While traditional models assume conditional independence between response time and accuracy given the ability and speed latent variables (van der Linden, 2007), multiple recent studies (De Boeck and Partchev, 2012; Meng et al., 2015; Bolsinova et al., 2017a,b) have shown that violations of conditional independence are not rare and that there is more to learn from the conditional dependence between response time and accuracy. When it comes to conditional dependence between time and accuracy, authors typically focus on positive conditional dependence (i.e., relatively slow responses are more often correct) and negative conditional dependence (i.e., relatively fast responses are more often correct), which implies monotone conditional dependence. Moreover, most existing models specify the relationship to be linear. However, the assumption of monotone and linear conditional dependence does not necessarily hold in practice, and assuming linearity might distort conclusions about the relationship between time and accuracy. In this paper we develop methods for exploring nonlinear conditional dependence between response time and accuracy. Three different approaches are proposed: (1) a joint model for quadratic conditional dependence, developed as an extension of the response moderation models for time and accuracy (Bolsinova et al., 2017b); (2) a joint model for multiple-category conditional dependence, developed as an extension of the fast-slow model of Partchev and De Boeck (2012); (3) an indicator-level nonparametric moderation method (Bolsinova and Molenaar, in press), used with residual log response time as a predictor of the item intercept and item slope. Furthermore, we propose using nonparametric moderation to evaluate the viability of the assumption of linear conditional dependence by performing posterior predictive checks for the linear conditional dependence model. The developed methods are illustrated using data from an educational test in which, for the majority of the items, conditional dependence is shown to be nonlinear.
19. Molenaar D, de Boeck P. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times. Psychometrika 2018; 83:279-297. PMID: 29392567. DOI: 10.1007/s11336-017-9602-9.

Abstract
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

Affiliations
- Dylan Molenaar: Psychological Methods, Department of Psychology, University of Amsterdam, PO Box 15906, 1001 NK Amsterdam, The Netherlands

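Response mixture modeling classifies individual responses rather than whole response vectors. Below is a toy sketch of a response-level class posterior whose prior weight depends on response time (the logistic link and all names are assumptions, not the paper's estimation method):

```python
# Toy response-level class posterior for a two-class response mixture in
# which the prior weight of class 1 increases with response time.
import numpy as np

def response_class_posterior(log_rt, lik1, lik2):
    """log_rt, lik1, lik2: (persons, items) arrays; lik1/lik2 are the
    likelihoods of each observed response under class 1 / class 2."""
    w = 1.0 / (1.0 + np.exp(-(log_rt - log_rt.mean())))  # RT-dependent prior
    return w * lik1 / (w * lik1 + (1.0 - w) * lik2)      # element-wise posterior
```
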
21. Molenaar D, Bolsinova M, Vermunt JK. A semi-parametric within-subject mixture approach to the analyses of responses and response times. British Journal of Mathematical and Statistical Psychology 2018; 71:205-228. PMID: 29044460. DOI: 10.1111/bmsp.12117.

Abstract
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed that resorts to categorized response times. This approach is shown to produce hardly any false positives or parameter bias. In addition, the semi-parametric approach yields approximately the same power as the parametric approach.
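The semi-parametric device is the replacement of raw response times with within-person RT categories. A minimal sketch follows (the bin count is an arbitrary illustrative choice):

```python
# Replacing raw RTs with within-person quantile categories, avoiding a
# parametric assumption on the RT distribution.
import numpy as np

def categorize_rts(rt, n_bins=5):
    """rt: (persons, items); returns integer codes 0..n_bins-1 based on
    each person's own empirical RT quantiles."""
    probs = np.linspace(0, 1, n_bins + 1)[1:-1]
    cuts = np.quantile(rt, probs, axis=1).T          # (persons, n_bins-1)
    return np.array([np.searchsorted(cuts[p], rt[p])
                     for p in range(rt.shape[0])])
```
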
22. De Boeck P, Chen H, Davison M. Spontaneous and imposed speed of cognitive test responses. British Journal of Mathematical and Statistical Psychology 2017; 70:225-237. PMID: 28474767. DOI: 10.1111/bmsp.12094.

Abstract
Based on data from a cognitive test presented in a condition with time constraints per item and a condition without time constraints, the effect of speed on accuracy is investigated. First, if the effect of imposed speed on accuracy is negative, it can be explained by the speed-accuracy trade-off, and if it can be captured through the corresponding latent variables, then measurement invariance applies between the conditions with and without time constraints. The results do show a negative effect and a lack of measurement invariance. Second, the conditional accuracy function (CAF) is investigated in both conditions. The CAF shows an (item-dependent) negative conditional dependence between response time and response accuracy, and thus a positive relationship between speed and accuracy, which implies that faster responses are more accurate. In sum, there seem to be two kinds of speed effects: a speed-accuracy trade-off effect induced by imposed speed, and an opposite CAF effect associated with speed within conditions. The second effect is interpreted as stemming from within-person variation of cognitive capacity during the test, which simultaneously favours or disfavours speed and accuracy.

Affiliations
- Paul De Boeck: Ohio State University, Columbus, OH, USA; KU Leuven, Leuven, Belgium
- Haiqin Chen: American Dental Association, Chicago, IL, USA
- Mark Davison: University of Minnesota, Minneapolis, MN, USA

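A conditional accuracy function (CAF) plots accuracy against binned response time. A minimal empirical sketch for a single item follows (real analyses would first residualize RTs for person speed; names and bin count are illustrative):

```python
# Empirical conditional accuracy function (CAF) for one item: accuracy as
# a function of response time bins.
import numpy as np

def conditional_accuracy(correct, log_rt, n_bins=6):
    """correct, log_rt: 1-D arrays over persons for a single item."""
    edges = np.quantile(log_rt, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, log_rt, side="right") - 1,
                   0, n_bins - 1)
    return np.array([correct[bins == b].mean() for b in range(n_bins)])
```
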
23. Bolsinova M, Tijmstra J, Molenaar D, De Boeck P. Conditional Dependence between Response Time and Accuracy: An Overview of its Possible Sources and Directions for Distinguishing between Them. Frontiers in Psychology 2017; 8:202. PMID: 28261136. PMCID: PMC5312167. DOI: 10.3389/fpsyg.2017.00202.

Abstract
With the widespread use of computerized tests in educational measurement and cognitive psychology, registration of response times has become feasible in many applications. Considering these response times helps provide a more complete picture of the performance and characteristics of persons beyond what is available based on response accuracy alone. Statistical models such as the hierarchical model (van der Linden, 2007) have been proposed that jointly model response time and accuracy. However, these models make restrictive assumptions about the response processes (RPs) that may not be realistic in practice, such as the assumption that the association between response time and accuracy is fully explained by taking speed and ability into account (conditional independence). Assuming conditional independence forces one to ignore that many relevant individual differences may play a role in the RPs beyond overall speed and ability. In this paper, we critically consider the assumption of conditional independence and the important ways in which it may be violated in practice from a substantive perspective. We consider both conditional dependences that may arise when all persons attempt to solve the items in similar ways (homogeneous RPs) and those that may be due to persons differing in fundamental ways in how they deal with the items (heterogeneous processes). The paper provides an overview of what we can learn from observed conditional dependences. We argue that explaining and modeling these differences in the RPs is crucial to increase both the validity of measurement and our understanding of the relevant RPs.

Affiliations
- Maria Bolsinova: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Jesper Tijmstra: Department of Methodology and Statistics, Tilburg University, Tilburg, Netherlands
- Dylan Molenaar: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Paul De Boeck: Department of Psychology, Ohio State University, Columbus, OH, USA; Department of Psychology, KU Leuven, Leuven, Belgium

25. Response Mixture Modeling of Intraindividual Differences in Responses and Response Times to the Hungarian WISC-IV Block Design Test. Journal of Intelligence 2016; 4:10. DOI: 10.3390/jintelligence4030010.