1
Kuzmina Y, Marakshina J, Lobaskova M, Zakharov I, Tikhomirova T, Malykh S. The Interaction between Congruency and Numerical Ratio Effects in the Nonsymbolic Comparison Test. Behav Sci (Basel) 2023; 13:983. [PMID: 38131839] [PMCID: PMC10740770] [DOI: 10.3390/bs13120983]
Abstract
The nonsymbolic comparison task is used to investigate the precision of the Approximate Number Sense (ANS), the ability to process discrete numerosity without counting or symbols. There is an ongoing debate regarding the extent to which the ANS is influenced by the processing of non-numerical visual cues. To address this question, we assessed the congruency effect in a nonsymbolic comparison task, examining its variability across different stimulus presentation formats and numerical proportions. Additionally, we examined how the numerical ratio effect varied with format and congruency. Utilizing generalized linear mixed-effects models with a sample of 290 students (89% female, mean age 19.33 years), we estimated the congruency effect and the numerical ratio effect for separated and intermixed formats of stimulus presentation, and for small and large numerical proportions. The findings indicated that the congruency effect increased in the large numerical proportion condition, but this pattern was observed only in the separated format. In the intermixed format, the congruency effect was not significant for either type of numerical proportion. Notably, the numerical ratio effect differed between congruent and incongruent trials across formats. These results suggest that the processing of non-numerical visual parameters may become crucial when numerosity processing becomes noisier, specifically when the numerical proportion becomes larger. The implications of these findings for refining ANS theory are discussed.
Affiliation(s)
- Sergey Malykh
- Psychological Institute of Russian Academy of Education, 125009 Moscow, Russia; (Y.K.); (J.M.); (M.L.); (I.Z.); (T.T.)
2
Kang I, Molenaar D, Ratcliff R. A Modeling Framework to Examine Psychological Processes Underlying Ordinal Responses and Response Times of Psychometric Data. Psychometrika 2023; 88:940-974. [PMID: 37171779] [DOI: 10.1007/s11336-023-09902-z]
Abstract
This article presents a joint modeling framework of ordinal responses and response times (RTs) for the measurement of latent traits. We integrate cognitive theories of decision-making and confidence judgments with psychometric theories to model individual-level measurement processes. The model development starts with the sequential sampling framework which assumes that when an item is presented, a respondent accumulates noisy evidence over time to respond to the item. Several cognitive and psychometric theories are reviewed and integrated, leading us to three psychometric process models with different representations of the cognitive processes underlying the measurement. We provide simulation studies that examine parameter recovery and show the relationships between latent variables and data distributions. We further test the proposed models with empirical data measuring three traits related to motivation. The results show that all three models provide reasonably good descriptions of observed response proportions and RT distributions. Also, different traits favor different process models, which implies that psychological measurement processes may have heterogeneous structures across traits. Our process of model building and examination illustrates how cognitive theories can be incorporated into psychometric model development to shed light on the measurement process, which has had little attention in traditional psychometric models.
Affiliation(s)
- Inhan Kang
- Yonsei University, 403 Widang Hall, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.
- Roger Ratcliff
- The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, USA
3
Stimulus-response congruency effects depend on quality of perceptual evidence: A diffusion model account. Atten Percept Psychophys 2023; 85:1335-1354. [PMID: 36725783] [DOI: 10.3758/s13414-022-02642-9]
Abstract
Individuals often need to make quick decisions based on incomplete or "noisy" information, which requires the coordination of attentional, perceptual, cognitive, and behavioral mechanisms. This poses a challenge for isolating the unique effects of each subprocess from behavioral data, which reflect the summation of all subprocesses combined. Sequential sampling models offer a more detailed examination of behavioral data, enabling us to separate the decisional and non-decisional processes at play in a task. Participants were required to identify briefly presented shapes while perceptual (duration, size, location) and response features (location-congruent, -incongruent, or -neutral) of the task were manipulated. The diffusion model (Ratcliff, 1978) was used to dissociate decisional and executive processes in the task. In Experiment 1, stimuli were presented for either 20 or 80 ms to the left or right of a central fixation while response keys were positioned horizontally. In Experiment 2, stimulus size was manipulated rather than duration. In Experiment 3, response keys were positioned vertically. Results showed a duration × response-mapping interaction: participants displayed stimulus-response (S-R) congruency biases only on short-duration trials. This effect was observed for both horizontal and vertical response key mappings. Stimulus size affected response speed but did not elicit S-R congruency biases. The present findings show that when the perceptual quality of evidence is poor, individuals rely more heavily on spatial-motor mechanisms when making speeded choice decisions. Furthermore, positioning response keys vertically is insufficient to eliminate S-R congruency effects. Diffusion model parameters are presented and implications of the model are discussed.
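The evidence-accumulation process that the diffusion model formalizes can be simulated directly. The sketch below is illustrative only, not this paper's implementation: the function name and all parameter values (boundary, relative starting point, non-decision time) are assumptions chosen for the example.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, start=0.5, ndt=0.3,
                 noise=1.0, dt=0.001, rng=None):
    """Simulate one trial of a basic diffusion decision model.

    Evidence starts at `start * boundary` and accumulates with mean
    rate `drift` until it hits 0 or `boundary`; non-decision time
    `ndt` is added to the first-passage (decision) time.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x = start * boundary
    t = 0.0
    sdt = noise * np.sqrt(dt)
    while 0.0 < x < boundary:
        x += drift * dt + sdt * rng.standard_normal()
        t += dt
    choice = 1 if x >= boundary else 0  # 1 = upper (correct) boundary
    return choice, ndt + t

rng = np.random.default_rng(1)
trials = [simulate_ddm(drift=1.5, rng=rng) for _ in range(500)]
accuracy = np.mean([c for c, _ in trials])
```

With a positive drift rate, most trials terminate at the upper boundary, and every response time includes the non-decision component, which is how the model separates decisional from non-decisional processes.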
4
Do Q, Li Y, Kane GA, McGuire JT, Scott BB. Assessing evidence accumulation and rule learning in humans with an online game. J Neurophysiol 2023; 129:131-143. [PMID: 36475830] [DOI: 10.1152/jn.00124.2022]
Abstract
Evidence accumulation, an essential component of perception and decision making, is frequently studied with psychophysical tasks involving noisy or ambiguous stimuli. In these tasks, participants typically receive verbal or written instructions that describe the strategy that should be used to guide decisions. Although convenient and effective, explicit instructions can influence learning and decision-making strategies and can limit comparisons with animal models, in which behaviors are reinforced through feedback. Here, we developed an online video game and nonverbal training pipeline, inspired by pulse-based tasks for rodents, as an alternative to traditional psychophysical tasks used to study evidence accumulation. Using this game, we collected behavioral data from hundreds of participants trained with an explicit description of the decision rule or with experiential feedback. Participants trained with feedback alone learned the game rules rapidly, used similar strategies, and displayed biases similar to those who received explicit instructions. Finally, by leveraging data across hundreds of participants, we show that perceptual judgments were well described by an accumulation process in which noise scaled nonlinearly with evidence, consistent with previous animal studies but inconsistent with diffusion models widely used to describe perceptual decisions in humans. These results challenge the conventional description of the accumulation process and suggest that online games provide a valuable platform to examine perceptual decision making and learning in humans. In addition, the feedback-based training pipeline developed for this game may be useful for evaluating perceptual decision making in human populations with difficulty following verbal instructions.

New & Noteworthy: Perceptual uncertainty sets critical constraints on our ability to accumulate evidence and make decisions; however, its sources remain unclear. We developed a video game and feedback-based training pipeline to study uncertainty during decision making. Leveraging choices from hundreds of subjects, we demonstrate that human choices are inconsistent with popular diffusion models of human decision making and instead are best fit by models in which perceptual uncertainty scales nonlinearly with the strength of sensory evidence.
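The contrast the abstract draws, constant accumulation noise (as in standard diffusion models) versus noise that scales nonlinearly with evidence strength, can be sketched with a toy pulse accumulator. The function and its exponent parameterization are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def accumulate(pulses, noise_exponent=0.0, base_sd=1.0, rng=None):
    """Sum noisy evidence pulses. The SD of the noise added to each
    pulse grows with pulse magnitude as |pulse| ** noise_exponent:
    exponent 0 gives the constant noise of standard diffusion models;
    exponent > 0 gives noise that scales with evidence strength."""
    rng = rng if rng is not None else np.random.default_rng()
    pulses = np.asarray(pulses, dtype=float)
    sd = base_sd * np.abs(pulses) ** noise_exponent
    return float(np.sum(pulses + sd * rng.standard_normal(len(pulses))))

rng = np.random.default_rng(0)
pulses = [1.0, 2.0, 4.0]
# Variability of the accumulated total under each noise regime
const_sd = np.std([accumulate(pulses, 0.0, rng=rng) for _ in range(2000)])
scaled_sd = np.std([accumulate(pulses, 1.0, rng=rng) for _ in range(2000)])
```

Under scaled noise, strong evidence pulses carry disproportionately more variability, so the accumulated total is noisier than the constant-noise case for the same pulse train.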
Affiliation(s)
- Quan Do
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Yutong Li
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Gary A Kane
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Joseph T McGuire
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Benjamin B Scott
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
5
Ratcliff R. Integrated diffusion models for distance effects in number memory. Cogn Psychol 2022; 138:101516. [PMID: 36115086] [PMCID: PMC9732934] [DOI: 10.1016/j.cogpsych.2022.101516]
Abstract
I evaluated three models for the representation of numbers in memory. These were integrated with the diffusion decision model to explain accuracy and response time (RT) data from a recognition memory experiment in which the stimuli were two-digit numbers. The integrated models accounted for distance/confusability effects: when a test number was numerically close to a studied number, accuracy was lower and RTs were longer than when a test number was numerically far from a studied number. For two of the models, the representations of numbers are distributed over number (with Gaussian or exponential distributions), and the overlap between the distributions of a studied number and a test number provides the evidence (drift rate) on which a decision is made. For the third, the exponential gradient model, drift rate is an exponential function of the numerical distance between studied and test numbers. The exponential gradient model fit the data slightly better than the two overlap models. Monte Carlo simulations showed that the variability in the important parameter estimates from fitting data collected over 30-40 min is smaller than the variability among individuals, allowing differences among individuals to be studied. A second experiment compared number memory and number discrimination tasks; the two tasks showed different distance effects, with an exponential-like distance effect in number memory and a linear one in number discrimination, which suggests that radically different representations drive the two tasks.
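The exponential gradient model's core assumption, a drift rate that falls off exponentially with numerical distance, can be written in a few lines. This is a schematic sketch with made-up parameter values (peak, decay, baseline), not the fitted model from the paper.

```python
import math

def exponential_gradient_drift(studied, test, peak=0.4, decay=0.2,
                               baseline=-0.2):
    """Drift rate for an 'old' decision: maximal when the test number
    matches the studied number, decaying exponentially with numerical
    distance toward a negative baseline that drives 'new' responses
    for distant probes."""
    distance = abs(studied - test)
    return baseline + (peak - baseline) * math.exp(-decay * distance)

match = exponential_gradient_drift(53, 53)  # identical probe
near = exponential_gradient_drift(53, 55)   # numerically close probe
far = exponential_gradient_drift(53, 90)    # distant probe
```

Close probes receive weaker positive drift than exact matches (producing the distance/confusability effect), while distant probes yield negative drift and fast, accurate "new" responses.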
6
Kang I, De Boeck P, Ratcliff R. Modeling Conditional Dependence of Response Accuracy and Response Time with the Diffusion Item Response Theory Model. Psychometrika 2022; 87:725-748. [PMID: 34988775] [PMCID: PMC9677523] [DOI: 10.1007/s11336-021-09819-5]
Abstract
In this paper, we propose a model-based method to study conditional dependence between response accuracy and response time (RT) with the diffusion IRT model (Tuerlinckx and De Boeck, Psychometrika 70(4):629-650, 2005, https://doi.org/10.1007/s11336-000-0810-3; van der Maas et al., Psychol Rev 118(2):339-356, 2011, https://doi.org/10.1080/20445911.2011.454498). We extend the earlier diffusion IRT model by introducing variability across persons and items in cognitive capacity (drift rate in the evidence accumulation process) and variability in the starting point of the decision processes. We show that the extended model can explain the behavioral patterns of conditional dependency found in previous psychometric studies. Variability in cognitive capacity can predict positive and negative conditional dependency and their interaction with item difficulty. Variability in starting point can account for the early changes in response accuracy as a function of RT given the person and item effects. By combining the two variability components, the extended model can produce the curvilinear conditional accuracy functions that have been observed in psychometric data. We also provide a simulation study to validate the parameter recovery of the proposed model and present two empirical applications to show how to implement the model to study the conditional dependency underlying response accuracy and RTs.
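The mechanism described here, across-trial variability in drift producing conditional dependence of accuracy on RT, can be demonstrated with a small simulation. This is an illustrative sketch (all parameter values are assumptions), not the paper's diffusion IRT model.

```python
import numpy as np

def simulate_trial(v, rng, a=1.0, z=0.5, dt=0.002):
    """One diffusion trial: evidence starts at z * a and drifts at
    rate v until it hits 0 or a; returns (correct, decision time)."""
    x, t = z * a, 0.0
    sdt = np.sqrt(dt)
    while 0.0 < x < a:
        x += v * dt + sdt * rng.standard_normal()
        t += dt
    return x >= a, t

rng = np.random.default_rng(7)
# Drift varies across trials (variability in cognitive capacity), so
# slow responses tend to come from low-drift trials and are less
# accurate: accuracy becomes conditionally dependent on RT.
trials = [simulate_trial(rng.normal(1.0, 0.8), rng) for _ in range(2000)]
correct = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
fast = rts <= np.median(rts)
fast_acc, slow_acc = correct[fast].mean(), correct[~fast].mean()
```

Without drift variability, accuracy in an unbiased diffusion model is independent of RT; adding it reproduces the slow-error pattern, with accuracy declining for the slower half of responses.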
Affiliation(s)
- Inhan Kang
- The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA.
- Paul De Boeck
- The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Roger Ratcliff
- The Ohio State University, 291 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
7
Do data from Mechanical Turk subjects replicate accuracy, response time, and diffusion modeling results? Behav Res Methods 2021; 53:2302-2325. [PMID: 33825128] [DOI: 10.3758/s13428-021-01573-x]
Abstract
Online data collection is being used more and more, especially in the face of the COVID crisis. To examine the quality of such data, we chose to replicate lexical decision and item recognition paradigms from Ratcliff et al. (Cognitive Psychology, 60, 127-157, 2010) and numerosity discrimination paradigms from Ratcliff and McKoon (Psychological Review, 125, 183-217, 2018) with subjects recruited from Amazon Mechanical Turk (AMT). Along with these tasks, we collected data from either an IQ test or a math computation test. Subjects in the lexical decision and item recognition tasks were relatively well-behaved, with only a few giving a significant number of responses with response times (RTs) under 300 ms at chance accuracy (i.e., fast guesses) and a few showing unstable RTs across a session. But in the numerosity discrimination tasks, almost half of the subjects gave a significant number of fast guesses and/or unstable RTs across the session. Diffusion model parameters were largely consistent with the earlier studies, as were correlations across tasks and correlations with IQ and age. One surprising result was that eliminating fast outliers from subjects with highly variable RTs (those eliminated from the main analyses) produced diffusion model analyses that showed patterns of correlations similar to those of the subjects with stable performance. Methods for displaying data to examine stability, for eliminating subjects, and for implementing RT data collection on AMT, including checks on timing, are also discussed.
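A simple screen for the fast-guess pattern described above (a sizable share of sub-300-ms responses at chance accuracy) might look like the following; the thresholds are illustrative, not the paper's exact criteria.

```python
import numpy as np

def has_fast_guesses(rts, correct, cutoff=0.300, chance=0.5,
                     min_prop=0.10, tolerance=0.10):
    """Flag a subject when at least `min_prop` of responses are faster
    than `cutoff` seconds and accuracy on those fast trials is within
    `tolerance` of `chance`."""
    rts = np.asarray(rts)
    correct = np.asarray(correct, dtype=float)
    fast = rts < cutoff
    if fast.mean() < min_prop:
        return False
    return bool(abs(correct[fast].mean() - chance) <= tolerance)

# A subject with 20% sub-300-ms responses at 50% accuracy is flagged;
# a subject with no fast responses is not.
guesser = has_fast_guesses([0.25] * 20 + [0.60] * 80,
                           [1, 0] * 10 + [1] * 80)
careful = has_fast_guesses([0.55] * 100, [1] * 90 + [0] * 10)
```

Keying the flag on the joint pattern (fast *and* at chance) avoids discarding subjects who are simply quick but accurate.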
8
Ratcliff R, McKoon G. Examining aging and numerosity using an integrated diffusion model. J Exp Psychol Learn Mem Cogn 2020; 46:2128-2152. [PMID: 32730057] [PMCID: PMC8054446] [DOI: 10.1037/xlm0000937]
Abstract
Two experiments are presented that use tasks common in research on numerical cognition, with young adults and older adults as subjects. In these tasks, one or two arrays of dots are displayed, and subjects decide whether there are more or fewer dots of one kind than another. Results show that older adults, relative to young adults, tend to rely more on the perceptual feature of area in making numerosity judgments when area is correlated with numerosity. Also, convex hull unexpectedly shows different effects depending on the task (being either correlated or anticorrelated with numerosity). Accuracy and response time (RT) data are interpreted by integrating the diffusion decision model with models for the representation of numerosity. One model assumes that the representation of the difference depends on the difference between the numerosities and that standard deviations (SDs) increase linearly with numerosity; the other model assumes a log representation with constant SDs. The representational models have coefficients that are applied to differences between two numerosities to produce drift rates and SDs in drift rates in the decision process. The two tasks produce qualitatively different patterns of RTs: one model fits results from one task, but the results are mixed for the other task. The effects of age on model parameters show a modest decrease in the evidence driving the decision process, an increase in the duration of processes outside the decision process (nondecision time), and an increase in the amount of evidence needed to make a decision (boundary separation).
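The two representational models being compared can be sketched as drift-rate functions; the coefficients and the exact parameterization below are illustrative assumptions, not the fitted values from the paper.

```python
import math

def drift_linear_sd(n1, n2, k=0.05, w=0.01):
    """Difference representation: drift scales with n1 - n2, and the
    SD of the representation grows linearly with numerosity (scalar
    variability). Returns (drift rate, SD in drift rate)."""
    return k * (n1 - n2), w * (n1 + n2)

def drift_log(n1, n2, k=1.0, sd=0.15):
    """Log representation: drift scales with log(n1) - log(n2) with
    constant SD, so pairs with equal ratios are equally
    discriminable. Returns (drift rate, SD in drift rate)."""
    return k * (math.log(n1) - math.log(n2)), sd

# Equal-ratio pairs: identical drift under the log model, but a larger
# raw drift and more noise for the larger pair under the linear-SD model.
small_pair = drift_log(20, 10)
large_pair = drift_log(40, 20)
```

The qualitative signature distinguishing the models is visible here: the log model treats 20 vs. 10 and 40 vs. 20 identically, while the linear-SD model gives the larger pair a bigger drift but proportionally more representational noise.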