1. Describing and understanding the time course of the property listing task. Cogn Process 2024; 25:61-74. PMID: 37715827. DOI: 10.1007/s10339-023-01160-2.
Abstract
To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list "is a pet", "has four legs", etc.). When PLT data are collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, only recently has a model been developed and validated that describes the listing course of a PLT. That original model describes the listing course using the order in which properties are produced. Here we go a step further and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibit the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to research related cognitive processes. To illustrate these kinds of analyses, we present a brief example of the difference in the PLT's dynamics when listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.
2. Using agreement probability to study differences in types of concepts and conceptualizers. Behav Res Methods 2024; 56:93-112. PMID: 36471211. DOI: 10.3758/s13428-022-02030-z.
Abstract
Agreement probability p(a) is a homogeneity measure of the lists of properties produced by participants in a Property Listing Task (PLT) for a concept. Its mathematical properties allow a rich analysis of property-based descriptions. To illustrate, we use p(a) to delve into the differences between concrete and abstract concepts in sighted and blind populations. Results show that concrete concepts are more homogeneous within the sighted and blind groups than abstract ones (i.e., they exhibit a higher p(a)), and that concrete concepts are less homogeneous in the blind group than in the sighted sample. This supports the idea that listed properties for concrete concepts should be more similar across subjects due to the influence of visual/perceptual information on the learning process. In contrast, abstract concepts are learned mainly from social and linguistic information, which varies more among people, thus making the listed properties more dissimilar across subjects. For abstract concepts, the difference in p(a) between sighted and blind participants is not statistically significant. Though this is a null result and should be considered with care, it is expected: abstract concepts should be learned by attending to the same social and linguistic input in both the blind and the sighted, so there is no reason to expect the respective lists of properties to differ. Finally, we used p(a) to classify concrete and abstract concepts with a good level of certainty. All these analyses suggest that p(a) can be fruitfully used to study data obtained in a PLT.
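The abstract does not reproduce the formula, but one plausible operationalization of a list-homogeneity measure like p(a) — the probability that two property tokens drawn at random (without replacement) from the pooled lists name the same property — can be sketched as follows. The definition and function names here are illustrative assumptions, not necessarily the authors' exact measure:

```python
from collections import Counter
from itertools import chain

def agreement_probability(lists):
    """Probability that two property tokens drawn at random (without
    replacement) from the pooled lists name the same property.
    `lists` holds one list of coded properties per participant."""
    counts = Counter(chain.from_iterable(lists))
    n = sum(counts.values())
    if n < 2:
        return 0.0
    # Number of ordered pairs of tokens naming the same property,
    # over the number of all ordered pairs.
    same = sum(c * (c - 1) for c in counts.values())
    return same / (n * (n - 1))
```

Under this sketch, homogeneous lists (many repeated properties across participants, as expected for concrete concepts) yield a higher value than heterogeneous ones.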
3. AC-PLT: An algorithm for computer-assisted coding of semantic property listing data. Behav Res Methods 2023. PMID: 37831369. DOI: 10.3758/s13428-023-02260-9.
Abstract
In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research to gain insight into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to code the data manually, which can be time-consuming and error-prone, raising reliability concerns. Our algorithm addresses these challenges by automatically assigning human-created codes to feature listing data, achieving good quantitative agreement with human coders. Our preliminary results suggest that the algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward a fully automated coding algorithm, which we are currently devising.
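The abstract describes the algorithm only at a high level. A minimal sketch of the general idea — matching each raw response to the most similar human-created code and flagging low-similarity responses for manual review — might look like this. The token-overlap similarity, the 0.5 threshold, and all names are illustrative assumptions, not the published AC-PLT algorithm:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two property phrasings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def assign_code(response: str, codebook: dict, threshold: float = 0.5):
    """Return the best-matching code label, or None to flag the
    response for manual coding when similarity is too low.
    `codebook` maps code labels to a canonical phrasing."""
    label, score = max(((lbl, jaccard(response, phrasing))
                        for lbl, phrasing in codebook.items()),
                       key=lambda pair: pair[1])
    return label if score >= threshold else None
```

For example, with a codebook `{"is_a_pet": "is a pet", "has_four_legs": "has four legs"}`, the response "has 4 legs" matches `has_four_legs`, while an unrelated response falls below threshold and is left for a human coder.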
4. CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies. Behav Res Methods 2023; 55:554-569. PMID: 35318591. DOI: 10.3758/s13428-022-01811-w.
Abstract
In conceptual properties norming studies (CPNs), participants list properties that describe a set of concepts. From CPNs, many different parameters are calculated, such as semantic richness. A generally overlooked issue is that those values are only point estimates of the true, unknown population parameters. In the present work, we present an R package that allows researchers to treat those values as population parameter estimates. Relatedly, a general practice in CPNs is to use an equal number of participants listing properties for each concept (i.e., standardizing sample size). As we illustrate through examples, this procedure has negative effects on the statistical analysis of the data. We argue that a better method is to standardize coverage (i.e., the proportion of sampled properties to the total number of properties that describe a concept), such that a similar coverage is achieved across concepts. When coverage rather than sample size is standardized, the concepts in a CPN are more likely to exhibit similar representativeness. Moreover, by computing coverage the researcher can decide whether the CPN has reached a coverage high enough for its results to generalize to other studies. The R package we make available allows one to compute coverage and to estimate the number of participants needed to reach a target coverage. We demonstrate this sampling procedure by applying the package to real and simulated CPN data.
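One standard way to estimate sample coverage from listing data is Turing's estimator: coverage is one minus the fraction of property tokens that were observed exactly once. The sketch below illustrates that idea; the choice of estimator and the names are assumptions for illustration, and the actual CPNCoverageAnalysis package may compute coverage differently:

```python
from collections import Counter

def estimated_coverage(tokens):
    """Turing's coverage estimate: 1 - f1/n, where f1 is the number of
    distinct properties listed exactly once and n the total number of
    property tokens sampled for the concept."""
    counts = Counter(tokens)
    f1 = sum(1 for c in counts.values() if c == 1)
    return 1 - f1 / len(tokens)
```

Intuitively, many singleton properties signal that more unseen properties remain, so coverage is low; when every property has been listed repeatedly, estimated coverage approaches 1.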
5. A Context-Dependent Bayesian Account for Causal-Based Categorization. Cogn Sci 2023; 47:e13240. PMID: 36680423. DOI: 10.1111/cogs.13240.
Abstract
The causal view of categories assumes that categories are represented by features and their causal relations. To study the effect of causal knowledge on categorization, researchers have used Bayesian causal models. Within that framework, categorization may be viewed as dependent on a likelihood computation (i.e., the likelihood of an exemplar with a certain combination of features, given the category's causal model) or as a posterior computation (i.e., the probability that the exemplar belongs to the category, given its features). Across three experiments, in combination with computational modeling, we offer evidence that categorization is better accounted for by assuming that people compute posteriors and not likelihoods, though both probabilities are closely related. This result contrasts with existing analyses of causal-based categorization, which assume that likelihood computations give a good approximation of human judgments. We also find that people are able to compute likelihoods in a closely related task that elicits judgments of consistency rather than category membership judgments. Our analyses show that people do use causal probabilistic information as prescribed by a Bayesian model but that they flexibly compute likelihoods or posteriors depending on the task. We discuss our results in relation to the relevant literature on the topic.
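The distinction the abstract draws can be made concrete with Bayes' rule: the likelihood P(features | category) and the posterior P(category | features) can rank the same exemplar very differently. A minimal numerical sketch, with invented probabilities for illustration:

```python
def posterior(lik_cat, lik_alt, prior=0.5):
    """P(category | features) via Bayes' rule, for a category versus a
    single alternative, given the likelihood of the features under each
    and the prior probability of the category."""
    num = lik_cat * prior
    return num / (num + lik_alt * (1 - prior))

# An exemplar may have a low likelihood under the category's causal
# model yet a high posterior, if it is even less likely under the
# alternative -- so the two computations can dissociate.
p = posterior(lik_cat=0.02, lik_alt=0.001)
```

This is why a model of membership judgments based on likelihoods and one based on posteriors make different predictions, even though the two probabilities are closely related.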
6. On the importance of feedback for categorization: Revisiting category learning experiments using an adaptive filter model. J Exp Psychol Anim Learn Cogn 2022; 48:295-306. DOI: 10.1037/xan0000339.
7. An adaptive linear filter model of procedural category learning. Cogn Process 2022; 23:393-405. PMID: 35513744. DOI: 10.1007/s10339-022-01094-1.
Abstract
We use a feature-based association model to fit grouped and individual-level category learning and transfer data. The model assumes that people use corrective feedback to learn individual feature-to-categorization-criterion correlations and combine those correlations additively to produce classifications. The model is an Adaptive Linear Filter (ALF) with a logistic output function and the Least Mean Squares learning algorithm; categorization probabilities are computed by the logistic function. Our data span 31 published data sets. At both the grouped and individual levels of analysis, the model performs remarkably well, accounting for large amounts of the available variance. When fitted to grouped data, it outperforms alternative models. When fitted to individual-level data, it captures learning and transfer performance with high explained variance. Notably, the model achieves its fits with very few free parameters. We discuss the ALF's advantages as a model of procedural categorization in terms of its simplicity, its ability to capture empirical trends, and its ability to meet challenges that face other associative models. In particular, we discuss why the model is not equivalent to a prototype model, as previously thought.
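The architecture described — a linear filter over features trained by Least Mean Squares from corrective feedback, with a logistic function mapping the filter output to a categorization probability — can be sketched as follows. The learning rate, epoch count, logistic gain, and ±1 targets are assumptions for illustration, not the fitted model from the paper:

```python
import math

def train_alf(X, targets, lr=0.1, epochs=100):
    """LMS (delta-rule) training of a linear filter.
    X: list of binary feature vectors; targets: +1 / -1 category labels."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            out = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear filter output
            err = t - out                                   # corrective feedback
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def p_category(x, w, b, gain=4.0):
    """Logistic squashing of the filter output into a categorization probability."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-gain * s))
```

Each feature contributes additively through its learned weight, which is what makes the model a linear filter rather than a similarity-to-prototype computation.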
8. Language Processing Differences Between Blind and Sighted Individuals and the Abstract Versus Concrete Concept Difference. Cogn Sci 2021; 45:e13044. PMID: 34606124. DOI: 10.1111/cogs.13044.
Abstract
In the property listing task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, "barks" and "is a pet" may be produced). In conceptual property norming (CPN) studies, participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus the sighted population. When we apply our mathematical model to a large CPN reporting properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Though blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-à-vis theories attempting to characterize abstract concepts.
9. Eliciting semantic properties: methods and applications. Cogn Process 2020; 21:583-586. PMID: 33063246. DOI: 10.1007/s10339-020-00999-z.
Abstract
Asking subjects to list semantic properties for concepts is essential for predicting performance in several linguistic and non-linguistic tasks and for creating carefully controlled stimuli for experiments. The property elicitation task and the ensuing norms are widely used across the field, to investigate the organization of semantic memory and design computational models thereof. The contributions of the current Special Topic discuss several core issues concerning how semantic property norms are constructed and how they may be used for research aiming at understanding cognitive processing.
10. Modeling stereotypes and negative self-stereotypes as a function of interactions among groups with power asymmetries. J Theory Soc Behav 2019. DOI: 10.1111/jtsb.12207.
11. Manipulating the Alpha Level Cannot Cure Significance Testing. Front Psychol 2018; 9:699. PMID: 29867666. PMCID: PMC5962803. DOI: 10.3389/fpsyg.2018.00699.
Abstract
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
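The abstract's call to base inference on cumulative evidence from multiple independent studies, rather than a single study's p-value threshold, can be illustrated with fixed-effect inverse-variance pooling of effect estimates — a standard meta-analytic formula. The numbers and names below are invented for illustration:

```python
import math

def pool_effects(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis: pooled effect
    estimate and its standard error across independent studies."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se
```

Two noisy studies that would each fail an arbitrary significance threshold can still yield a precise pooled estimate, which is the sense in which cumulative evidence outperforms single-study accept/reject decisions.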
12. Why the designer's intended function is central for proper function assignment and artifact conceptualization: Essentialist and normative accounts. Dev Rev 2016. DOI: 10.1016/j.dr.2016.06.002.
13. Inference and coherence in causal-based artifact categorization. Cognition 2013; 130:50-65. PMID: 24184394. DOI: 10.1016/j.cognition.2013.10.001.
Abstract
In four experiments, we tested conditions under which artifact concepts support inference and coherence in causal categorization. In all four experiments, participants categorized scenarios in which we systematically varied information about an artifact's associated design history, physical structure, user intention, user action, and functional outcome, and where each property could be specified as intact, compromised, or not observed. Consistently across experiments, when participants received complete information (i.e., when all properties were observed), they categorized based on individual properties and showed no evidence of using coherence to categorize. In contrast, when the state of some property was not observed, participants gave evidence of using the available information to infer the state of the unobserved property, which increased the value of the available information for categorization. Our data offer answers to longstanding questions regarding artifact categorization, such as whether there are underlying causal models for artifacts, which properties are part of them, whether design history is an artifact's causal essence, and whether physical appearance or functional outcome is the most central artifact property.
14. Age correlates directly with the strength of gender stereotypes: Evidence from a recognition memory task [La Edad Se Correlaciona Directamente con la Fuerza de los Estereotipos de Género: Evidencia Obtenida en una Tarea de Memoria de Reconocimiento]. Psykhe 2012. DOI: 10.7764/psykhe.21.2.549.
15. Situational information contributes to object categorization and inference. Acta Psychol (Amst) 2009; 130:81-94. PMID: 19041083. DOI: 10.1016/j.actpsy.2008.10.004.
Abstract
Three experiments demonstrated that situational information contributes to the categorization of functional object categories, as well as to inferences about these categories. When an object was presented in the context of setting and event information, categorization was more accurate than when the object was presented in isolation. Inferences about the object similarly became more accurate as the amount of situational information present during categorization increased. The benefits of situational information were higher when both setting and event information were available than when only setting information was available. These findings indicate that situational information about settings and events is stored with functional object categories in memory. Categorization and inference become increasingly accurate as the information available during categorization matches situational information stored with the category.
16.

17.
Abstract
Theories typically emphasize affordances or intentions as the primary determinant of an object's perceived function. The HIPE theory assumes that people integrate both into causal models that produce functional attributions. In these models, an object's physical structure and an agent's action specify an affordance jointly, constituting the immediate causes of a perceived function. The object's design history and an agent's goal in using it constitute distant causes. When specified fully, the immediate causes are sufficient for determining the perceived function--distant causes have no effect (the causal proximity principle). When the immediate causes are ambiguous or unknown, distant causes produce inferences about the immediate causes, thereby affecting functional attributions indirectly (the causal updating principle). Seven experiments supported HIPE's predictions.
18. The Error-Reaction Time Correlation as a Prediction of Category Verification Models. Am J Psychol 1998. DOI: 10.2307/1423484.