1. Heitmeier M, Chuang YY, Baayen RH. How trial-to-trial learning shapes mappings in the mental lexicon: Modelling lexical decision with linear discriminative learning. Cogn Psychol 2023; 146:101598. PMID: 37716109; PMCID: PMC10589761; DOI: 10.1016/j.cogpsych.2023.101598
Abstract
Trial-to-trial effects have been found in a number of studies, indicating that processing a stimulus influences responses in subsequent trials. A special case is priming effects, which have been modelled successfully with error-driven learning (Marsolek, 2008), implying that participants are continuously learning during experiments. This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al., 2019), a model of the mental lexicon with meaning representations from distributional semantics, which models error-driven incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP; Keuleers et al., 2012) and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Reaction times were then predicted with Generalized Additive Models (GAMs), using measures derived from the DLM simulations as predictors. We extracted measures from two simulations per subject (one with learning updates between trials and one without) and used them as input to two GAMs. Learning-based models showed better model fit than the non-learning ones for the majority of subjects. Our measures also provide insights into lexical processing and individual differences. This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision. Our results support the possibility that our lexical knowledge is subject to continuous change.
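The Widrow-Hoff rule mentioned in this abstract updates a linear cue-to-outcome mapping in proportion to the prediction error on each trial. A minimal sketch of one such update (the learning rate and toy vectors are illustrative, not values from the study):

```python
import numpy as np

def widrow_hoff_update(W, cue, target, eta=0.1):
    """One Widrow-Hoff (delta-rule) update of a linear mapping W.

    W      : (n_cues, n_outcomes) weight matrix
    cue    : (n_cues,) input vector for the current trial
    target : (n_outcomes,) desired output vector
    eta    : learning rate
    """
    prediction = cue @ W                    # current model output
    error = target - prediction             # prediction error
    return W + eta * np.outer(cue, error)   # error-driven correction

# Trial-by-trial learning: repeated presentations drive the prediction
# toward the target, so later trials incur progressively less error.
W = np.zeros((3, 2))
cue = np.array([1.0, 0.0, 1.0])
target = np.array([1.0, 0.0])
for _ in range(200):
    W = widrow_hoff_update(W, cue, target)
```

Because the update is applied after every trial, weights at any point reflect the whole preceding trial history, which is what lets the model's measures differ between the learning and non-learning simulations.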
2. Carvalho PF, Goldstone RL. A Computational Model of Context-Dependent Encodings During Category Learning. Cogn Sci 2022; 46:e13128. PMID: 35411959; PMCID: PMC9285726; DOI: 10.1111/cogs.13128
Abstract
Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local changes in category learning, such as the effect of different sequences of study. In this paper, we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context) as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). By fitting SAT-M to data from experiments comparing category learning with different sequences of trials (interleaved vs. blocked), we demonstrate that SAT-M captures the effect of local context and predicts when interleaved or blocked training will result in better testing performance across three different studies. Comparatively, ALCOVE, SUSTAIN, and a version of SAT-M without locally adaptive encoding provided poor fits to the results. Moreover, we evaluated the direct prediction of the model that different sequences of training change what learners encode and determined that the best-fit encoding parameter values match learners' looking times during training.
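Exemplar models of the kind compared here (e.g., ALCOVE) categorize a probe by its attention-weighted similarity to stored exemplars. A hypothetical minimal sketch of that core computation (feature values, attention weights, and the specificity parameter `c` are illustrative; this omits SAT-M's locally adaptive encoding):

```python
import numpy as np

def exemplar_similarity(probe, exemplar, attention, c=1.0):
    """ALCOVE-style similarity: exponential decay with the
    attention-weighted city-block distance to a stored exemplar."""
    dist = np.sum(attention * np.abs(probe - exemplar))
    return np.exp(-c * dist)

def categorize(probe, exemplars, labels, attention, c=1.0):
    """Sum similarity to each category's exemplars; return the winner."""
    sims = np.array([exemplar_similarity(probe, ex, attention, c)
                     for ex in exemplars])
    scores = {lab: sims[labels == lab].sum() for lab in np.unique(labels)}
    return max(scores, key=scores.get)

# Two categories differing mainly on the first feature; the attention
# weights emphasize that diagnostic dimension.
exemplars = np.array([[0.9, 0.2], [0.8, 0.4],    # category A
                      [0.1, 0.3], [0.2, 0.5]])   # category B
labels = np.array(["A", "A", "B", "B"])
attention = np.array([0.8, 0.2])
print(categorize(np.array([0.85, 0.5]), exemplars, labels, attention))  # prints "A"
```

SAT-M's departure from this scheme is that the encoding of each item would additionally depend on its temporal neighbors, so the same stimulus is stored differently under interleaved versus blocked training.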
Affiliation(s)
- Robert L. Goldstone
- Department of Psychological and Brain Sciences, Cognitive Science Program, Indiana University
3. Loonis RF, Brincat SL, Antzoulatos EG, Miller EK. A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning. Neuron 2017; 96:521-534.e7. PMID: 29024670; PMCID: PMC5662212; DOI: 10.1016/j.neuron.2017.09.032
Abstract
A meta-analysis of non-human primates performing three different tasks (Object-Match, Category-Match, and Category-Saccade associations) revealed signatures of explicit and implicit learning. Performance improved equally following correct and error trials in the Match (explicit) tasks, but it improved more after correct trials in the Saccade (implicit) task, a signature of explicit versus implicit learning. Likewise, error-related negativity, a marker for error processing, was greater in the Match (explicit) tasks. All tasks showed an increase in alpha/beta (10-30 Hz) synchrony after correct choices. However, only the implicit task showed an increase in theta (3-7 Hz) synchrony after correct choices that decreased with learning. In contrast, in the explicit tasks, alpha/beta synchrony increased with learning and decreased thereafter. Our results suggest that explicit versus implicit learning engages different neural mechanisms that rely on different patterns of oscillatory synchrony.
Affiliation(s)
- Roman F Loonis
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Anatomy and Neurobiology, Boston University, Boston, MA 02118, USA
- Scott L Brincat
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Evan G Antzoulatos
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California Davis, Davis, CA 95616, USA
- Earl K Miller
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
4. Greco A, Moretti S. Use of evidence in a categorization task: analytic and holistic processing modes. Cogn Process 2017; 18:431-446. PMID: 28808826; DOI: 10.1007/s10339-017-0829-2
Abstract
Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be traced to the different ways evidence is used, according to two basic modes of processing information: analytic and holistic. To test the impact of the information provided, an inductive rule-based task was designed in which feature salience and the informativeness of comparisons between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing perceptual biases. To gather data on processing modes, we devised the Active Feature Composition task, a production task that requires not classifying new items but reproducing them by combining features. Finally, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two tests enabled profiling participants with regard to processing mode, the structure of their representations, and the quality of their categorical judgments. Results showed that although the information provided was the same for all participants, those who adopted analytic processing exploited the evidence better and performed more accurately, whereas holistic processing made categorization entirely possible but inaccurate. Finally, the cognitive implications of the proposed procedure, with regard to the processes and representations involved, are discussed.
Affiliation(s)
- Alberto Greco
- Laboratory of Psychology and Cognitive Sciences, Department of Social Sciences, University of Genoa, Genoa, Italy
- Stefania Moretti
- Laboratory of Psychology and Cognitive Sciences, Department of Social Sciences, University of Genoa, Genoa, Italy
5. De Cesarei A, Loftus GR, Mastria S, Codispoti M. Understanding natural scenes: Contributions of image statistics. Neurosci Biobehav Rev 2017; 74:44-57. DOI: 10.1016/j.neubiorev.2017.01.012
6. Hammer R, Sloutsky V. Visual Category Learning Results in Rapid Changes in Brain Activation Reflecting Sensitivity to the Category Relation between Perceived Objects and to Decision Correctness. J Cogn Neurosci 2016; 28:1804-1819. DOI: 10.1162/jocn_a_01008
Abstract
Little is known about the time scales over which sensitivity to novel category identity becomes evident in visual and executive cortices during visual category learning (VCL) tasks, or about the nature of such changes in brain activation. We used fMRI to investigate the processing of category information and trial-by-trial feedback information. In each VCL task, stimuli differed in three feature dimensions. In each trial, either two same-category stimuli or two different-category stimuli were presented. The participant had to learn which feature dimension was relevant for categorization based on the feedback that followed each categorization decision. We contrasted same-category trials with different-category trials, and correct with incorrect categorization decisions. In each trial, brain activation in the visual stimulus processing phase was modeled separately from activation during the later feedback processing phase. We found that activation in the lateral occipital complex, indicating sensitivity to the category relation between stimuli, was evident within only a few learning trials. Specifically, greater lateral occipital complex activation was evident when same-category stimuli were presented than when different-category stimuli were presented. In the feedback processing phase, greater activation in both executive and visual cortices was evident primarily after “misdetections” of same-category stimuli. Implications regarding the contribution of different learning trials to VCL, and the respective roles of key brain regions at the onset of VCL, are discussed.
7. Vanmarcke S, Calders F, Wagemans J. The Time-Course of Ultrarapid Categorization: The Influence of Scene Congruency and Top-Down Processing. Iperception 2016; 7:2041669516673384. PMID: 27803794; PMCID: PMC5076752; DOI: 10.1177/2041669516673384
Abstract
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as the entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictory findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate) and object presentation mode (object-in-isolation vs. object-in-context). Unlike previous ultrarapid categorization research, which focused on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance at longer PTs and significantly more accurate detection of objects in isolation than of objects in context at shorter stimulus PTs. This contextual disadvantage disappeared as PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.
8. Vanmarcke S, Wagemans J. Individual differences in spatial frequency processing in scene perception: the influence of autism-related traits. Visual Cognition 2016. DOI: 10.1080/13506285.2016.1199625
9.
Abstract
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context.
10. Hammer R. Impact of feature saliency on visual category learning. Front Psychol 2015; 6:451. PMID: 25954220; PMCID: PMC4404734; DOI: 10.3389/fpsyg.2015.00451
Abstract
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, and discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles discussed here are often ignored in categorization studies.
Affiliation(s)
- Rubi Hammer
- Department of Communication Sciences and Disorders, Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL, USA