1
Van Overwalle J, Van der Donck S, Van de Cruys S, Boets B, Wagemans J. Assessing Spontaneous Categorical Processing of Visual Shapes via Frequency-Tagging EEG. J Neurosci 2024; 44:e1346232024. PMID: 38423762; PMCID: PMC11026363; DOI: 10.1523/jneurosci.1346-23.2024.
Abstract
Categorization is an essential cognitive and perceptual process, which happens spontaneously. However, earlier research often neglected the spontaneous nature of this process by mainly adopting explicit tasks in behavioral or neuroimaging paradigms. Here, we use frequency-tagging (FT) during electroencephalography (EEG) in 22 healthy human participants (both male and female) as a direct approach to pinpoint spontaneous visual categorical processing. Starting from schematic natural visual stimuli, we created morph sequences comprising 11 equal steps. Mirroring a behavioral categorical perception discrimination paradigm, we administered a FT-EEG oddball paradigm, assessing neural sensitivity for equally sized differences within and between stimulus categories. Likewise, mirroring a behavioral category classification paradigm, we administered a sweep FT-EEG oddball paradigm, sweeping from one end of the morph sequence to the other, thereby allowing us to objectively pinpoint the neural category boundary. We found that FT-EEG can implicitly measure categorical processing and discrimination. More specifically, we could derive an objective neural index of the required level to differentiate between the two categories, and this neural index showed the typical marker of categorical perception (i.e., stronger discrimination across as compared with within categories). The neural findings of the implicit paradigms were also validated using an explicit behavioral task. These results provide evidence that FT-EEG can be used as an objective tool to measure discrimination and categorization and that the human brain inherently and spontaneously (without any conscious or decisional processes) uses higher-level meaningful categorization information to interpret ambiguous (morph) shapes.
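In frequency-tagging designs of this kind, the oddball response is typically quantified in the EEG amplitude spectrum at the oddball frequency and its harmonics, relative to neighboring frequency bins. The sketch below illustrates that computation only; the sampling rate, oddball frequency, number of harmonics, and neighbor-bin choices are illustrative assumptions, not the parameters of this study.

```python
import numpy as np

def oddball_snr(eeg, srate, f_odd=1.2, n_harmonics=4, n_neighbors=10):
    """Estimate the oddball response as a signal-to-noise ratio (SNR):
    amplitude at each oddball harmonic divided by the mean amplitude of
    surrounding bins (illustrative parameter choices, not the study's)."""
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    amp = np.abs(np.fft.rfft(eeg)) / n
    snrs = []
    for h in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - h * f_odd))            # bin closest to the harmonic
        neigh = np.r_[idx - n_neighbors:idx - 1, idx + 2:idx + n_neighbors + 1]  # skip adjacent bins
        snrs.append(amp[idx] / amp[neigh].mean())              # signal relative to local noise
    return np.mean(snrs)

# Toy usage: 60 s of simulated EEG at 512 Hz containing a weak 1.2 Hz periodic component.
srate, t = 512, np.arange(0, 60, 1 / 512)
eeg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(oddball_snr(eeg, srate))
```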
Affiliation(s)
- Jaana Van Overwalle, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Stephanie Van der Donck, Center for Developmental Psychiatry, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Sander Van de Cruys, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Bart Boets, Center for Developmental Psychiatry, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Johan Wagemans, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
2
Son G, Walther DB, Mack ML. Brief category learning distorts perceptual space for complex scenes. Psychon Bull Rev 2024. PMID: 38438711; DOI: 10.3758/s13423-024-02484-6.
Abstract
The formation of categories is known to distort perceptual space: representations are pushed away from category boundaries and pulled toward categorical prototypes. This phenomenon has been studied with artificially constructed objects, whose feature dimensions are easily defined and manipulated. How such category-induced perceptual distortions arise for complex, real-world scenes, however, remains largely unknown due to the technical challenge of measuring and controlling scene features. We address this question by generating realistic scene images from a high-dimensional continuous space using generative adversarial networks and using the images as stimuli in a novel learning task. Participants learned to categorize the scene images along arbitrary category boundaries and later reconstructed the same scenes from memory. Systematic biases in reconstruction errors closely tracked each participant's subjective category boundaries. These findings suggest that the perception of global scene properties is warped to align with a newly learned category structure after only a brief learning experience.
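A toy illustration of the kind of bias analysis described here: given each scene's true coordinate along a one-dimensional slice of the generative space, its reconstructed coordinate, and a participant's subjective category boundary, the distortion can be summarized as the component of reconstruction error pointing away from the boundary. This is a hypothetical sketch with invented values, not the authors' analysis code.

```python
import numpy as np

def boundary_repulsion(true_pos, recon_pos, boundary):
    """Mean signed reconstruction error, coded so that positive values mean
    reconstructions were displaced away from the category boundary
    (a toy 1-D summary, not the published analysis)."""
    true_pos, recon_pos = np.asarray(true_pos, float), np.asarray(recon_pos, float)
    away = np.sign(true_pos - boundary)           # +1 on one side of the boundary, -1 on the other
    signed_error = (recon_pos - true_pos) * away  # positive = pushed further from the boundary
    return signed_error.mean()

# Example: reconstructions exaggerated away from a subjective boundary at 0.5.
true_pos = [0.30, 0.40, 0.60, 0.70]
recon_pos = [0.25, 0.33, 0.68, 0.74]
print(boundary_repulsion(true_pos, recon_pos, boundary=0.5))  # > 0 indicates repulsion
```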
Affiliation(s)
- Gaeun Son, Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Dirk B Walther, Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Michael L Mack, Department of Psychology, University of Toronto, Toronto, Ontario, Canada
3
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd, School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
4
Abstract
Categorical perception refers to the enhancement of perceptual sensitivity near category boundaries, generally along dimensions that are informative about category membership. However, it remains unclear exactly which dimensions are treated as informative and why. This article reports a series of experiments in which subjects were asked to learn statistically defined categories in a novel, unfamiliar 2D perceptual space of shapes. Perceptual discrimination was tested before and after category learning of various features in the space, each defined by its position and orientation relative to the maximally informative dimension. The results support a remarkably simple generalization: The magnitude of improvement in perceptual discrimination of each feature is proportional to the mutual information between the feature and the category variable. This finding suggests a rational basis for categorical perception in which the precision of perceptual discrimination is tuned to the statistical structure of the environment.
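The key claim, that the improvement in discriminating a feature scales with the mutual information I(F; C) between that feature and the category variable, can be made concrete with a small calculation. The sketch below estimates I(F; C) from a joint probability table; the table values are invented for illustration and are not taken from the study.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(F;C) in bits from a joint probability table
    p(feature, category); rows index feature values, columns categories."""
    joint = np.asarray(joint, float)
    joint = joint / joint.sum()
    pf = joint.sum(axis=1, keepdims=True)   # marginal p(feature)
    pc = joint.sum(axis=0, keepdims=True)   # marginal p(category)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pf @ pc)[nz])))

# A feature that is highly diagnostic of category membership...
informative = [[0.45, 0.05],
               [0.05, 0.45]]
# ...versus one that is nearly independent of it.
uninformative = [[0.26, 0.24],
                 [0.24, 0.26]]
print(mutual_information(informative))    # ~0.53 bits: large predicted improvement
print(mutual_information(uninformative))  # ~0.001 bits: little predicted improvement
```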
Affiliation(s)
- Jacob Feldman, Department of Psychology, Center for Cognitive Science, Rutgers University
5
The role of category- and exemplar-specific experience in ensemble processing of objects. Atten Percept Psychophys 2020; 83:1080-1093. PMID: 33078383; DOI: 10.3758/s13414-020-02162-4.
Abstract
People can relatively easily report summary properties for ensembles of objects, suggesting that this information can enrich visual experience and increase the efficiency of perceptual processing. Here, we ask whether the ability to judge diversity within object arrays improves with experience. We surmised that ensemble judgments would be more accurate for commonly experienced objects, and perhaps even more for objects of expertise like faces. We also expected improvements in ensemble processing with practice with a novel category, and perhaps even more with repeated experience with specific exemplars. We compared the effect of experience on diversity judgments for arrays of objects, with participants being tested with either a small number of repeated exemplars or with a large number of exemplars from the same object category. To explore the role of more prolonged experience, we tested participants with completely novel objects (random blobs), with objects familiar at the category level (cars), and with objects with which observers are experts at subordinate-level recognition (faces). For objects that are novel, participants showed evidence of improved ability to distribute attention. In contrast, for object categories with long-term experience, i.e., faces and cars, performance improved during the experiment but not necessarily due to improved ensemble processing. Practice with specific exemplars did not result in better diversity judgments for all object categories. Considered together, these results suggest that ensemble processing improves with experience. However, experience operates rapidly; its effects do not rely on exemplar-level knowledge and may not benefit from subordinate-level expertise.
6
Soto FA, Escobar K, Salan J. Adaptation aftereffects reveal how categorization training changes the encoding of face identity. J Vis 2020; 20:18. PMID: 33064122; PMCID: PMC7571276; DOI: 10.1167/jov.20.10.18.
Abstract
Previous research suggests that learning to categorize faces along a novel dimension changes the perceptual representation of such dimension, increasing its discriminability, its invariance, and the information used to identify faces varying along the dimension. A common interpretation of these results is that categorization training promotes the creation of novel dimensions, rather than simply the enhancement of already existing representations. Here, we trained a group of participants to categorize faces that varied along two morphing dimensions, one of them relevant to the categorization task and the other irrelevant to the task. An untrained group did not receive such categorization training. In three experiments, we used face adaptation aftereffects to explore how categorization training changes the encoding of face identities at the extremes of the category-relevant dimension and whether such training produces encoding of the category-relevant dimension as a preferred direction in face space. The pattern of results suggests that categorization training enhances the already existing norm-based coding of face identity, rather than creating novel category-relevant representations. We formalized this conclusion in a model that explains the most important results in our experiments and serves as a working hypothesis for future work in this area.
Affiliation(s)
- Fabian A Soto, Florida International University, Department of Psychology, Miami, FL, USA
- Karla Escobar, Florida International University, Department of Psychology, Miami, FL, USA
7
An investigation of the effect of temporal contiguity training on size-tolerant representations in object-selective cortex. Neuroimage 2020; 217:116881. PMID: 32353487; DOI: 10.1016/j.neuroimage.2020.116881.
Abstract
The human visual system has a remarkable ability to reliably identify objects across variations in appearance, such as variations in viewpoint, lighting and size. Here we used fMRI in humans to test whether temporal contiguity training with natural and altered image dynamics can respectively build and break neural size tolerance for objects. Participants (N = 23) were presented with sequences of images of "growing" and "shrinking" objects. In half of the trials, the object also changed identity when the size change happened. According to the temporal contiguity hypothesis, and studies with a similar paradigm in monkeys, this training process should alter size tolerance. After the training phase, BOLD responses to each of the object images were measured in the scanner. Neural patterns in LOC and V1 contained information on size, similarity and identity. In LOC, the representation of object identity was partially invariant to changes in size. However, temporal contiguity training did not affect size tolerance in LOC. Size tolerance in human object-selective cortex is more robust to variations in input statistics than expected based on prior work in monkeys supporting the temporal contiguity hypothesis.
8
Wu MH, Kleinschmidt D, Emberson L, Doko D, Edelman S, Jacobs R, Raizada R. Cortical Transformation of Stimulus Space in Order to Linearize a Linearly Inseparable Task. J Cogn Neurosci 2020; 32:2342-2355. PMID: 31951157; DOI: 10.1162/jocn_a_01533.
Abstract
The human brain is able to learn difficult categorization tasks, even ones that have linearly inseparable boundaries; however, it is currently unknown how it achieves this computational feat. We investigated this by training participants on an animal categorization task with a linearly inseparable prototype structure in a morph shape space. Participants underwent fMRI scans before and after 4 days of behavioral training. Widespread representational changes were found throughout the brain, including an untangling of the categories' neural patterns that made them more linearly separable after behavioral training. These neural changes were task dependent, as they were only observed while participants were performing the categorization task, not during passive viewing. Moreover, they were found to occur in frontal and parietal areas, rather than ventral temporal cortices, suggesting that they reflected attentional and decisional reweighting, rather than changes in object recognition templates. These results illustrate how the brain can flexibly transform neural representational space to solve computationally challenging tasks.
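The "untangling" described here is commonly quantified by asking how well a linear decoder separates the two categories' neural patterns before versus after training. Below is a hedged sketch of that logic using a plain cross-validated logistic-regression decoder on simulated patterns; scikit-learn availability and all numbers are assumptions, and this is not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def linear_separability(patterns, labels, folds=5):
    """Cross-validated accuracy of a linear decoder on multivariate patterns:
    higher accuracy = more linearly separable category representations."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, patterns, labels, cv=folds).mean()

# Toy example: 40 "trials" x 50 "voxels", before vs. after training.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
pre = rng.normal(size=(40, 50)) + 0.2 * labels[:, None]   # weak category signal
post = rng.normal(size=(40, 50)) + 1.0 * labels[:, None]  # stronger, more separable signal
print(linear_separability(pre, labels), linear_separability(post, labels))
```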
9
Pérez-Gay Juárez F, Sicotte T, Thériault C, Harnad S. Category learning can alter perception and its neural correlates. PLoS One 2019; 14:e0226000. PMID: 31810079; PMCID: PMC6897555; DOI: 10.1371/journal.pone.0226000.
Abstract
Learned Categorical Perception (CP) occurs when the members of different categories come to look more dissimilar ("between-category separation") and/or members of the same category come to look more similar ("within-category compression") after a new category has been learned. To measure learned CP and its physiological correlates we compared dissimilarity judgments and Event Related Potentials (ERPs) before and after learning to sort multi-featured visual textures into two categories by trial and error with corrective feedback. With the same number of training trials and feedback, about half the subjects succeeded in learning the categories ("Learners": criterion 80% accuracy) and the rest did not ("Non-Learners"). At both lower and higher levels of difficulty, successful Learners showed significant between-category separation (and, to a lesser extent, within-category compression) in pairwise dissimilarity judgments after learning, compared to before; their late parietal ERP positivity (LPC, usually interpreted as decisional) also increased and their occipital N1 amplitude (usually interpreted as perceptual) decreased. LPC amplitude increased with response accuracy and N1 amplitude decreased with between-category separation for the Learners. Non-Learners showed no significant changes in dissimilarity judgments, LPC or N1, within or between categories. This is behavioral and physiological evidence that category learning can alter perception. We sketch a neural net model predictive of this effect.
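Learned CP in designs like this is usually summarized by two before-to-after differences in pairwise dissimilarity: an increase for between-category pairs (separation) and a decrease for within-category pairs (compression). A minimal sketch of those two indices follows; the example ratings are invented.

```python
import numpy as np

def cp_indices(pre, post, same_category):
    """Between-category separation and within-category compression from
    pairwise dissimilarity ratings collected before and after learning.
    `same_category` flags whether each pair falls within one category."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    same = np.asarray(same_category, bool)
    separation = (post[~same] - pre[~same]).mean()   # between-category pairs grow apart
    compression = (pre[same] - post[same]).mean()    # within-category pairs grow together
    return separation, compression

# Example: 3 within-category pairs followed by 3 between-category pairs.
pre  = [4.0, 3.8, 4.2, 5.0, 5.2, 4.9]
post = [3.2, 3.3, 3.6, 6.1, 6.0, 5.8]
same = [True, True, True, False, False, False]
print(cp_indices(pre, post, same))  # separation ~ +0.93, compression ~ +0.63
```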
Affiliation(s)
- Tomy Sicotte, Université du Québec à Montréal, Montréal, Canada
- Stevan Harnad, McGill University, Montréal, Canada; Université du Québec à Montréal, Montréal, Canada; University of Southampton, Southampton, United Kingdom
10
Braunlich K, Love BC. Occipitotemporal representations reflect individual differences in conceptual knowledge. J Exp Psychol Gen 2019; 148:1192-1203. PMID: 30382719; PMCID: PMC6586152; DOI: 10.1037/xge0000501.
Abstract
Through selective attention, decision-makers can learn to ignore behaviorally irrelevant stimulus dimensions. This can improve learning and increase the perceptual discriminability of relevant stimulus information. Across cognitive models of categorization, this is typically accomplished through the inclusion of attentional parameters, which provide information about the importance assigned to each stimulus dimension by each participant. The effect of these parameters on psychological representation is often described geometrically, such that perceptual differences over relevant psychological dimensions are accentuated (or stretched), and differences over irrelevant dimensions are down-weighted (or compressed). In sensory and association cortex, representations of stimulus features are known to covary with their behavioral relevance. Although this implies that neural representational space might closely resemble that hypothesized by formal categorization theory, to date, attentional effects in the brain have been demonstrated through powerful experimental manipulations (e.g., contrasts between relevant and irrelevant features). This approach sidesteps the role of idiosyncratic conceptual knowledge in guiding attention to useful information sources. To bridge this divide, we used formal categorization models, which were fit to behavioral data, to make inferences about the concepts and strategies used by individual participants during decision-making. We found that when greater attentional weight was devoted to a particular visual feature (e.g., "color"), its value (e.g., "red") was more accurately decoded from occipitotemporal cortex. We also found that this effect was sufficiently sensitive to reflect individual differences in conceptual knowledge, indicating that occipitotemporal stimulus representations are embedded within a space closely resembling that formalized by classic categorization theory.
11
Abstract
Previous research suggests that learning to categorize faces along a new dimension changes the perceptual representation of that dimension, but little is known about how the representation of specific face identities changes after such category learning. Here, we trained participants to categorize faces that varied along two morphing dimensions. One dimension was relevant to the categorization task and the other was irrelevant. We used reverse correlation to estimate the internal templates used to identify the two faces at the extremes of the relevant dimension, both before and after training, and at two different levels of the irrelevant dimension. Categorization training changed the internal templates used for face identification, even though identification and categorization tasks impose different demands on the observers. After categorization training, the internal templates became more invariant across changes in the irrelevant dimension. These results suggest that the representation of face identity can be modified by categorization experience.
12
Abstract
The visual system represents summary statistical information from a set of similar items, a phenomenon known as ensemble perception. In exploring various ensemble domains (e.g., orientation, color, facial expression), researchers have often employed the method of continuous report, in which observers select their responses from a gradually changing morph sequence. However, given their current implementation, some face morphs unintentionally introduce noise into the ensemble measurement. Specifically, some facial expressions on the morph wheel appear perceptually similar even though they are far apart in stimulus space. For instance, in a morph wheel of happy-sad-angry-happy expressions, an expression between happy and sad may not be discriminable from an expression between sad and angry. Without accounting for this confusability, observer ability will be underestimated. In the present experiments we accounted for this by delineating the perceptual confusability of morphs of multiple expressions. In a two-alternative forced choice task, eight observers were asked to discriminate between anchor images (36 in total) and all 360 facial expressions on the morph wheel. The results were visualized on a "confusability matrix," depicting the morphs most likely to be confused for one another. The matrix revealed multiple confusable images between distant expressions on the morph wheel. By accounting for these "confusability regions," we demonstrated a significant improvement in performance estimation on a set of independent ensemble data, suggesting that high-level ensemble abilities may be better than has been previously thought. We also provide an alternative computational approach that may be used to determine potentially confusable stimuli in a given morph space.
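One way to make the confusability-matrix idea concrete: tabulate, for every anchor image and every morph on the wheel, the proportion of 2AFC trials on which the pair was discriminated correctly, and flag pairs that remain near chance. The sketch below is a hypothetical illustration with made-up dimensions, trials, and threshold, not the authors' code.

```python
import numpy as np

def confusability_matrix(trials, n_anchors=36, n_morphs=360):
    """Proportion correct for each (anchor, morph) pair from 2AFC trials.
    `trials` is an iterable of (anchor_index, morph_index, was_correct)."""
    hits = np.zeros((n_anchors, n_morphs))
    counts = np.zeros((n_anchors, n_morphs))
    for a, m, ok in trials:
        hits[a, m] += ok
        counts[a, m] += 1
    with np.errstate(invalid="ignore"):
        return hits / counts              # NaN where a pair was never tested

# Toy usage with 3 anchors x 12 morphs and a handful of fabricated trials.
trials = [(0, 3, 1), (0, 3, 0), (1, 7, 1), (2, 11, 1), (2, 11, 1)]
acc = confusability_matrix(trials, n_anchors=3, n_morphs=12)
confusable = np.argwhere(acc <= 0.6)      # pairs still near chance (0.5) in the 2AFC task
print(acc[0, 3], confusable)
```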
13
Soto FA, Ashby FG. Novel representations that support rule-based categorization are acquired on-the-fly during category learning. Psychol Res 2019; 83:544-566. PMID: 30806809; DOI: 10.1007/s00426-019-01157-7.
Abstract
Humans learn categorization rules that are aligned with separable dimensions through a rule-based learning system, which makes learning faster and easier to generalize than categorization rules that require integration of information from different dimensions. Recent research suggests that learning to categorize objects along a completely novel dimension changes its perceptual representation, making it more separable and discriminable. Here, we asked whether such newly learned dimensions could support rule-based category learning. One group received extensive categorization training and a second group did not receive such training. Later, both groups were trained in a task that made use of the category-relevant dimension, and then tested in an analogical transfer task (Experiment 1) and a button-switch interference task (Experiment 2). We expected that only the group with extensive pre-training (with well-learned dimensional representations) would show evidence of rule-based behavior in these tasks. Surprisingly, both groups performed as expected from rule-based learning. A third experiment tested whether a single session (less than 1 h) of training in a categorization task would facilitate learning in a task requiring executive function. There was a substantial learning advantage for a group with brief pre-training with the relevant dimension. We hypothesize that extensive experience with separable dimensions is not required for rule-based category learning; rather, the rule-based system may learn representations "on the fly" that allow rule application. We discuss what kind of neurocomputational model might explain these data best.
Affiliation(s)
- Fabian A Soto, Department of Psychology, Florida International University, 11200 SW 8th St, AHC4 460, Miami, FL, 33199, USA
- F Gregory Ashby, Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
14
Van Meel C, Op de Beeck HP. Temporal Contiguity Training Influences Behavioral and Neural Measures of Viewpoint Tolerance. Front Hum Neurosci 2018; 12:13. PMID: 29441006; PMCID: PMC5797614; DOI: 10.3389/fnhum.2018.00013.
Abstract
Humans can often recognize faces across viewpoints despite the large changes in low-level image properties a shift in viewpoint introduces. We present a behavioral and an fMRI adaptation experiment to investigate whether this viewpoint tolerance is reflected in the neural visual system and whether it can be manipulated through training. Participants saw training sequences of face images creating the appearance of a rotating head. Half of the sequences showed faces undergoing veridical changes in appearance across the rotation (non-morph condition). The other half were non-veridical: during rotation, the face simultaneously morphed into another face. This procedure should successfully associate frontal face views with side views of the same or a different identity, and, according to the temporal contiguity hypothesis, thus enhance viewpoint tolerance in the non-morph condition and/or break tolerance in the morph condition. Performance on the same/different task in the behavioral experiment (N = 20) was affected by training. There was a significant interaction between training (associated/not associated) and identity (same/different), mostly reflecting a higher confusion of different identities when they were associated during training. In the fMRI study (N = 20), fMRI adaptation effects were found for same-viewpoint images of untrained faces, but no adaptation for untrained faces was present across viewpoints. Only trained faces which were not morphed during training elicited a slight adaptation across viewpoints in face-selective regions. However, both in the behavioral and in the neural data the effects were small and weak from a statistical point of view. Overall, we conclude that the findings are not inconsistent with the proposal that temporal contiguity can influence viewpoint tolerance, with more evidence for tolerance when faces are not morphed during training.
Affiliation(s)
- Chayenne Van Meel, Laboratory of Biological Psychology, Brain and Cognition, KU Leuven, Leuven, Belgium
16
Braunlich K, Liu Z, Seger CA. Occipitotemporal Category Representations Are Sensitive to Abstract Category Boundaries Defined by Generalization Demands. J Neurosci 2017; 37:7631-7642. PMID: 28674173; PMCID: PMC6596645; DOI: 10.1523/jneurosci.3825-16.2017.
Abstract
Categorization involves organizing perceptual information so as to maximize differences along dimensions that predict class membership while minimizing differences along dimensions that do not. In the current experiment, we investigated how neural representations reflecting learned category structure vary according to generalization demands. We asked male and female human participants to switch between two rules when determining whether stimuli should be considered members of a single known category. When categorizing according to the "strict" rule, participants were required to limit generalization to make fine-grained distinctions between stimuli and the category prototype. When categorizing according to the "lax" rule, participants were required to generalize category knowledge to highly atypical category members. As expected, frontoparietal regions were primarily sensitive to decisional demands (i.e., the distance of each stimulus from the active category boundary), whereas occipitotemporal representations were primarily sensitive to stimulus typicality (i.e., the similarity between each exemplar and the category prototype). Interestingly, occipitotemporal representations of stimulus typicality differed between rules. While decoding models were able to predict unseen data when trained and tested on the same rule, they were unable to do so when trained and tested on different rules. We additionally found that the discriminability of the multivariate signal negatively covaried with distance from the active category boundary. Thus, whereas many accounts of occipitotemporal cortex emphasize its important role in transforming visual information to accentuate learned category structure, our results highlight the flexible nature of these representations with regard to transient decisional demands. SIGNIFICANCE STATEMENT: Occipitotemporal representations are known to reflect category structure and are often assumed to be largely invariant with regard to transient decisional demands. We found that representations of equivalent stimuli differed between strict and lax generalization rules, and that the discriminability of these representations increased as distance from abstract category boundaries decreased. Our results therefore indicate that occipitotemporal representations are flexibly modulated by abstract decisional factors.
Affiliation(s)
- Kurt Braunlich, Department of Experimental Psychology, University College London, London WC1E 6BT, United Kingdom; Department of Psychology and Program in Molecular, Cellular, and Integrative Neurosciences, Colorado State University, Fort Collins, Colorado 80523
- Zhiya Liu, Center for the Study of Applied Psychology, Key Laboratory of Mental Health and Cognitive Science of Guangdong Province, School of Psychology, South China Normal University, Guangzhou 510631, PR China
- Carol A Seger, Center for the Study of Applied Psychology, Key Laboratory of Mental Health and Cognitive Science of Guangdong Province, School of Psychology, South China Normal University, Guangzhou 510631, PR China; Department of Psychology and Program in Molecular, Cellular, and Integrative Neurosciences, Colorado State University, Fort Collins, Colorado 80523
17
Dieciuc M, Roque NA, Folstein JR. Changing similarity: Stable and flexible modulations of psychological dimensions. Brain Res 2017; 1670:208-219. PMID: 28669719; DOI: 10.1016/j.brainres.2017.06.026.
Abstract
Successfully categorizing objects requires discriminating between relevant and irrelevant dimensions (e.g., shape, color). Categorization can lead to changes in the visual system that stretch psychological space, making relevant dimensions more distinct and irrelevant dimensions more similar. These changes are known as dimensional modulation (DM) and they can be both stable and flexible in nature. The current study examined the interaction between stable DM and flexible DM, as well as the time course of relative changes in similarity. Using a two-dimensional space of cars, participants learned to categorize the space and then completed a target identification task during EEG recording. We found that attention, operationally defined as the selection negativity, was sensitive to category-relevance and appeared to selectively enhance previously irrelevant differences in the service of a target detection task. In contrast, we found that late decisional stages, operationally defined as the P3b, were less sensitive to relevance and instead more sensitive to the number of morphsteps that separated targets from non-targets. Thus, it appears that relative similarity between targets and non-targets dynamically changed over the time course of individual decisions. Similarity between exemplars was greater along the irrelevant than the relevant dimension early on in the time course but a compensatory allocation of attention led to similarity being optimized among all dimensions for later stages. This finding is important because it 1) provides a new source of converging evidence for stable DM and 2) links a neural measure of attentional modulation with facilitation of an unpracticed, but task-relevant perceptual dimension.
Affiliation(s)
- Michael Dieciuc, Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32306, United States
- Nelson A Roque, Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32306, United States
- Jonathan R Folstein, Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32306, United States
18
Abstract
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.
19
Abstract
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context.
20
Folstein J, Palmeri TJ, Van Gulick AE, Gauthier I. Category Learning Stretches Neural Representations in Visual Cortex. Curr Dir Psychol Sci 2015; 24:17-23. PMID: 25745280; DOI: 10.1177/0963721414550707.
Abstract
We review recent work that shows how learning to categorize objects changes how those objects are represented in the mind and the brain. After category learning, visual perception of objects is enhanced along perceptual dimensions that were relevant to the learned categories, an effect we call dimensional modulation (DM). DM stretches object representations along category-relevant dimensions and shrinks them along category-irrelevant dimensions. The perceptual advantage for category-relevant dimensions extends beyond categorization and can be observed during visual discrimination and other tasks that do not depend on the learned categories. fMRI shows that category learning causes ventral stream neural populations in visual cortex representing objects along a category-relevant dimension to become more distinct. These results are consistent with a view that specific aspects of cognitive tasks associated with objects can account for how our visual system responds to objects.
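Dimensional modulation of this kind is often formalized (e.g., in exemplar models such as the generalized context model) as attention weights that stretch psychological distance along category-relevant dimensions and shrink it along irrelevant ones. The sketch below illustrates that idea with a weighted city-block distance; the specific weights and stimulus coordinates are invented, and this is not the authors' model code.

```python
import numpy as np

def weighted_distance(x, y, w, r=1):
    """Attention-weighted Minkowski distance between two stimuli.
    r=1 gives the city-block metric often used for separable dimensions."""
    x, y, w = (np.asarray(v, float) for v in (x, y, w))
    return np.sum(w * np.abs(x - y) ** r) ** (1.0 / r)

a, b = [0.2, 0.6], [0.5, 0.6]        # two objects differing only on dimension 1
c, d = [0.4, 0.3], [0.4, 0.6]        # two objects differing only on dimension 2

before = [0.5, 0.5]                  # equal attention before category learning
after  = [0.9, 0.1]                  # dimension 1 was category-relevant during learning

# Learning "stretches" differences on the relevant dimension and "shrinks" the rest.
print(weighted_distance(a, b, before), weighted_distance(a, b, after))  # ~0.15 -> ~0.27
print(weighted_distance(c, d, before), weighted_distance(c, d, after))  # ~0.15 -> ~0.03
```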
21
Blunden AG, Wang T, Griffiths DW, Little DR. Logical-rules and the classification of integral dimensions: individual differences in the processing of arbitrary dimensions. Front Psychol 2015; 5:1531. PMID: 25620941; PMCID: PMC4288243; DOI: 10.3389/fpsyg.2014.01531.
Abstract
A variety of converging operations demonstrate key differences between separable dimensions, which can be analyzed independently, and integral dimensions, which are processed in a non-analytic fashion. A recent investigation of response time distributions, applying a set of logical rule-based models, demonstrated that integral dimensions are pooled into a single coactive processing channel, in contrast to separable dimensions, which are processed in multiple, independent processing channels. This paper examines the claim that arbitrary dimensions created by factorially morphing four faces are processed in an integral manner. In two experiments, 16 participants completed a categorization task in which either upright or inverted morph stimuli were classified in a speeded fashion. Analyses focused on contrasting different assumptions about the psychological representation of the stimuli, perceptual and decisional separability, and the processing architecture. We report consistent individual differences which demonstrate a mixture of some observers who demonstrate coactive processing with other observers who process the dimensions in a parallel self-terminating manner.
Affiliation(s)
- Daniel R. Little, Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
22
Folstein JR, Palmeri TJ, Gauthier I. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion. Front Psychol 2014; 5:1394. PMID: 25520691; PMCID: PMC4249057; DOI: 10.3389/fpsyg.2014.01394.
Abstract
Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.
Affiliation(s)
- Thomas J Palmeri, Psychological Sciences, Vanderbilt University, Nashville, TN, USA
- Isabel Gauthier, Psychological Sciences, Vanderbilt University, Nashville, TN, USA
23
Lim SJ, Fiez JA, Holt LL. How may the basal ganglia contribute to auditory categorization and speech perception? Front Neurosci 2014; 8:230. PMID: 25136291; PMCID: PMC4117994; DOI: 10.3389/fnins.2014.00230.
Abstract
Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.
Affiliation(s)
- Sung-Joo Lim, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Julie A Fiez, Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neuroscience, Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Lori L Holt, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neuroscience, Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
24
Abstract
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Affiliation(s)
- Jessica A Collins, Department of Psychology, Temple University, Weiss Hall, 1701 North 13th Street, Philadelphia, PA, 19122, USA
25
Van Gulick AE, Gauthier I. The perceptual effects of learning object categories that predict perceptual goals. J Exp Psychol Learn Mem Cogn 2014; 40:1307-1320. PMID: 24820671; DOI: 10.1037/a0036822.
Abstract
In classic category learning studies, subjects typically learn to assign items to 1 of 2 categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance, with objects in only 1 category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In 2 experiments, subjects first learned to categorize complex objects from a single morphspace into 2 categories based on 1 morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the 2 categories. A same-different discrimination test before and after each training measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the nondiagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations.
26
Richler JJ, Palmeri TJ. Visual category learning. Wiley Interdiscip Rev Cogn Sci 2014; 5:75-94. PMID: 26304297; DOI: 10.1002/wcs.1268.
Abstract
Visual categories group together different objects as the same kinds of thing. We review a selection of research on how visual categories are learned. We begin with a guide to visual category learning experiments, describing a space of common manipulations of objects, categories, and methods used in the category learning literature. We open with a guide to these details in part because throughout our review we highlight how methodological details can sometimes loom large in theoretical discussions of visual category learning, how variations in methodological details can significantly affect our understanding of visual category learning, and how manipulations of methodological details can affect how visual categories are learned. We review a number of core theories of visual category learning, specifically those theories instantiated as computational models, highlighting just some of the experimental results that help distinguish between competing models. We examine behavioral and neural evidence for single versus multiple representational systems for visual category learning. We briefly discuss how visual category learning influences visual perception, describing empirical and brain imaging results that show how learning to categorize objects can influence how those objects are represented and perceived. We close with work that can potentially impact translation, describing recent experiments that explicitly manipulate key methodological details of category learning procedures with the goal of optimizing visual category learning.
Affiliation(s)
- Thomas J Palmeri, Department of Psychology, Vanderbilt University, Nashville, TN, USA
27
Seger CA, Peterson EJ. Categorization = decision making + generalization. Neurosci Biobehav Rev 2013; 37:1187-1200. PMID: 23548891; PMCID: PMC3739997; DOI: 10.1016/j.neubiorev.2013.03.015.
Abstract
We rarely, if ever, repeatedly encounter exactly the same situation. This makes generalization crucial for real world decision making. We argue that categorization, the study of generalizable representations, is a type of decision making, and that categorization learning research would benefit from approaches developed to study the neuroscience of decision making. Similarly, methods developed to examine generalization and learning within the field of categorization may enhance decision making research. We first discuss perceptual information processing and integration, with an emphasis on accumulator models. We then examine learning the value of different decision making choices via experience, emphasizing reinforcement learning modeling approaches. Next we discuss how value is combined with other factors in decision making, emphasizing the effects of uncertainty. Finally, we describe how a final decision is selected via thresholding processes implemented by the basal ganglia and related regions. We also consider how memory related functions in the hippocampus may be integrated with decision making mechanisms and contribute to categorization.
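The reinforcement-learning approach referred to here is usually modeled with a simple prediction-error (delta-rule) update of each choice's value, V <- V + alpha * (r - V). A toy sketch of that update follows; the learning rate and reward schedule are arbitrary illustrations, not values from the review.

```python
def update_values(values, choice, reward, alpha=0.1):
    """Delta-rule value update: move the chosen option's value toward the
    obtained reward by a fraction alpha of the prediction error."""
    prediction_error = reward - values[choice]
    values[choice] += alpha * prediction_error
    return values, prediction_error

# Toy run: option "A" is always rewarded, option "B" never is.
values = {"A": 0.0, "B": 0.0}
for _ in range(20):
    values, _ = update_values(values, "A", reward=1.0)
    values, _ = update_values(values, "B", reward=0.0)
print(values)   # the value of A approaches 1.0 while B stays near 0.0
```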
Affiliation(s)
- Carol A Seger, Department of Psychology, Colorado State University, Fort Collins, CO 80523, USA
28
Folstein JR, Palmeri TJ, Gauthier I. Category learning increases discriminability of relevant object dimensions in visual cortex. Cereb Cortex 2013; 23:814-823. PMID: 22490547; DOI: 10.1093/cercor/bhs067.
Abstract
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
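The fMRI-adaptation logic behind this result reduces to a single contrast: if a region distinguishes two shapes, its response should recover (release from adaptation) when one shape is followed by a different shape, relative to an exact repetition. A hypothetical sketch of that index with invented response values, not the authors' analysis:

```python
import numpy as np

def release_from_adaptation(resp_different, resp_same):
    """Adaptation-based discriminability index: mean response to
    different-shape pairs minus mean response to repeated-shape pairs.
    Larger values = the region treats the two shapes as more distinct."""
    return float(np.mean(resp_different) - np.mean(resp_same))

# Invented BOLD estimates (arbitrary units) for pairs differing along the
# category-relevant dimension, before and after category learning.
pre_index = release_from_adaptation([0.52, 0.49, 0.55], [0.50, 0.47, 0.51])
post_index = release_from_adaptation([0.61, 0.64, 0.60], [0.48, 0.50, 0.49])
print(pre_index, post_index)   # a larger post-training index mirrors the reported effect
```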