1
Jovanović V, Petrušić I, Savić A, Ković V. Processing of visual hapaxes in picture naming task: An event-related potential study. Int J Psychophysiol 2024; 203:112394. [PMID: 39053735] [DOI: 10.1016/j.ijpsycho.2024.112394]
Abstract
Object recognition and visual categorization are typically swift and seemingly effortless tasks that involve numerous underlying processes. In our investigation, we utilized a picture naming task to explore the processing of rarely encountered objects (visual hapaxes) in comparison to common objects. Our aim was to determine the stage at which these rare objects are classified as unnamable. Contrary to our expectations, and in contrast to some prior event-related potential (ERP) research with novel and atypical objects, no differences between conditions were observed in the late time windows corresponding to the P300 or N400 components. However, distinctive patterns between hapaxes and common objects surfaced in three early time windows, corresponding to the posterior N1 and P2 waves, as well as a widespread N2 wave. According to the ERP data, the differentiation between hapaxes and common objects occurs within the first 380 ms of processing, with only limited and indirect top-down influence.
Affiliation(s)
- Vojislav Jovanović
- University of Belgrade, Faculty of Philosophy, Department of Psychology, Laboratory for Neurocognition and Applied Cognition, 11000 Belgrade, Serbia
- Igor Petrušić
- University of Belgrade, Faculty of Physical Chemistry, Laboratory for Advanced Analysis of Neuroimages, 11000 Belgrade, Serbia
- Andrej Savić
- University of Belgrade, School of Electrical Engineering, Science and Research Centre, 11000 Belgrade, Serbia
- Vanja Ković
- University of Belgrade, Faculty of Philosophy, Department of Psychology, Laboratory for Neurocognition and Applied Cognition, 11000 Belgrade, Serbia
2
Liu D, Wang L, Han Y. Mental simulation of colour properties during language comprehension: influence of context and comprehension stages. Cogn Process 2024. [PMID: 38850444] [DOI: 10.1007/s10339-024-01201-4]
Abstract
Many studies have shown that mental simulation may occur during language comprehension. Supporting evidence derives from the matching effects in the sentence-picture verification (SPV) task, which is often used to assess mental simulations of object properties such as size, orientation, and shape. However, mixed results have been obtained for object colour, with researchers reporting either matching or mismatching effects. This study investigated how the clarity of colour information within sentences affects mental simulation during language comprehension. Employing the SPV task with novel objects, we examined whether colour is mentally simulated once typical/atypical colour bias is excluded, and how varying levels of colour information clarity in sentences influence the emergence of matching effects at different stages of comprehension. To address these issues, we conducted two experiments. In Experiment 1, participants read normal sentences and subsequently engaged in picture verification with a novel object after a 500 ms delay. In Experiment 2, participants encountered sentences containing either clear or unclear colour information and, after either a 0 ms or 1500 ms interval, completed picture verification with a novel object. Null effects were found in the 500 ms condition for normal sentences and in the 0 ms condition for sentences with unclear colour information. A mismatching effect appeared in the 0 ms condition after sentences with clear colour information, and a matching effect appeared in the 1500 ms condition for all sentences. The results indicated that, after excluding colour bias, participants still formed mental simulations of colour during language comprehension, and that ongoing colour simulation under time pressure impacted participants' responses. Participants ignored unclear colour information under time pressure, but without time pressure they constructed simulations that were as detailed as possible, regardless of whether the implicit colour information in the sentence was clear.
Affiliation(s)
- Donglin Liu
- School of Psychology, Northeast Normal University, No. 5268 Renmin Street, Changchun, 130024, China
- Lijuan Wang
- School of Psychology, Northeast Normal University, No. 5268 Renmin Street, Changchun, 130024, China
- Ying Han
- Psychometrics and Quantitative Psychology, Fordham University, Bronx, NY, 10458, USA
3
Mikhailova A, Lightfoot S, Santos-Victor J, Coco MI. Differential effects of intrinsic properties of natural scenes and interference mechanisms on recognition processes in long-term visual memory. Cogn Process 2024; 25:173-187. [PMID: 37831320] [DOI: 10.1007/s10339-023-01164-y]
Abstract
Humans display remarkable long-term visual memory (LTVM) processes. Even though images may be intrinsically memorable, the fidelity of their visual representations, and consequently the likelihood of successfully retrieving them, hinges on their similarity when concurrently held in LTVM. In this debate, it is still unclear whether intrinsic features of images (perceptual and semantic) may be mediated by mechanisms of interference generated at encoding or during retrieval, and how these factors impinge on recognition processes. In the current study, participants (n = 32) studied a stream of 120 natural scenes from 8 semantic categories, which varied in frequency (4, 8, 16 or 32 exemplars per category) to generate different levels of category interference, in preparation for a recognition test. They were then asked to indicate which of two images, presented side by side (i.e. two-alternative forced choice), they remembered. The two images belonged to the same semantic category but varied in their perceptual similarity (similar or dissimilar). Participants also expressed their confidence (sure/not sure) about their recognition response, enabling us to tap into their metacognitive efficacy (meta-d'). Additionally, we extracted the activation of perceptual and semantic features in images (i.e. their informational richness) through deep neural network modelling and examined their impact on recognition processes. Corroborating previous literature, we found that category interference and perceptual similarity negatively impact recognition processes, as well as response times and metacognitive efficacy. Moreover, semantically rich images were less likely to be remembered, an effect that trumped the positive memorability boost coming from perceptual information. Critically, we did not observe any significant interaction between intrinsic features of images and interference generated either at encoding or during retrieval. All in all, our study calls for a more integrative understanding of the representational dynamics during encoding and recognition that enable us to form, maintain and access visual information.
Affiliation(s)
- Anastasiia Mikhailova
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- José Santos-Victor
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Moreno I Coco
- Sapienza, University of Rome, Rome, Italy
- I.R.C.C.S. Santa Lucia, Fondazione Santa Lucia, Roma, Italy
4
Enge A, Süß F, Abdel Rahman R. Instant Effects of Semantic Information on Visual Perception. J Neurosci 2023; 43:4896-4906. [PMID: 37286353] [PMCID: PMC10312055] [DOI: 10.1523/jneurosci.2038-22.2023]
Abstract
Does our perception of an object change once we discover what function it serves? We showed human participants (n = 48, 31 females and 17 males) pictures of unfamiliar objects either together with keywords matching their function, leading to semantically informed perception, or together with nonmatching keywords, resulting in uninformed perception. We measured event-related potentials to investigate at which stages in the visual processing hierarchy these two types of object perception differed from one another. We found that semantically informed compared with uninformed perception was associated with larger amplitudes in the N170 component (150-200 ms), reduced amplitudes in the N400 component (400-700 ms), and a late decrease in alpha/beta band power. When the same objects were presented once more without any information, the N400 and event-related power effects persisted, and we also observed enlarged amplitudes in the P1 component (100-150 ms) in response to objects for which semantically informed perception had taken place. Consistent with previous work, this suggests that obtaining semantic information about previously unfamiliar objects alters aspects of their lower-level visual perception (P1 component), higher-level visual perception (N170 component), and semantic processing (N400 component, event-related power). Our study is the first to show that such effects occur instantly after semantic information has been provided for the first time, without requiring extensive learning. SIGNIFICANCE STATEMENT: There has been a long-standing debate about whether or not higher-level cognitive capacities, such as semantic knowledge, can influence lower-level perceptual processing in a top-down fashion. Here we could show, for the first time, that information about the function of previously unfamiliar objects immediately influences cortical processing within less than 200 ms. Of note, this influence does not require training or experience with the objects and related semantic information. Therefore, our study is the first to show effects of cognition on perception while ruling out the possibility that prior knowledge merely acts by preactivating or altering stored visual representations. Instead, this knowledge seems to alter perception online, thus providing a compelling case against the impenetrability of perception by cognition.
Affiliation(s)
- Alexander Enge
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Research Group Learning in Early Childhood, 04103 Leipzig, Germany
- Franziska Süß
- Fachhochschule des Mittelstands, 96050 Bamberg, Germany
- Rasha Abdel Rahman
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Cluster of Excellence "Science of Intelligence," 10587 Berlin, Germany
5
Carvalho PF, Goldstone RL. A Computational Model of Context-Dependent Encodings During Category Learning. Cogn Sci 2022; 46:e13128. [PMID: 35411959] [PMCID: PMC9285726] [DOI: 10.1111/cogs.13128]
Abstract
Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local changes in category learning, such as the effect of different sequences of study. In this paper, we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context) as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). By fitting SAT-M to data from experiments comparing category learning with different sequences of trials (interleaved vs. blocked), we demonstrate that SAT-M captures the effect of local context and predicts when interleaved or blocked training will result in better testing performance across three different studies. Comparatively, ALCOVE, SUSTAIN, and a version of SAT-M without locally adaptive encoding provided poor fits to the results. Moreover, we evaluated the direct prediction of the model that different sequences of training change what learners encode and determined that the best-fit encoding parameter values match learners' looking times during training.
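SAT-M itself is not specified in this abstract, but the exemplar-model family it extends can be sketched. Below is a hedged toy version of a Generalized Context Model-style classifier with attention weights; the function name, stimuli, and parameter values are illustrative assumptions, not the paper's code:

```python
from math import exp

def gcm_probability(probe, exemplars, weights, c=1.0):
    """Toy Generalized Context Model: category probabilities for a probe
    from summed, attention-weighted similarity to stored exemplars."""
    sims = {}
    for features, category in exemplars:
        # Attention-weighted city-block distance between probe and exemplar.
        d = sum(w * abs(p - f) for w, p, f in zip(weights, probe, features))
        sims[category] = sims.get(category, 0.0) + exp(-c * d)
    total = sum(sims.values())
    return {cat: s / total for cat, s in sims.items()}

# Made-up 2D stimuli from two categories.
exemplars = [((0.0, 0.0), 'A'), ((0.1, 0.2), 'A'),
             ((1.0, 1.0), 'B'), ((0.9, 0.8), 'B')]
probs = gcm_probability((0.05, 0.1), exemplars, weights=(0.5, 0.5))
# A probe near the 'A' cluster yields P(A) > P(B).
```

Models such as ALCOVE and SUSTAIN build on this kind of similarity computation; SAT-M's contribution, per the abstract, is letting the encoding of each item additionally depend on its temporally neighboring items, which this sketch deliberately omits.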
Affiliation(s)
- Robert L. Goldstone
- Department of Psychological and Brain Sciences, Cognitive Science Program, Indiana University
6
Ahn S, Zelinsky GJ, Lupyan G. Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks. J Vis 2021; 21:13. [PMID: 34967860] [PMCID: PMC8727315] [DOI: 10.1167/jov.21.13.13]
Abstract
Human visual recognition is outstandingly robust. People can recognize thousands of object classes in the blink of an eye (50–200 ms) even when the objects vary in position, scale, viewpoint, and illumination. What aspects of human category learning facilitate the extraction of invariant visual features for object recognition? Here, we explore the possibility that a contributing factor to learning such robust visual representations may be a taxonomic hierarchy communicated in part by common labels to which people are exposed as part of natural language. We did this by manipulating the taxonomic level of labels (e.g., superordinate-level [mammal, fruit, vehicle] and basic-level [dog, banana, van]), and the order in which these training labels were used during learning by a Convolutional Neural Network. We found that training the model with hierarchical labels yields visual representations that are more robust to image transformations (e.g., position/scale, illumination, noise, and blur), especially when images were first trained with superordinate labels and then fine-tuned with basic labels. We also found that superordinate-label followed by basic-label training best predicts functional magnetic resonance imaging responses in visual cortex and behavioral similarity judgments recorded while viewing naturalistic images. The benefits of training with superordinate labels in the earlier stages of category learning are discussed in the context of representational efficiency and generalization.
Affiliation(s)
- Seoyoung Ahn
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
- Gregory J Zelinsky
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
7
Abstract
Categorical perception refers to the enhancement of perceptual sensitivity near category boundaries, generally along dimensions that are informative about category membership. However, it remains unclear exactly which dimensions are treated as informative and why. This article reports a series of experiments in which subjects were asked to learn statistically defined categories in a novel, unfamiliar 2D perceptual space of shapes. Perceptual discrimination was tested before and after category learning of various features in the space, each defined by its position and orientation relative to the maximally informative dimension. The results support a remarkably simple generalization: The magnitude of improvement in perceptual discrimination of each feature is proportional to the mutual information between the feature and the category variable. This finding suggests a rational basis for categorical perception in which the precision of perceptual discrimination is tuned to the statistical structure of the environment.
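The key quantitative claim here is that the improvement in discrimination of a feature is proportional to the mutual information between that feature and the category variable. As a hedged illustration of the quantity itself (not the article's analysis code), mutual information for discrete samples can be estimated with a simple plug-in formula; the function name and toy data below are assumptions:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts of (x, y)
    px = Counter(x for x, _ in pairs)    # marginal counts of x
    py = Counter(y for _, y in pairs)    # marginal counts of y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A feature that perfectly predicts a balanced binary category carries 1 bit:
perfect = [(0, 'A'), (0, 'A'), (1, 'B'), (1, 'B')]
# A feature independent of the category carries 0 bits:
useless = [(0, 'A'), (1, 'A'), (0, 'B'), (1, 'B')]
print(mutual_information(perfect))  # 1.0
print(mutual_information(useless))  # 0.0
```

Under the article's generalization, discrimination gains would be maximal for a feature like `perfect` and negligible for one like `useless`.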
Affiliation(s)
- Jacob Feldman
- Department of Psychology, Center for Cognitive Science, Rutgers University
8
Abstract
In our exploratory study, we ask how naive observers, without a distinct religious background, approach biblical art that combines image and text. For this purpose, we chose the book ‘New biblical figures of the Old and New Testament’, published in 1569, as the source of the stimuli. This book belongs to the genre of illustrated Bibles, which were very popular during the Reformation. Since there is no empirical knowledge regarding the interaction between image and text during the reception of such biblical art, we selected four relevant images from the book and measured the eye movements of participants in order to characterize and quantify their scanning behavior related to such stimuli in terms of i) looking at text (text usage), ii) text vs. image interaction measures (semantic or contextual relevance of text), and iii) narration. We show that texts capture attention early in the process of inspection and that text and image interact. Moreover, the semantics of texts are used later to guide eye movements through the image, supporting the formation of the narrative.
9
Abstract
Many objects that we encounter have typical material qualities: spoons are hard, pillows are soft, and Jell-O dessert is wobbly. Over a lifetime of experiences, strong associations between an object and its typical material properties may be formed, and these associations not only include how glossy, rough, or pink an object is, but also how it behaves under force: we expect knocked over vases to shatter, popped bike tires to deflate, and gooey grilled cheese to hang between two slices of bread when pulled apart. Here we ask how such rich visual priors affect the visual perception of material qualities and present a particularly striking example of expectation violation. In a cue conflict design, we pair computer-rendered familiar objects with surprising material behaviors (a linen curtain shattering, a porcelain teacup wrinkling, etc.) and find that material qualities are not solely estimated from the object's kinematics (i.e., its physical [atypical] motion while shattering, wrinkling, wobbling etc.); rather, material appearance is sometimes “pulled” toward the “native” motion, shape, and optical properties that are associated with this object. Our results, in addition to patterns we find in response time data, suggest that visual priors about materials can set up high-level expectations about complex future states of an object and show how these priors modulate material appearance.
Affiliation(s)
- Katja Doerschner
- Justus Liebig University, Giessen, Germany
- Bilkent University, Ankara, Turkey
10
Lupyan G, Abdel Rahman R, Boroditsky L, Clark A. Effects of Language on Visual Perception. Trends Cogn Sci 2020; 24:930-944. [PMID: 33012687] [DOI: 10.1016/j.tics.2020.08.005]
Abstract
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA
- Andy Clark
- University of Sussex, Brighton, UK; Macquarie University, Sydney, Australia
11
Collins E, Behrmann M. Exemplar learning reveals the representational origins of expert category perception. Proc Natl Acad Sci U S A 2020; 117:11167-11177. [PMID: 32366664] [PMCID: PMC7245133] [DOI: 10.1073/pnas.1912734117]
Abstract
Irrespective of whether one has substantial perceptual expertise for a class of stimuli, an observer invariably encounters novel exemplars from this class. To understand how novel exemplars are represented, we examined the extent to which previous experience with a category constrains the acquisition and nature of representation of subsequent exemplars from that category. Participants completed a perceptual training paradigm with either novel other-race faces (category of experience) or novel computer-generated objects (YUFOs) that included pairwise similarity ratings at the beginning, middle, and end of training, and a 20-day visual search training task on a subset of category exemplars. Analyses of pairwise similarity ratings revealed multiple dissociations between the representational spaces for those learning faces and those learning YUFOs. First, representational distance changes were more selective for faces than YUFOs; trained faces exhibited greater magnitude in representational distance change relative to untrained faces, whereas this trained-untrained distance change was much smaller for YUFOs. Second, there was a difference in where the representational distance changes were observed; for faces, representations that were closer together before training exhibited a greater distance change relative to those that were farther apart before training. For YUFOs, however, the distance changes occurred more uniformly across representational space. Last, there was a decrease in dimensionality of the representational space after training on YUFOs, but not after training on faces. Together, these findings demonstrate how previous category experience governs representational patterns of exemplar learning as well as the underlying dimensionality of the representational space.
Affiliation(s)
- Elliot Collins
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- School of Medicine, University of Pittsburgh, Pittsburgh, PA 15260
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
12
Freeman JB, Stolier RM, Brooks JA. Dynamic interactive theory as a domain-general account of social perception. Adv Exp Soc Psychol 2019; 61:237-287. [PMID: 34326560] [PMCID: PMC8317542] [DOI: 10.1016/bs.aesp.2019.09.005]
Abstract
The perception of social categories, emotions, and personality traits from others' faces has each been studied extensively, but in relative isolation. We synthesize emerging findings suggesting that, in each of these domains of social perception, both a variety of bottom-up facial features and top-down social cognitive processes play a part in driving initial perceptions. Among such top-down processes, social-conceptual knowledge in particular can have a fundamental structuring role in how we perceive others' faces. Extending the Dynamic Interactive framework (Freeman & Ambady, 2011), we outline a perspective whereby the perception of social categories, emotions, and traits from faces can all be conceived as emerging from an integrated system relying on domain-general cognitive properties. Such an account of social perception envisions perception as a rapid but gradual process of negotiation between the variety of visual cues inherent to a person and the social-cognitive knowledge an individual perceiver brings to the perceptual process. We describe growing evidence in support of this perspective as well as its theoretical implications for social psychology.
13
Fixating the eyes of a speaker provides sufficient visual information to modulate early auditory processing. Biol Psychol 2019; 146:107724. [PMID: 31323242] [DOI: 10.1016/j.biopsycho.2019.107724]
Abstract
In face-to-face conversations, when listeners process and combine information obtained from hearing and seeing a speaker, they mostly look at the eyes rather than at the more informative mouth region. Measuring event-related potentials, we tested whether fixating the speaker's eyes is sufficient for gathering enough visual speech information to modulate early auditory processing, or whether covert attention to the speaker's mouth is needed. Results showed that when listeners fixated the eye region of the speaker, the amplitudes of the auditory evoked N1 and P2 were reduced when listeners heard and saw the speaker than when they only heard her. These cross-modal interactions also occurred when, in addition, attention was restricted to the speaker's eye region. Fixating the speaker's eyes thus provides listeners with sufficient visual information to facilitate early auditory processing. The spread of covert attention to the mouth area is not needed to observe audiovisual interactions.
14
Richler JJ, Tomarken AJ, Sunday MA, Vickery TJ, Ryan KF, Floyd RJ, Sheinberg D, Wong AC, Gauthier I. Individual differences in object recognition. Psychol Rev 2019; 126:226-251. [PMID: 30802123] [PMCID: PMC6484857] [DOI: 10.1037/rev0000129]
Abstract
There is substantial evidence for individual differences in personality and cognitive abilities, but we lack clear intuitions about individual differences in visual abilities. Previous work on this topic has typically compared performance with only 2 categories, each measured with only 1 task. This approach is insufficient for demonstration of domain-general effects. Most previous work has used familiar object categories, for which experience may vary between participants and categories, thereby reducing correlations that would stem from a common factor. In Study 1, we adopted a latent variable approach to test for the first time whether there is a domain-general object recognition ability, o. We assessed whether shared variance between latent factors representing performance for each of 5 novel object categories could be accounted for by a single higher-order factor. On average, 89% of the variance of lower-order factors denoting performance on novel object categories could be accounted for by a higher-order factor, providing strong evidence for o. Moreover, o also accounted for a moderate proportion of variance in tests of familiar object recognition. In Study 2, we assessed whether the strong association across categories in object recognition is due to third-variable influences. We find that o has weak to moderate associations with a host of cognitive, perceptual, and personality constructs and that a clear majority of the variance in and covariance between performance on different categories is independent of fluid intelligence. This work provides the first demonstration of a reliable, specific, and domain-general object recognition ability, and suggests a rich framework for future work in this area.
15
General Transformations of Object Representations in Human Visual Cortex. J Neurosci 2018; 38:8526-8537. [PMID: 30126975] [DOI: 10.1523/jneurosci.2800-17.2018]
Abstract
The brain actively represents incoming information, but these representations are only useful to the extent that they flexibly reflect changes in the environment. How does the brain transform representations across changes, such as in size or viewing angle? We conducted an fMRI experiment and a magnetoencephalography experiment in humans (both sexes) in which participants viewed objects before and after affine viewpoint changes (rotation, translation, enlargement). We used a novel approach, representational transformation analysis, to derive transformation functions that linked the distributed patterns of brain activity evoked by an object before and after an affine change. Crucially, transformations derived from one object could predict a postchange representation for novel objects. These results provide evidence of general operations in the brain that are distinct from neural representations evoked by particular objects and scenes. SIGNIFICANCE STATEMENT: The dominant focus in cognitive neuroscience has been on how the brain represents information, but these representations are only useful to the extent that they flexibly reflect changes in the environment. How does the brain transform representations, such as linking two states of an object, for example, before and after an object undergoes a physical change? We used a novel method to derive transformations between the brain activity evoked by an object before and after an affine viewpoint change. We show that transformations derived from one object undergoing a change generalized to a novel object undergoing the same change. This result shows that there are general perceptual operations that transform object representations from one state to another.
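The abstract does not specify the representational transformation analysis itself, but its core logic (fit a map from pre- to post-change activity patterns for one object, then test whether it predicts the post-change pattern of a novel object) can be sketched in a hedged, noise-free toy form. All array shapes, names, and the linear-map assumption below are illustrative, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activity patterns: rows = trials, columns = measurement channels.
# Assume the viewpoint change acts as one common linear map T on all patterns.
T_true = rng.normal(size=(20, 20))
pre_obj1 = rng.normal(size=(50, 20))   # object 1, before the change
post_obj1 = pre_obj1 @ T_true          # object 1, after the change

# Estimate the transformation from object 1 alone via least squares.
T_hat, *_ = np.linalg.lstsq(pre_obj1, post_obj1, rcond=None)

# Generalization test: predict the post-change pattern of a NOVEL object.
pre_obj2 = rng.normal(size=(1, 20))
pred = pre_obj2 @ T_hat
actual = pre_obj2 @ T_true
print(np.allclose(pred, actual))  # True in this noise-free toy case
```

With real neuroimaging data the fit would be noisy, so the generalization test would compare prediction accuracy against a baseline (e.g. via cross-validation) rather than expect exact recovery.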
16
Quaddles: A multidimensional 3-D object set with parametrically controlled and customizable features. Behav Res Methods 2018; 51:2522-2532. [PMID: 30088255] [DOI: 10.3758/s13428-018-1097-5]
Abstract
Many studies of vision and cognition require novel three-dimensional object sets defined by a parametric feature space. Creating such sets and verifying that they are suitable for a given task, however, can be time-consuming and effortful. Here we present a new set of multidimensional objects, Quaddles, designed for studies of feature-based learning and attention, but adaptable for many research purposes. Quaddles have features that are all equally visible from any angle around the vertical axis and can be designed to be equally discriminable along feature dimensions; these objects do not show strong or consistent response biases, with a small number of quantified exceptions. They are available as two-dimensional images, rotating videos, and FBX object files suitable for use with any modern video game engine. We also provide scripts that can be used to generate hundreds of thousands of further Quaddles, as well as examples and tutorials for modifying Quaddles or creating completely new object sets from scratch, with the aim of speeding up the development of future novel-object studies.
17
Specific problems in visual cognition of dyslexic readers: Face discrimination deficits predict dyslexia over and above discrimination of scrambled faces and novel objects. Cognition 2018; 175:157-168. [DOI: 10.1016/j.cognition.2018.02.017]
18
Almaraz SM, Hugenberg K, Young SG. Perceiving Sophisticated Minds Influences Perceptual Individuation. Pers Soc Psychol Bull 2017; 44:143-157. [PMID: 29094646] [DOI: 10.1177/0146167217733070]
Abstract
In six studies, we investigated how ascribing humanlike versus animallike minds to targets influences how easily targets are individuated. Across the studies, participants learned to discriminate among a variety of "aliens" (actually Greebles). Our initial study showed that participants' ability to learn to individuate targets was related to beliefs that targets had sophisticated minds. Investigating the directionality of this relationship, we found that learning to better recognize the targets did not affect perceptions of mind (Study 2). However, when targets were described as having sophisticated humanlike (relative to simplistic animallike) mental faculties, perceivers indicated more motivation to individuate (Study 3) and were more successful individuating them (Studies 4 and 5). Finally, we showed that increased self-similarity mediated the relationship between targets' mental sophistication and perceivers' motivation to individuate (Study 6). These findings indicate ascribing sophisticated mental faculties to others has implications for how we individuate them.
Affiliation(s)
- Steven G Young
- Baruch College, City University of New York, New York City, USA

19
Chen CH, Gershkoff-Stowe L, Wu CY, Cheung H, Yu C. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers. Cogn Sci 2017; 41:1485-1509. [PMID: 27671780] [PMCID: PMC5366274] [DOI: 10.1111/cogs.12417]
Abstract
Two experiments were conducted to examine adult learners' ability to extract multiple statistics from simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than in English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities.
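The co-occurrence tracking that underlies cross-situational word learning can be sketched in a few lines; the lexicon size, trial structure, and names below are hypothetical stand-ins for the actual stimuli:

```python
import random
from collections import defaultdict

random.seed(1)
# Hypothetical lexicon: the true word-object pairs the learner must recover.
true_map = {f"word{i}": f"object{i}" for i in range(6)}
words = list(true_map)

# Each training trial presents 3 words with their 3 referents, unordered,
# so no single trial disambiguates any pairing.
cooc = defaultdict(lambda: defaultdict(int))
for _ in range(100):
    shown = random.sample(words, 3)
    objects = [true_map[w] for w in shown]
    for w in shown:
        for o in objects:
            cooc[w][o] += 1

# A word's true referent co-occurs with it on every trial; foils only
# sometimes, so the column maximum recovers the mapping.
learned = {w: max(cooc[w], key=cooc[w].get) for w in words}
print(learned == true_map)
```

Tracking a second statistic (e.g., shared features across the objects paired with a syllable) would follow the same accumulation scheme over a different pairing of events.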
Affiliation(s)
- Chi-hsin Chen
- Department of Psychological and Brain Sciences, Indiana University
- Chih-Yi Wu
- Graduate Institute of Linguistics, National Taiwan University
- Hintat Cheung
- Department of Linguistics and Modern Language Studies, The Education University of Hong Kong
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University

20
Zahn R, Green S, Beaumont H, Burns A, Moll J, Caine D, Gerhard A, Hoffman P, Shaw B, Grafman J, Lambon Ralph MA. Frontotemporal lobar degeneration and social behaviour: Dissociation between the knowledge of its consequences and its conceptual meaning. Cortex 2017. [PMID: 28646671] [PMCID: PMC5542070] [DOI: 10.1016/j.cortex.2017.05.009]
Abstract
Inappropriate social behaviour is an early symptom of frontotemporal lobar degeneration (FTLD) in both behavioural variant frontotemporal dementia (bvFTD) and semantic dementia (SD) subtypes. Knowledge of social behaviour is essential for appropriate social conduct. The superior anterior temporal lobe (ATL) has been identified as one key neural component for the conceptual knowledge of social behaviour, but it is unknown whether this is dissociable from knowledge of the consequences of social behaviour. Here, we used a newly-developed test of knowledge about long-term and short-term consequences of social behaviour to investigate its impairment in patients with FTLD relative to a previously-developed test of social conceptual knowledge. We included 19 healthy elderly control participants and 19 consecutive patients with features of bvFTD or SD and defined dissociations as performance differences between tasks for each patient (Bonferroni-corrected p < .05). Knowledge of long-term consequences was selectively impaired relative to short-term consequences in five patients and the reverse dissociation occurred in one patient. Six patients showed a selective impairment of social concepts relative to long-term consequences with the reverse dissociation occurring in one patient. These results corroborate the hypothesis that knowledge of long-term consequences of social behaviour is dissociable from knowledge of short-term consequences, as well as of social conceptual knowledge. Confirming our hypothesis, we found that patients with more marked grey matter (GM) volume loss in frontopolar relative to right superior ATL regions of interest exhibited poorer knowledge of the long-term consequences of social behaviour relative to the knowledge of its conceptual meaning and vice versa (n = 15). These findings support the hypothesis that frontopolar and ATL regions represent distinct aspects of social knowledge. 
This suggests that rather than being unable to suppress urges to behave inappropriately, FTLD patients often lose the knowledge of what appropriate social behaviour is and can therefore not be expected to behave accordingly.
Affiliation(s)
- Roland Zahn
- Institute of Psychiatry, Psychology & Neuroscience, Department of Psychological Medicine, King's College London, London, SE5 8AZ, UK; Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK
- Sophie Green
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK
- Helen Beaumont
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK; Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Alistair Burns
- Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK
- Jorge Moll
- Cognitive and Behavioral Neuroscience Unit, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, RJ, Brazil
- Diana Caine
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK; National Hospital for Neurology & Neurosurgery, Queen Square, London, UK
- Alexander Gerhard
- Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK; Department of Nuclear Medicine and Geriatric Medicine, University Hospital Essen, Germany
- Paul Hoffman
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK
- Benjamin Shaw
- Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK
- Jordan Grafman
- Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Matthew A Lambon Ralph
- Neuroscience and Aphasia Research Unit, Division of Neuroscience and Experimental Psychology, The University of Manchester, Manchester, UK

21
Vuong QC, Willenbockel V, Zimmermann FGS, Lochy A, Laguesse R, Dryden A, Rossion B. Facelikeness matters: A parametric multipart object set to understand the role of spatial configuration in visual recognition. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1289997]
Affiliation(s)
- Quoc C. Vuong
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Friederike G. S. Zimmermann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-La-Neuve, Belgium
- Aliette Lochy
- Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-La-Neuve, Belgium
- Renaud Laguesse
- Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-La-Neuve, Belgium
- Adam Dryden
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Bruno Rossion
- Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-La-Neuve, Belgium

22
Interference from related actions in spoken word production: Behavioural and fMRI evidence. Neuropsychologia 2017; 96:78-88. [DOI: 10.1016/j.neuropsychologia.2017.01.010]
23
Clarke A, Pell PJ, Ranganath C, Tyler LK. Learning Warps Object Representations in the Ventral Temporal Cortex. J Cogn Neurosci 2016; 28:1010-23. [PMID: 26967942] [DOI: 10.1162/jocn_a_00951]
Abstract
The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., "made of wood," "floats") and spatial contextual associations (e.g., "found in gardens") with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information.
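The pattern-similarity logic here — higher correlation between activity patterns for objects that share a learned contextual association than for those that do not — can be illustrated with simulated voxel patterns; the labels, dimensions, and noise model are invented for the sketch, not the study's fMRI data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100

# Hypothetical post-learning patterns: objects sharing a learned
# contextual association ("garden") get a common signal component
# added to object-specific noise; an unrelated object does not.
context_signal = rng.standard_normal(n_voxels)
garden_a = context_signal + rng.standard_normal(n_voxels)
garden_b = context_signal + rng.standard_normal(n_voxels)
kitchen_c = rng.standard_normal(n_voxels)

def pattern_similarity(p, q):
    """Pearson correlation between two voxel activity patterns."""
    return np.corrcoef(p, q)[0, 1]

within = pattern_similarity(garden_a, garden_b)   # shared association
across = pattern_similarity(garden_a, kitchen_c)  # no shared association
print(within > across)
```

The learning-induced effect reported in the abstract corresponds to the within-association similarity rising relative to the across-association similarity after training.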
Affiliation(s)
- Alex Clarke
- University of Cambridge, UK; University of California, Davis

24
Folstein J, Palmeri TJ, Van Gulick AE, Gauthier I. Category Learning Stretches Neural Representations in Visual Cortex. Curr Dir Psychol Sci 2015; 24:17-23. [PMID: 25745280] [DOI: 10.1177/0963721414550707]
Abstract
We review recent work that shows how learning to categorize objects changes how those objects are represented in the mind and the brain. After category learning, visual perception of objects is enhanced along perceptual dimensions that were relevant to the learned categories, an effect we call dimensional modulation (DM). DM stretches object representations along category-relevant dimensions and shrinks them along category-irrelevant dimensions. The perceptual advantage for category-relevant dimensions extends beyond categorization and can be observed during visual discrimination and other tasks that do not depend on the learned categories. fMRI shows that category learning causes ventral stream neural populations in visual cortex representing objects along a category-relevant dimension to become more distinct. These results are consistent with a view that specific aspects of cognitive tasks associated with objects can account for how our visual system responds to objects.
25
Goals and task difficulty expectations modulate striatal responses to feedback. Cogn Affect Behav Neurosci 2015; 14:610-20. [PMID: 24638235] [PMCID: PMC4072914] [DOI: 10.3758/s13415-014-0269-8]
Abstract
The striatum plays a critical role in learning from reward, and it has been implicated in learning from performance-related feedback as well. Positive and negative performance-related feedback is known to engage the striatum during learning by eliciting a response similar to the reinforcement signal for extrinsic rewards and punishments. Feedback is an important tool used to teach new skills and promote healthful lifestyle changes, so it is important to understand how motivational contexts can modulate its effectiveness at promoting learning. While it is known that striatal responses scale with subjective factors influencing the desirability of rewards, it is less clear how expectations and goals might modulate the striatal responses to cognitive feedback during learning. We used functional magnetic resonance imaging to investigate the effects of task difficulty expectations and achievement goals on feedback processing during learning. We found that individuals who scored high in normative goals, which reflect a desire to outperform other students academically, showed the strongest effects of our manipulation. High levels of normative goals were associated with greater performance gains and exaggerated striatal sensitivity to positive versus negative feedback during blocks that were expected to be more difficult. Our findings suggest that normative goals may enhance performance when difficulty expectations are high, while at the same time modulating the subjective value of feedback as processed in the striatum.
26
Maier M, Glage P, Hohlfeld A, Abdel Rahman R. Does the semantic content of verbal categories influence categorical perception? An ERP study. Brain Cogn 2014; 91:1-10. [PMID: 25163810] [DOI: 10.1016/j.bandc.2014.07.008]
Abstract
Accumulating evidence suggests that visual perception and, in particular, visual discrimination, can be influenced by verbal category boundaries. One issue that still awaits systematic investigation is the specific influence of semantic contents of verbal categories on categorical perception (CP). We tackled this issue with a learning paradigm in which initially unfamiliar, yet realistic objects were associated with either bare labels lacking explicit semantic content or labels that were accompanied by enriched semantic information about the specific meaning of the label. Two to three days after learning, the EEG was recorded while participants performed a lateralized oddball task. Newly acquired verbal category boundaries modulated low-level aspects of visual perception as early as 100-150 ms after stimulus onset, suggesting a genuine influence of language on perception. Importantly, this effect was not further influenced by enriched semantic category information, suggesting that bare labels and the associated minimal and predominantly perceptual information are sufficient for CP. Distinct effects of semantic knowledge independent of category boundaries were found subsequently, starting at about 200 ms, possibly reflecting selective attention to semantically meaningful visual features.
Affiliation(s)
- Martin Maier
- Humboldt-Universität zu Berlin, Berlin, Germany

27
Abstract
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Affiliation(s)
- Jessica A Collins
- Department of Psychology, Temple University, Weiss Hall, 1701 North 13th Street, Philadelphia, PA, 19122, USA

28
Cheung OS, Gauthier I. Visual appearance interacts with conceptual knowledge in object recognition. Front Psychol 2014; 5:793. [PMID: 25120509] [PMCID: PMC4114261] [DOI: 10.3389/fpsyg.2014.00793]
Abstract
Objects contain rich visual and conceptual information, but do these two types of information interact? Here, we examine whether visual and conceptual information interact when observers see novel objects for the first time. We then address how this interaction influences the acquisition of perceptual expertise. We used two types of novel objects (Greebles), designed to resemble either animals or tools, and two lists of words, which described non-visual attributes of people or man-made objects. Participants first judged if a word was more suitable for describing people or objects while ignoring a task-irrelevant image, and showed faster responses if the words and the unfamiliar objects were congruent in terms of animacy (e.g., animal-like objects with words that described humans). Participants then learned to associate objects and words that were either congruent or not in animacy, before receiving expertise training to rapidly individuate the objects. Congruent pairing of visual and conceptual information facilitated observers' ability to become a perceptual expert, as revealed in a matching task that required visual identification at the basic or subordinate levels. Taken together, these findings show that visual and conceptual information interact at multiple levels in object recognition.
Affiliation(s)
- Olivia S Cheung
- Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Mind/Brain Sciences, University of Trento, Trentino, Italy
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, Nashville, TN, USA

29
Collins JA, Curby KM. Conceptual knowledge attenuates viewpoint dependency in visual object recognition. Visual Cognition 2013. [DOI: 10.1080/13506285.2013.836138]
30
Campanella F, Fabbro F, Urgesi C. Cognitive and anatomical underpinnings of the conceptual knowledge for common objects and familiar people: a repetitive transcranial magnetic stimulation study. PLoS One 2013; 8:e64596. [PMID: 23704999] [PMCID: PMC3660352] [DOI: 10.1371/journal.pone.0064596]
Abstract
Several studies have addressed the issue of how knowledge of common objects is organized in the brain, whereas the cognitive and anatomical underpinnings of familiar people knowledge have been less explored. Here we applied repetitive transcranial magnetic stimulation (rTMS) over the left and right temporal poles before asking healthy individuals to perform a speeded word-to-picture matching task using familiar people and common objects as stimuli. We manipulated two widely used semantic variables, namely the semantic distance and the familiarity of stimuli, to assess whether the semantic organization of familiar people knowledge is similar to that of common objects. For both objects and faces we reliably found semantic distance and familiarity effects, with less accurate and slower responses for stimulus pairs that were more closely related and less familiar. However, the effects of semantic variables differed across categories, with semantic distance effects larger for objects and familiarity effects larger for faces, suggesting that objects and faces might share a partially comparable organization of their semantic representations. The application of rTMS to the left temporal pole modulated, for both categories, semantic distance, but not familiarity effects, revealing that accessing object and face concepts might rely on overlapping processes within left anterior temporal regions. Crucially, rTMS of the left temporal pole affected only the recognition of pairs of stimuli that could be discriminated at specific levels of categorization (e.g., two kitchen tools or two famous persons), with no effect for discriminations at either superordinate or individual levels. Conversely, rTMS of the right temporal pole induced an overall slowing of reaction times that positively correlated with the visual similarity of the stimuli, suggesting a more perceptual rather than semantic role of the right anterior temporal regions. 
Results are discussed in the light of current models of face and object semantic representations in the brain.
Affiliation(s)
- Fabio Campanella
- Neurosurgery Unit, Azienda Ospedaliero-Universitaria ‘Santa Maria della Misericordia’, Udine, Italy; Department of Human Sciences, University of Udine, Udine, Italy
- Franco Fabbro
- Department of Human Sciences, University of Udine, Udine, Italy; Istituto di Ricovero e Cura a Carattere Scientifico ‘E. Medea’, Polo Regionale Friuli Venezia Giulia, San Vito al Tagliamento, Pordenone, Italy
- Cosimo Urgesi
- Department of Human Sciences, University of Udine, Udine, Italy; Istituto di Ricovero e Cura a Carattere Scientifico ‘E. Medea’, Polo Regionale Friuli Venezia Giulia, San Vito al Tagliamento, Pordenone, Italy

31
Schuster D, Rivera J, Sellers BC, Fiore SM, Jentsch F. Perceptual training for visual search. Ergonomics 2013; 56:1101-1115. [PMID: 23650877] [DOI: 10.1080/00140139.2013.790481]
Abstract
People are better at visual search than the best fully automated methods. Despite this, visual search remains a difficult perceptual task. The goal of this investigation was to experimentally test the ways in which visual search performance could be improved through two categories of training interventions: perceptual training and conceptual training. To determine the effects of each training on a later performance task, the two types of training were manipulated using a between-subjects design (conceptual vs. perceptual × training present vs. training absent). Perceptual training led to speed and accuracy improvements in visual search. Issues with the design and administration of the conceptual training limited conclusions on its effectiveness but provided useful lessons for conceptual training design. The results suggest that when the visual search task involves detecting heterogeneous or otherwise unpredictable stimuli, perceptual training can improve visual search performance. Similarly, careful consideration of the performance task and training design is required to evaluate the effectiveness of conceptual training. PRACTITIONER SUMMARY: Visual search is a difficult, yet critical, task in industries such as baggage screening and radiology. This study investigated the effectiveness of perceptual training for visual search. The results suggest that when visual search involves detecting heterogeneous or otherwise unpredictable stimuli, perceptual training may improve the speed and accuracy of visual search.
Affiliation(s)
- David Schuster
- Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA

32
Yim D, Rudoy J. Implicit statistical learning and language skills in bilingual children. J Speech Lang Hear Res 2013; 56:310-322. [PMID: 22896046] [DOI: 10.1044/1092-4388(2012/11-0243)]
Abstract
PURPOSE: Implicit statistical learning in 2 nonlinguistic domains (visual and auditory) was used to investigate (a) whether linguistic experience influences the underlying learning mechanism and (b) whether there are modality constraints in predicting implicit statistical learning with age and language skills. METHOD: Implicit statistical learning was examined in visual and auditory domains. One hundred twelve native English-speaking monolinguals and Spanish-English bilinguals aged 5-13 years participated in the study. Language skills were measured by standardized language tests. RESULTS: The overall results showed that all children implicitly learned statistical regularities above chance level in both modalities. However, there was no group difference between monolingual and bilingual children on either visual or auditory tasks. Lastly, a different tendency in predicting implicit statistical learning was observed for each group. In the monolingual group, both age and language scores significantly explained auditory statistical learning, whereas age explained visual statistical learning. In the bilingual group, age explained auditory statistical learning, and nothing was significant for visual statistical learning. CONCLUSIONS: These findings are discussed in terms of the extent to which implicit statistical learning is influenced by internal and external factors and a consideration of important notions when testing bilingual children's language skills.
33
Breadmore HL, Olson AC, Krott A. Deaf and hearing children's plural noun spelling. Q J Exp Psychol (Hove) 2012; 65:2169-92. [DOI: 10.1080/17470218.2012.684694]
Abstract
The present study examines deaf and hearing children's spelling of plural nouns. Severe literacy impairments are well documented in the deaf, which are believed to be a consequence of phonological awareness limitations. Fifty deaf (mean chronological age 13;10 years, mean reading age 7;5 years) and 50 reading-age-matched hearing children produced spellings of regular, semiregular, and irregular plural nouns in Experiment 1 and nonword plurals in Experiment 2. Deaf children performed reading-age appropriately on rule-based (regular and semiregular) plurals but were significantly less accurate at spelling irregular plurals. Spelling of plural nonwords and spelling error analyses revealed clear evidence for use of morphology. Deaf children used morphological generalization to a greater degree than their reading-age-matched hearing counterparts. Also, hearing children combined use of phonology and morphology to guide spelling, whereas deaf children appeared to use morphology without phonological mediation. Therefore, use of morphology in spelling can be independent of phonology and is available to the deaf despite limited experience with spoken language. Indeed, deaf children appear to be learning about morphology from the orthography. Education on more complex morphological generalization and exceptions may be highly beneficial not only for the deaf but also for other populations with phonological awareness limitations.
Affiliation(s)
- Andrew C. Olson
- School of Psychology, University of Birmingham, Birmingham, UK
- Andrea Krott
- School of Psychology, University of Birmingham, Birmingham, UK

34
Keane BP, Lu H, Papathomas TV, Silverstein SM, Kellman PJ. Is interpolation cognitively encapsulated? Measuring the effects of belief on Kanizsa shape discrimination and illusory contour formation. Cognition 2012; 123:404-18. [PMID: 22440789] [PMCID: PMC3548673] [DOI: 10.1016/j.cognition.2012.02.004]
Abstract
Contour interpolation is a perceptual process that fills in missing edges on the basis of how surrounding edges (inducers) are spatiotemporally related. Cognitive encapsulation refers to the degree to which perceptual mechanisms act in isolation from beliefs, expectations, and utilities (Pylyshyn, 1999). Is interpolation encapsulated from belief? We addressed this question by having subjects discriminate briefly presented, partially visible fat and thin shapes, the edges of which either induced or did not induce illusory contours (relatable and non-relatable conditions, respectively). Half the trials in each condition incorporated task-irrelevant distractor lines, known to disrupt the filling-in of contours. Half of the observers were told that the visible parts of the shape belonged to a single thing (group strategy); the other half were told that the visible parts were disconnected (ungroup strategy). It was found that distractor lines strongly impaired performance in the relatable condition, but minimally in the non-relatable condition; that strategy did not alter the effects of the distractor lines for either the relatable or non-relatable stimuli; and that cognitively grouping relatable fragments improved performance whereas cognitively grouping non-relatable fragments did not. These results suggest that (1) filling-in effects during illusory contour formation cannot be easily removed via strategy; (2) filling-in effects cannot be easily manufactured from stimuli that fail to elicit interpolation; and (3) actively grouping fragments can readily improve discrimination performance, but only when those fragments form interpolated contours. Taken together, these findings indicate that discriminating filled-in shapes depends on strategy but the filling-in process itself may be encapsulated from belief.
Affiliation(s)
- Brian P Keane
- Department of Psychology, University of California, Los Angeles, USA.

35
Albright TD. On the perception of probable things: neural substrates of associative memory, imagery, and perception. Neuron 2012; 74:227-45. [PMID: 22542178 PMCID: PMC3361508 DOI: 10.1016/j.neuron.2012.04.001] [Citation(s) in RCA: 96] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/03/2012] [Indexed: 11/18/2022]
Abstract
Perception is influenced both by the immediate pattern of sensory inputs and by memories acquired through prior experiences with the world. Throughout much of its illustrious history, however, study of the cellular basis of perception has focused on neuronal structures and events that underlie the detection and discrimination of sensory stimuli. Relatively little attention has been paid to the means by which memories interact with incoming sensory signals. Building upon recent neurophysiological/behavioral studies of the cortical substrates of visual associative memory, I propose a specific functional process by which stored information about the world supplements sensory inputs to yield neuronal signals that can account for visual perceptual experience. This perspective represents a significant shift in the way we think about the cellular bases of perception.
Affiliation(s)
- Thomas D Albright
- Center for the Neurobiology of Vision, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA.

36
Rabovsky M, Sommer W, Abdel Rahman R. Depth of Conceptual Knowledge Modulates Visual Processes during Word Reading. J Cogn Neurosci 2012; 24:990-1005. [PMID: 21861677 DOI: 10.1162/jocn_a_00117] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent evidence suggests that conceptual knowledge modulates early visual stages of object recognition. The present study investigated whether similar modulations can also be observed for the recognition of object names, that is, for symbolic representations with only arbitrary relationships between their visual features and the corresponding conceptual knowledge. In a learning paradigm, we manipulated the amount of information provided about initially unfamiliar visual objects while controlling for perceptual stimulus properties and exposure. In a subsequent test session with electroencephalographic recordings, participants performed several tasks on either the objects or their written names. For objects as well as names, knowledge effects were observed as early as about 120 msec in the P1 component of the ERP, reflecting perceptual processing in extrastriate visual cortex. These knowledge-dependent modulations of early stages of visual word recognition suggest that information about word meanings may modulate the perception of arbitrarily related visual features surprisingly early.
Affiliation(s)
- Milena Rabovsky
- Department of Psychology, Humboldt-Universität zu Berlin, Rudower Chaussee 18, 12489 Berlin, Germany.

37
Bukach CM, Vickery TJ, Kinka D, Gauthier I. Training experts: individuation without naming is worth it. J Exp Psychol Hum Percept Perform 2011; 38:14-7. [PMID: 21967271 DOI: 10.1037/a0025610] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
There is growing evidence that individuation experience is necessary for the development of expert object discrimination that transfers to new exemplars. Individuation training in human studies has primarily used label association tasks in which labels are learned at both the individual and more abstract (basic) level, and the expertise criterion requires that individual-level judgments become as fast as basic-level judgments. However, there are training situations in which the use of labels is not practical (e.g., with animals or some clinical populations). Moreover, labeling itself can facilitate object discrimination; thus, it is unclear what role labels play in the acquisition of expertise in such training paradigms. Here, participants completed an online game that did not require labels in which they interacted with novel objects (Greebles) or control objects (Yufos). Games required either individuation or categorization. We then assessed the impact of this exposure on an abridged Greeble training paradigm. As expected, participants who played Yufo games or Greeble categorization games showed a significant basic-level advantage for Greebles in the abridged training paradigm, typical of novices. However, participants who played the Greeble identity game showed a reduced basic-level advantage, suggesting that individuation without labels may be sufficient to acquire perceptual expertise.
Affiliation(s)
- Cindy M Bukach
- Department of Psychology, University of Richmond, VA 23173, USA.

38

39
Rogers TT, Lambon Ralph MA, Hodges JR, Patterson K. Natural selection: The impact of semantic impairment on lexical and object decision. Cogn Neuropsychol 2010; 21:331-52. [PMID: 21038209 DOI: 10.1080/02643290342000366] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- John R. Hodges
- MRC Cognition & Brain Sciences Unit and Addenbrooke's Hospital, Cambridge, UK

40
Konkle T, Brady TF, Alvarez GA, Oliva A. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. J Exp Psychol Gen 2010; 139:558-78. [PMID: 20677899 DOI: 10.1037/a0019165] [Citation(s) in RCA: 243] [Impact Index Per Article: 17.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers' capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness.
Affiliation(s)
- Talia Konkle
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA.

41
Martin A, Caramazza A. Neuropsychological and neuroimaging perspectives on conceptual knowledge: An introduction. Cogn Neuropsychol 2010; 20:195-212. [DOI: 10.1080/02643290342000050] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]

42
Wong YK, Gauthier I. A Multimodal Neural Network Recruited by Expertise with Musical Notation. J Cogn Neurosci 2010; 22:695-713. [PMID: 19320551 DOI: 10.1162/jocn.2009.21229] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Prior neuroimaging work on visual perceptual expertise has focused on changes in the visual system, ignoring possible effects of acquiring expert visual skills in nonvisual areas. We investigated expertise for reading musical notation, a skill likely to be associated with multimodal abilities. We compared brain activity in music-reading experts and novices during perception of musical notation, Roman letters, and mathematical symbols and found selectivity for musical notation for experts in a widespread multimodal network of areas. The activity in several of these areas was correlated with a behavioral measure of perceptual fluency with musical notation, suggesting that activity in nonvisual areas can predict individual differences in visual expertise. The visual selectivity for musical notation is distinct from that for faces, single Roman letters, and letter strings. Implications of the current findings for the study of visual perceptual expertise, music reading, and musical expertise are discussed.

43
Stoppel CM, Boehler CN, Strumpf H, Heinze HJ, Hopf JM, Düzel E, Schoenfeld MA. Neural correlates of exemplar novelty processing under different spatial attention conditions. Hum Brain Mapp 2010; 30:3759-71. [PMID: 19434602 DOI: 10.1002/hbm.20804] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
The detection of novel events and their identification is a basic prerequisite in a rapidly changing environment. Recently, the processing of novelty has been shown to rely on the hippocampus and to be associated with activity in reward-related areas. The present study investigated the influence of spatial attention on neural processing of novel relative to frequently presented standard and target stimuli. Never-before-seen Mandelbrot fractals devoid of semantic content were employed as stimulus material. Consistent with current theories, novelty activated a widespread network of brain areas including the hippocampus. However, no activity could be observed in reward-related areas for the semantically meaningless novel stimuli employed here. In the perceptual part of the novelty-processing network, a region in the lingual gyrus was found to specifically process novel events when they occurred outside the focus of spatial attention. These findings indicate that the initial detection of unexpected novel events generally occurs in specialized perceptual areas within the ventral visual stream, whereas activation of reward-related areas appears to be restricted to events that possess semantic content indicative of the biological relevance of the stimulus.
Affiliation(s)
- Christian Michael Stoppel
- Department of Neurology and Centre for Advanced Imaging, Otto-von-Guericke-University, Magdeburg, Germany.

44

45
Nishimura M, Maurer D. The effect of categorisation on sensitivity to second-order relations in novel objects. Perception 2008; 37:584-601. [PMID: 18546665 DOI: 10.1068/p5740] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Adults appear to be more sensitive to configural information, including second-order relations (the spacing of features), in faces than in other objects. Superior processing of second-order relations in faces may arise from our experience of identifying faces at the individual level of categorisation (eg Bob versus John) but other objects at the basic level of categorisation (eg table versus chair; Gauthier and Tarr, 1997 Vision Research 37 1673-1682). We simulated this learning difference with novel stimuli (composed of blobs) by having two groups view the same stimuli but learn to identify the objects only at the basic level (based on the number of constituent blobs) or at both the basic level and individual level (based on the spacing, or second-order relations, of the blobs) of categorisation. Results from two experiments showed that, after training, observers in the individual-level training group were more sensitive to the second-order relations in novel exemplars of the learned category than observers in the basic-level training group. This is the first demonstration of specific improvement in sensitivity to second-order relations after training with non-face stimuli. The findings are consistent with the hypothesis that adults are more sensitive to second-order relations in faces than in other objects, at least in part, because they have more experience identifying faces at the individual level of categorisation.
Affiliation(s)
- Mayu Nishimura
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada

46
Shapiro LR, Lamberts K, Olson AC. Measuring the influence of similarity on category-specific effects. ACTA ACUST UNITED AC 2008. [DOI: 10.1080/09541440601155570] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]

47
Lupyan G, Rakison DH, McClelland JL. Language is not just for talking: redundant labels facilitate learning of novel categories. Psychol Sci 2008; 18:1077-83. [PMID: 18031415 DOI: 10.1111/j.1467-9280.2007.02028.x] [Citation(s) in RCA: 122] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
In addition to having communicative functions, verbal labels may play a role in shaping concepts. Two experiments assessed whether the presence of labels affected category formation. Subjects learned to categorize "aliens" as those to be approached or those to be avoided. After accuracy feedback on each response was provided, a nonsense label was either presented or not. Providing nonsense category labels facilitated category learning even though the labels were redundant and all subjects had equivalent experience with supervised categorization of the stimuli. A follow-up study investigated differences between learning verbal and nonverbal associations and showed that learning a nonverbal association did not facilitate categorization. The findings show that labels make category distinctions more concrete and bear directly on the language-and-thought debate.
Affiliation(s)
- Gary Lupyan
- Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA.

48
Confounds in pictorial sets: The role of complexity and familiarity in basic-level picture processing. Behav Res Methods 2008; 40:116-29. [DOI: 10.3758/brm.40.1.116] [Citation(s) in RCA: 73] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]

49

50
Abstract
We used a deadline procedure to investigate how time pressure may influence the processes involved in picture naming. The deadline exaggerated the errors found in naming without a deadline. There were also category differences in performance between living and nonliving things and, in particular, for animals versus fruit and vegetables. The majority of errors were visually and semantically related to the target (e.g., celery-asparagus), and there was a greater proportion of these errors made to living things. Importantly, there were also more visual-semantic errors to animals than to fruit and vegetables. In addition, there were a smaller number of pure semantic errors (e.g., nut-bolt), which were made predominantly to nonliving things. The different kinds of error were correlated with different variables. Overall, visual-semantic errors were associated with visual complexity and visual similarity, whereas pure semantic errors were associated with imageability and age of acquisition. However, for animals, visual-semantic errors were associated with visual complexity, whereas for fruit and vegetables they were associated with visual similarity. We discuss these findings in terms of theories of category-specific semantic impairment and models of picture naming.