1
Leticevscaia O, Brandman T, Peelen MV. Scene context and attention independently facilitate MEG decoding of object category. Vision Res 2024; 224:108484. PMID: 39260230. DOI: 10.1016/j.visres.2024.108484.
Abstract
Many of the objects we encounter in our everyday environments would be hard to recognize without any expectations about these objects. For example, a distant silhouette may be perceived as a car because we expect objects of that size, positioned on a road, to be cars. Reflecting the influence of such expectations on visual processing, neuroimaging studies have shown that when objects are poorly visible, expectations derived from scene context facilitate the representations of these objects in visual cortex from around 300 ms after scene onset. The current magnetoencephalography (MEG) study tested whether this facilitation occurs independently of attention and task relevance. Participants viewed degraded objects alone or within scene context while they attended either the scenes (attended condition) or the fixation cross (unattended condition), which also directed temporal attention away from the scenes. Results showed that at 300 ms after stimulus onset, multivariate classifiers trained to distinguish clearly visible animate vs. inanimate objects generalized to distinguish degraded objects in scenes better than degraded objects alone, despite the added clutter of the scene background. Attention also modulated object representations at this latency, with better category decoding in the attended than in the unattended condition. The modulatory effects of context and attention were independent of each other. Finally, data from the current study and a previous study were combined (N = 51) to provide a more detailed temporal characterization of contextual facilitation. These results extend previous work by showing that facilitatory scene-object interactions are independent of the specific task performed on the visual input.
Affiliation(s)
- Olga Leticevscaia
- University of Reading, Centre for Integrative Neuroscience and Neurodynamics, United Kingdom
- Talia Brandman
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

2
Garlichs A, Lustig M, Gamer M, Blank H. Expectations guide predictive eye movements and information sampling during face recognition. iScience 2024; 27:110920. PMID: 39351204. PMCID: PMC11439840. DOI: 10.1016/j.isci.2024.110920.
Abstract
Context information has a crucial impact on our ability to recognize faces. Theoretical frameworks of predictive processing suggest that predictions derived from context guide the sampling of sensory evidence at informative locations. However, it is unclear how expectations influence visual information sampling during face perception. To investigate the effects of expectations on eye movements during face anticipation and recognition, we conducted two eye-tracking experiments (n = 34 each) using cued face morphs containing expected and unexpected facial features, as well as clear expected and unexpected faces. Participants performed predictive saccades toward expected facial features and fixated expected features more often and longer than unexpected ones. In face morphs, expected features attracted early eye movements, followed by unexpected features, indicating that top-down as well as bottom-up information drives face sampling. Our results provide compelling evidence that expectations influence face processing by guiding predictive and early eye movements toward anticipated informative locations, supporting predictive processing accounts.
Affiliation(s)
- Annika Garlichs
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Mark Lustig
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Psychology, University of Hamburg, Hamburg, Germany
- Matthias Gamer
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Predictive Cognition, Research Center One Health Ruhr of the University Alliance Ruhr, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany

3
Faurite C, Aprile E, Kauffmann L, Mermillod M, Gallice M, Chiquet C, Cottereau BR, Peyrin C. Interaction between central and peripheral vision: Influence of distance and spatial frequencies. J Vis 2024; 24:3. PMID: 38190145. PMCID: PMC10777871. DOI: 10.1167/jov.24.1.3.
Abstract
Visual scene perception is based on reciprocal interactions between central and peripheral information. Such interactions are commonly investigated through the semantic congruence effect, which usually reveals that the congruence effect of central vision on peripheral vision is as strong as the reverse. The aim of the present study was to further investigate the mechanisms underlying central-peripheral visual interactions using a central-peripheral congruence paradigm across three behavioral experiments. We simultaneously presented a central and a peripheral stimulus that could be either semantically congruent or incongruent. To assess the congruence effect of central vision on peripheral vision, participants had to categorize the peripheral target stimulus while ignoring the central distractor stimulus. To assess the congruence effect of peripheral vision on central vision, they had to categorize the central target stimulus while ignoring the peripheral distractor stimulus. Experiment 1 revealed that the physical distance between central and peripheral stimuli influences central-peripheral visual interactions: the congruence effect of central vision was stronger when the distance between the target and the distractor was shortest. Experiments 2 and 3 revealed that the spatial frequency content of distractors also influences central-peripheral interactions: the congruence effect of central vision was observed only when the distractor contained high spatial frequencies, whereas the congruence effect of peripheral vision was observed only when the distractor contained low spatial frequencies. These results raise the question of how these influences are exerted (bottom-up vs. top-down) and are discussed in light of the retinocortical properties of the visual system and the predictive brain hypothesis.
Affiliation(s)
- Cynthia Faurite
- Université Grenoble Alpes, Univ. Savoie Mont Blanc, Grenoble, France
- Eva Aprile
- Université Grenoble Alpes, Univ. Savoie Mont Blanc, Grenoble, France
- Louise Kauffmann
- Université Grenoble Alpes, Univ. Savoie Mont Blanc, Grenoble, France
- Martial Mermillod
- Université Grenoble Alpes, Univ. Savoie Mont Blanc, Grenoble, France
- Mathilde Gallice
- Department of Ophthalmology, Grenoble Alpes University Hospital, Grenoble, France
- Christophe Chiquet
- Department of Ophthalmology, Grenoble Alpes University Hospital, Grenoble, France
- Benoit R Cottereau
- Centre de Recherche Cerveau et Cognition, Université Toulouse III-Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Carole Peyrin
- Université Grenoble Alpes, Univ. Savoie Mont Blanc, Grenoble, France

4
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004. PMCID: PMC7616164. DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

5
Li C, Ficco L, Trapp S, Rostalski SM, Korn L, Kovács G. The effect of context congruency on fMRI repetition suppression for objects. Neuropsychologia 2023; 188:108603. PMID: 37270029. DOI: 10.1016/j.neuropsychologia.2023.108603.
Abstract
The recognition of objects is strongly facilitated when they are presented in the context of other objects (Biederman, 1972). Such contexts facilitate perception and induce expectations of context-congruent objects (Trapp and Bar, 2015). The neural mechanisms underlying these facilitatory effects of context on object processing, however, are not yet fully understood. In the present study, we investigated how context-induced expectations affect subsequent object processing. We used functional magnetic resonance imaging and measured repetition suppression (RS) as a proxy for prediction error processing. Participants viewed pairs of alternating or repeated object images that were preceded by context-congruent, context-incongruent or neutral cues. We found stronger repetition suppression for congruent as compared to incongruent or neutral cues in the object-sensitive lateral occipital cortex. Interestingly, this stronger effect was driven by enhanced responses to alternating stimulus pairs in congruent contexts, rather than by suppressed responses to repeated stimulus pairs, which emphasizes the contribution of surprise-related response enhancement to the context modulation of RS when expectations are violated. In addition, in the congruent condition, we found significant functional connectivity between object-responsive and frontal cortical regions, as well as between object-responsive regions and the fusiform gyrus. Our findings indicate that prediction errors, reflected in enhanced brain responses to violated contextual expectations, underlie the facilitating effect of context during object perception.
Affiliation(s)
- Chenglin Li
- School of Psychology, Zhejiang Normal University, China
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Linda Ficco
- Department of General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Department of Linguistics and Cultural Evolution, International Max Planck Research School for the Science of Human History, Jena, Germany
- Sabrina Trapp
- Macromedia University of Applied Sciences, Munich, Germany
- Sophie-Marie Rostalski
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Lukas Korn
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Gyula Kovács
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany

6
Klever L, Islam J, Võ MLH, Billino J. Aging attenuates the memory advantage for unexpected objects in real-world scenes. Heliyon 2023; 9:e20241. PMID: 37809883. PMCID: PMC10560015. DOI: 10.1016/j.heliyon.2023.e20241.
Abstract
Across the adult lifespan, memory processes are subject to pronounced changes. Prior knowledge and expectations might critically shape functional differences; however, corresponding findings have remained ambiguous so far. Here, we chose a tailored approach to scrutinize how schema (in-)congruencies affect older and younger adults' memory for objects embedded in real-world scenes, a scenario close to everyday memory demands. A sample of 23 older (52-81 years) and 23 younger adults (18-38 years) freely viewed 60 photographs of scenes that included target objects which were either congruent or incongruent with the given context information. After a delay, recognition performance for those objects was determined. In addition, recognized objects had to be matched to the scene context in which they were previously presented. While we found schema violations to be beneficial for object recognition across age groups, the advantage was significantly less pronounced in older adults. We moreover observed an age-related congruency bias for matching objects to their original scene context. Our findings support a critical role of predictive processes in age-related memory differences and indicate enhanced weighting of predictions with age. We suggest that recent predictive processing theories provide a particularly useful framework to elaborate on age-related functional vulnerabilities as well as stability.
Affiliation(s)
- Lena Klever
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain, And Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Jasmin Islam
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Melissa Le-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jutta Billino
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain, And Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany

7
Brandman T, Peelen MV. Objects sharpen visual scene representations: evidence from MEG decoding. Cereb Cortex 2023; 33:9524-9531. PMID: 37365829. PMCID: PMC10431745. DOI: 10.1093/cercor/bhad222.
Abstract
Real-world scenes consist of objects, defined by local information, and scene background, defined by global information. Although objects and scenes are processed in separate pathways in visual cortex, their processing interacts. Specifically, previous studies have shown that scene context makes blurry objects look sharper, an effect that can be observed as a sharpening of object representations in visual cortex from around 300 ms after stimulus onset. Here, we use MEG to show that objects can also sharpen scene representations, with the same temporal profile. Photographs of indoor (closed) and outdoor (open) scenes were blurred such that they were difficult to categorize on their own but easily disambiguated by the inclusion of an object. Classifiers were trained to distinguish MEG response patterns to intact indoor and outdoor scenes, presented in an independent run, and tested on degraded scenes in the main experiment. Results revealed better decoding of scenes with objects than scenes alone and objects alone from 300 ms after stimulus onset. This effect was strongest over left posterior sensors. These findings show that the influence of objects on scene representations occurs at similar latencies as the influence of scenes on object representations, in line with a common predictive processing mechanism.
Affiliation(s)
- Talia Brandman
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands

8
Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2023; 7:165. PMID: 37274451. PMCID: PMC10238820. DOI: 10.12688/wellcomeopenres.17856.2.
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
Affiliation(s)
- Alexandra Krugliak
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK

9
Rossel P, Peyrin C, Kauffmann L. Subjective perception of objects depends on the interaction between the validity of context-based expectations and signal reliability. Vision Res 2023; 206:108191. PMID: 36773476. DOI: 10.1016/j.visres.2023.108191.
Abstract
Predictive coding theories of visual perception assume that expectations based on prior knowledge modulate the processing of information. However, the underlying mechanisms remain debated. Some accounts propose that expectations enhance the perception of expected relative to unexpected stimuli, while others assume the opposite. Recently, the opposing process theory suggested that enhanced perception of expected vs. unexpected stimuli may occur alternatively depending upon the reliability of the visual signal. When the signal is noisy, perception would be biased toward what is expected, since anything else may be too noisy to be resolved. When the signal is unambiguous, perception would be biased toward what diverges from expectations and is more informative. Our study tested this hypothesis, using a perceptual matching task to investigate the influence of expectations on the perceived sharpness of objects in context. Participants saw two blurred images depicting the same object and had to adjust the blur level of one object to match that of the other. We manipulated the validity of expectations about objects by varying their scene context (congruent or incongruent context leading to valid or invalid expectations about the object). We also manipulated the reliability of the visual signal by varying the initial blur level of object pairs. Results showed that expectation validity differentially affected the perception of objects depending on signal reliability. Perception of validly expected objects was enhanced (sharpened) relative to unexpected objects when visual inputs were unreliable, while this effect reversed to the benefit of unexpected objects when the signal was more reliable.
Affiliation(s)
- Pauline Rossel
- Univ. Grenoble Alpes, CNRS, LPNC, 38000 Grenoble, France
- Carole Peyrin
- Univ. Grenoble Alpes, CNRS, LPNC, 38000 Grenoble, France

10
Thorat S, Quek GL, Peelen MV. Statistical learning of distractor co-occurrences facilitates visual search. J Vis 2022; 22:2. PMID: 36053133. PMCID: PMC9440606. DOI: 10.1167/jov.22.10.2.
Abstract
Visual search is facilitated by knowledge of the relationship between the target and the distractors, including both where the target is likely to be among the distractors and how it differs from the distractors. Whether the statistical structure among distractors themselves, unrelated to target properties, facilitates search is less well understood. Here, we assessed the benefit of distractor structure using novel shapes whose relationship to each other was learned implicitly during visual search. Participants searched for target items in arrays of shapes that comprised either four pairs of co-occurring distractor shapes (structured scenes) or eight distractor shapes randomly partitioned into four pairs on each trial (unstructured scenes). Across five online experiments (N = 1,140), we found that after a period of search training, participants were more efficient when searching for targets in structured than unstructured scenes. This structure benefit emerged independently of whether the position of the shapes within each pair was fixed or variable and despite participants having no explicit knowledge of the structured pairs they had seen. These results show that implicitly learned co-occurrence statistics between distractor shapes increases search efficiency. Increased efficiency in the rejection of regularly co-occurring distractors may contribute to the efficiency of visual search in natural scenes, where such regularities are abundant.
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

11
Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2022. DOI: 10.12688/wellcomeopenres.17856.1.
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.