1
Motlagh SC, Joanisse M, Wang B, Mohsenzadeh Y. Unveiling the neural dynamics of conscious perception in rapid object recognition. Neuroimage 2024; 296:120668. PMID: 38848982. DOI: 10.1016/j.neuroimage.2024.120668.
Abstract
Our brain excels at recognizing objects, even when they flash by in a rapid sequence. However, the neural processes that determine whether a target image in a rapid sequence can be recognized or not remain elusive. We used electroencephalography (EEG) to investigate the temporal dynamics of brain processes that shape perceptual outcomes under these challenging viewing conditions. Using naturalistic images and advanced multivariate pattern analysis (MVPA) techniques, we probed the brain dynamics governing conscious object recognition. Our results show that, although initially similar, the processes engaged when an object can or cannot be recognized diverge around 180 ms post-appearance, coinciding with feedback neural processes. Decoding analyses indicate that gist perception (partial conscious perception) can occur at ∼120 ms through feedforward mechanisms. In contrast, object identification (full conscious perception of the image) is resolved at ∼190 ms after target onset, suggesting involvement of recurrent processing. These findings underscore the importance of recurrent neural connections in object recognition and awareness during rapid visual presentations.
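The time-resolved decoding analysis described in this abstract can be sketched in a few lines. The sketch below is a minimal illustration on simulated data with a nearest-centroid classifier; the trial counts, channel counts, time window, and classifier are assumptions for illustration, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG epochs: (trials, channels, timepoints), two conditions
# (e.g., recognized vs. unrecognized targets).
n_trials, n_chan, n_time = 40, 16, 50
X = rng.normal(size=(2 * n_trials, n_chan, n_time))
y = np.repeat([0, 1], n_trials)
# Inject a condition difference in a late window (a stand-in for a
# recurrent-stage effect emerging ~180 ms after target onset).
X[y == 1, :, 30:] += 0.8

def decode_timecourse(X, y, n_folds=5):
    """Leave-one-fold-out nearest-centroid decoding at each timepoint."""
    acc = np.zeros(X.shape[2])
    folds = np.arange(len(y)) % n_folds
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            c0 = X[train & (y == 0), :, t].mean(axis=0)
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / len(y)
    return acc

# Accuracy stays near chance before the injected effect and rises after it.
acc = decode_timecourse(X, y)
```

In practice, MVPA studies of this kind use stronger classifiers (e.g., linear SVMs) and statistical tests across subjects, but the per-timepoint cross-validated decoding loop above is the core of the method.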
Affiliation(s)
- Saba Charmi Motlagh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Marc Joanisse
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada
- Boyu Wang
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada
2
Herzog MH. The Irreducibility of Vision: Gestalt, Crowding and the Fundamentals of Vision. Vision (Basel) 2022; 6:35. PMID: 35737422. PMCID: PMC9228288. DOI: 10.3390/vision6020035.
Abstract
What is fundamental in vision has been discussed for millennia. For philosophical realists and the physiological approach to vision, the objects of the outer world are truly given, and failures to perceive objects properly, such as in illusions, are just sporadic misperceptions. The goal is to replace the subjectivity of the mind with careful physiological analyses. Continental philosophy and the Gestaltists are rather skeptical or agnostic about external objects. The percepts themselves are their starting point, because it is hard to deny the truth of one's own percepts. I will show that, whereas both approaches can well explain many visual phenomena with classic visual stimuli, they both run into trouble when stimuli become slightly more complex. I suggest that these failures have a deeper conceptual reason, namely that their foundations (objects, percepts) do not hold true. I propose that only physical states exist in a mind-independent manner and that everyday objects, such as bottles and trees, are perceived in a mind-dependent way. The fundamental units for processing objects are extended windows of unconscious processing, followed by short, discrete conscious percepts.
Affiliation(s)
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
3
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Surprisingly, perception can be rescued from crowding if additional flankers are added (uncrowding). Uncrowding is a major challenge for all classic models of crowding and vision in general, because the global configuration of the entire stimulus is crucial. However, it is unclear which characteristics of the configuration impact (un)crowding. Here, we systematically dissected flanker configurations and showed that (un)crowding cannot be easily explained by the effects of the sub-parts or low-level features of the stimulus configuration. Our modeling results suggest that (un)crowding requires global processing. These results are well in line with previous studies showing the importance of global aspects in crowding.
Affiliation(s)
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
4
Unraveling brain interactions in vision: The example of crowding. Neuroimage 2021; 240:118390. PMID: 34271157. DOI: 10.1016/j.neuroimage.2021.118390.
Abstract
Crowding, the impairment of target discrimination in clutter, is the standard situation in vision. Traditionally, crowding is explained with (feedforward) models, in which only neighboring elements interact, leading to a "bottleneck" at the earliest stages of vision. It is with this implicit prior that most functional magnetic resonance imaging (fMRI) studies approach the identification of the "neural locus" of crowding, searching for the earliest visual area in which the blood-oxygenation-level-dependent (BOLD) signal is suppressed under crowded conditions. Using this classic approach, we replicated previous findings of crowding-related BOLD suppression starting in V2 and increasing up the visual hierarchy. Surprisingly, under conditions of uncrowding, in which adding flankers improves performance, the BOLD signal was further suppressed. This suggests an important role for top-down connections, which is in line with global models of crowding. To discriminate between various possible models, we used dynamic causal modeling (DCM). We show that recurrent interactions between all visual areas, including higher-level areas like V4 and the lateral occipital complex (LOC), are crucial in crowding and uncrowding. Our results explain the discrepancies in previous findings: in a recurrent visual hierarchy, the crowding effect can theoretically be detected at any stage. Beyond crowding, we demonstrate the need for models like DCM to understand the complex recurrent processing which most likely underlies human perception in general.
5
Bornet A, Kaiser J, Kroner A, Falotico E, Ambrosano A, Cantero K, Herzog MH, Francis G. Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision - The Case of Visual Crowding. Front Neurorobot 2019; 13:33. PMID: 31191291. PMCID: PMC6549494. DOI: 10.3389/fnbot.2019.00033.
Abstract
Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element strongly deteriorates when neighboring elements are presented in addition (visual crowding). As was shown recently, the magnitude of deterioration depends not only on the directly neighboring elements but on almost all elements and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for inward-outward anisotropy in visual crowding.
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Alexander Kroner
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States
6
Rajaei K, Mohsenzadeh Y, Ebrahimpour R, Khaligh-Razavi SM. Beyond core object recognition: Recurrent processes account for object recognition under occlusion. PLoS Comput Biol 2019; 15:e1007001. PMID: 31091234. PMCID: PMC6538196. DOI: 10.1371/journal.pcbi.1007001.
Abstract
Core object recognition, the ability to rapidly recognize objects despite variations in their appearance, is largely solved through the feedforward processing of visual information. Deep neural networks have been shown to achieve human-level performance in these tasks and to explain primate brain representations. On the other hand, object recognition under more challenging conditions (i.e., beyond the core recognition problem) is less well characterized. One such example is object recognition under occlusion. It is unclear to what extent feedforward and recurrent processes contribute to object recognition under occlusion. Furthermore, we do not know whether conventional deep neural networks, such as AlexNet, which were shown to be successful in solving core object recognition, can perform similarly well on problems that go beyond core recognition. Here, we characterize the neural dynamics of object recognition under occlusion, using magnetoencephalography (MEG), while participants were presented with images of objects at various levels of occlusion. We provide evidence from multivariate analysis of MEG data, behavioral data, and computational modelling demonstrating an essential role for recurrent processes in object recognition under occlusion. Furthermore, the computational model with local recurrent connections used here suggests a mechanistic explanation of how the human brain might solve this problem. In recent years, deep-learning-based computer vision algorithms have achieved human-level performance in several object recognition tasks. This has also contributed to our understanding of how our brain may be solving these recognition tasks. However, object recognition under more challenging conditions, such as occlusion, is less well characterized, and the temporal dynamics of object recognition under occlusion are largely unknown in the human brain. Furthermore, we do not know whether previously successful deep-learning algorithms can achieve human-level performance on these more challenging object recognition tasks. By linking brain data with behavior and computational modeling, we characterized the temporal dynamics of object recognition under occlusion and proposed a computational mechanism that explains both the behavioral and neural data in humans. This provides a plausible mechanistic explanation for how our brain might solve object recognition under more challenging conditions.
Affiliation(s)
- Karim Rajaei
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Tehran, Iran
- Yalda Mohsenzadeh
- Computer Science and AI Lab (CSAIL), MIT, Cambridge, Massachusetts, United States of America
- Reza Ebrahimpour
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Tehran, Iran
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi
- Computer Science and AI Lab (CSAIL), MIT, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
7
Doerig A, Bornet A, Rosenholtz R, Francis G, Clarke AM, Herzog MH. Beyond Bouma's window: How to explain global aspects of crowding? PLoS Comput Biol 2019; 15:e1006580. PMID: 31075131. PMCID: PMC6530878. DOI: 10.1371/journal.pcbi.1006580.
Abstract
In crowding, perception of an object deteriorates in the presence of nearby elements. Although crowding is a ubiquitous phenomenon, since elements are rarely seen in isolation, to date there exists no consensus on how to model it. Previous experiments showed that the global configuration of the entire stimulus must be taken into account. These findings rule out simple pooling or substitution models and favor models sensitive to global spatial aspects. In order to investigate how to incorporate global aspects into models, we tested a large number of models with a database of forty stimuli tailored for the global aspects of crowding. Our results show that incorporating grouping-like components strongly improves model performance.
Affiliation(s)
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Ruth Rosenholtz
- Department of Brain and Cognitive Sciences, Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, United States of America
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States of America
- Aaron M. Clarke
- Laboratory of Computational Vision, Psychology Department, Bilkent University, Ankara, Turkey
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
8
Wallis TS, Funke CM, Ecker AS, Gatys LA, Wichmann FA, Bethge M. Image content is more important than Bouma's Law for scene metamers. eLife 2019; 8:e42512. PMID: 31038458. PMCID: PMC6491040. DOI: 10.7554/eLife.42512.
Abstract
We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.

As you read this digest, your eyes move to follow the lines of text. But now try to hold your eyes in one position, while reading the text on either side and below: it soon becomes clear that peripheral vision is not as good as we tend to assume. It is not possible to read text far away from the center of your line of vision, but you can see ‘something’ out of the corner of your eye. You can see that there is text there, even if you cannot read it, and you can see where your screen or page ends. So how does the brain generate peripheral vision, and why does it differ from what you see when you look straight ahead? One idea is that the visual system averages information over areas of the peripheral visual field. This gives rise to texture-like patterns, as opposed to images made up of fine details. Imagine looking at an expanse of foliage, gravel or fur, for example. Your eyes cannot make out the individual leaves, pebbles or hairs. Instead, you perceive an overall pattern in the form of a texture. Our peripheral vision may also consist of such textures, created when the brain averages information over areas of space. Wallis, Funke et al. have now tested this idea using an existing computer model that averages visual input in this way. By giving the model a series of photographs to process, Wallis, Funke et al. obtained images that should in theory simulate peripheral vision. If the model mimics the mechanisms that generate peripheral vision, then healthy volunteers should be unable to distinguish the processed images from the original photographs. But in fact, the participants could easily discriminate the two sets of images. This suggests that the visual system does not solely use textures to represent information in the peripheral visual field. Wallis, Funke et al. propose that other factors, such as how the visual system separates and groups objects, may instead determine what we see in our peripheral vision. This knowledge could ultimately benefit patients with eye diseases such as macular degeneration, a condition that causes loss of vision in the center of the visual field and forces patients to rely on their peripheral vision.
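The eccentricity-scaled averaging idea described in this digest can be sketched as a toy model. The 1D "image", the window sizes, and the scaling factor below are illustrative assumptions, not the study's stimuli or its synthesis model; the point is only that a pooling radius growing with eccentricity preserves detail at fixation and washes it out in the periphery:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1D "image" with fine-grained detail everywhere.
image = rng.normal(size=201)
fixation = 100  # index of the fovea

def pool(image, fixation, scaling=0.5):
    """Average each position with its neighbors in a window whose
    radius grows linearly with eccentricity (radius = scaling * ecc),
    analogous to receptive-field scaling in the periphery."""
    out = np.empty_like(image)
    for i in range(len(image)):
        r = int(scaling * abs(i - fixation))
        lo, hi = max(0, i - r), min(len(image), i + r + 1)
        out[i] = image[lo:hi].mean()
    return out

pooled = pool(image, fixation)
# Detail survives near fixation and is smoothed toward the edges.
var_center = pooled[fixation - 10 : fixation + 10].var()
var_periphery = pooled[:20].var()
```

The study's finding is that humans can still discriminate natural images from images pooled at the V2-like scaling, which is why the abstract argues for pooling regions at least as small as V1 receptive fields plus segmentation constraints.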
Affiliation(s)
- Thomas SA Wallis
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Christina M Funke
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Alexander S Ecker
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Leon A Gatys
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Felix A Wichmann
- Neural Information Processing Group, Faculty of Science, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Matthias Bethge
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
9
A few remarks on spatial interference in visual stimuli. Behav Res Methods 2017; 50:1716-1722. PMID: 29067673. DOI: 10.3758/s13428-017-0978-3.
Abstract
Many vision experiments, e.g., tests of masking and visual crowding, involve the effect of adding a second stimulus to an initial one. The effects of such additions are generally considered in terms of physiological mechanisms and the possibility of interference in the stimuli is generally not considered. In the present study, interference between two stimuli was assessed by comparing the sum of amplitudes in the combined stimulus to the sums of the amplitudes in the two stimuli determined separately. With this approach, evidence for interference was found. It was also found that adding a second stimulus may alter the phase angles. These observations mean that the same stimulus presented together with other stimuli may have less stimulus power than when presented by itself. Thus, it is necessary to take account of the possibility of interference when interpreting results from experiments in which the effect of one stimulus element upon another is explored.
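The amplitude comparison described in this abstract can be illustrated with a toy Fourier example; the signals and the spatial frequency below are invented for illustration, not the study's stimuli. Because frequency components add as complex numbers, the amplitude of a combined stimulus is generally not the sum of the separately measured amplitudes when the components differ in phase:

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Two "stimuli" containing the same spatial frequency but different phases.
target = np.cos(4 * x)              # phase 0
flanker = np.cos(4 * x + np.pi / 2)  # quarter-cycle phase shift

def amplitude(signal, k):
    """Amplitude of frequency component k (normalized real FFT)."""
    return 2 * np.abs(np.fft.rfft(signal))[k] / len(signal)

a_target = amplitude(target, 4)            # ≈ 1.0
a_flanker = amplitude(flanker, 4)          # ≈ 1.0
a_combined = amplitude(target + flanker, 4)

# Complex addition gives |A + B| = sqrt(2) here, not |A| + |B| = 2:
# the combined stimulus carries less power at this frequency than the
# separately measured amplitudes would suggest, i.e., interference.
```

This is the sense in which presenting a flanker alongside a target can reduce the effective stimulus power of the target, independently of any physiological interaction.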
10
Herzog MH, Thunell E, Ögmen H. Putting low-level vision into global context: Why vision cannot be reduced to basic circuits. Vision Res 2015; 126:9-18. PMID: 26456069. DOI: 10.1016/j.visres.2015.09.009.
Abstract
To cope with the complexity of vision, most models in neuroscience and computer vision are hierarchical and feedforward in nature. Low-level vision, such as edge and motion detection, is explained by basic low-level neural circuits, whose outputs serve as building blocks for more complex circuits computing higher-level features such as shape and entire objects. There is an isomorphism between states of the outer world, neural circuits, and perception, inspired by the positivistic philosophy of the mind. Here, we show that although such an approach is conceptually and mathematically appealing, it fails to explain many phenomena including crowding, visual masking, and non-retinotopic processing.
Affiliation(s)
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Evelina Thunell
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Haluk Ögmen
- Department of Electrical and Computer Engineering, Center for Neuro-Engineering and Cognitive Science, University of Houston, TX, USA
11
Kafaligonul H, Breitmeyer BG, Öğmen H. Feedforward and feedback processes in vision. Front Psychol 2015; 6:279. PMID: 25814974. PMCID: PMC4357201. DOI: 10.3389/fpsyg.2015.00279.
Affiliation(s)
- Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Bruno G Breitmeyer
- Department of Psychology, University of Houston, Houston, TX, USA; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA
- Haluk Öğmen
- Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA; Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA
12
Herzog MH, Sayim B, Chicherov V, Manassi M. Crowding, grouping, and object recognition: A matter of appearance. J Vis 2015; 15:5. PMID: 26024452. PMCID: PMC4429926. DOI: 10.1167/15.6.5.
Abstract
In crowding, the perception of a target strongly deteriorates when neighboring elements are presented. Crowding is usually assumed to have the following characteristics. (a) Crowding is determined only by nearby elements within a restricted region around the target (Bouma's law). (b) Increasing the number of flankers can only degrade performance. (c) Target-flanker interference is feature-specific. These characteristics are usually explained by pooling models, which are well in the spirit of classic models of object recognition. In this review, we summarize recent findings showing that crowding is not determined by the above characteristics, thus challenging most models of crowding. We propose that the spatial configuration across the entire visual field determines crowding. Only when one understands how all elements of a visual scene group with each other can one determine crowding strength. We put forward the hypothesis that appearance (i.e., how stimuli look) is a good predictor for crowding, because both crowding and appearance reflect the output of recurrent processing rather than interactions during the initial phase of visual processing.