1
Cutler J, Bodet A, Rivest J, Cavanagh P. The word superiority effect overcomes crowding. Vision Res 2024; 222:108436. PMID: 38820621. DOI: 10.1016/j.visres.2024.108436. Received 2024-03-08; revised 2024-05-14; accepted 2024-05-14.
Abstract
Crowding and the word superiority effect are two perceptual phenomena that influence reading. Identification of a word's inner letters can be hindered by crowding from adjacent letters, but it can be facilitated by the word context itself (the word superiority effect). In the present study, four-letter strings (words and non-words) with different inter-letter spacings (ranging from a spacing optimal for producing crowding to one too large to produce it) were presented briefly in the periphery, and participants were asked to identify the third letter of the string. Each word had a partner word that was identical except for its third letter (e.g., COLD, CORD), so that guessing could be ruled out as the source of any improved performance for words. Unsurprisingly, letter identification accuracy was better for words than for non-words. For non-words, accuracy was lowest at the closest spacings, confirming crowding. For words, however, accuracy remained high at all inter-letter spacings, showing that crowding did not prevent identification of the inner letters. This result supports models of "holistic" word recognition in which partial cues can lead to recognition without individual letters first being identified. Once the word is recognized, its inner letters can be recovered despite the feature loss produced by crowding.
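The partner-word control described above can be made concrete with a short sketch. Only the COLD/CORD pair appears in the abstract; the other pairs below are my illustrative examples, not the paper's stimulus set:

```python
# Each word/partner pair must differ only at the third letter (index 2):
# knowing the word frame "CO_D" alone cannot reveal whether the letter
# is L or R, so above-chance accuracy cannot come from guessing the word.
pairs = [("COLD", "CORD"),   # pair from the abstract
         ("BAND", "BARD"),   # hypothetical example pair
         ("FILE", "FINE")]   # hypothetical example pair

for w1, w2 in pairs:
    assert len(w1) == len(w2) == 4
    diff_positions = [i for i in range(4) if w1[i] != w2[i]]
    assert diff_positions == [2], (w1, w2)  # only the third letter differs

print("all pairs differ only at the third letter")
```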
Affiliation(s)
- June Cutler
- Department of Psychology, Glendon College, York University, Toronto, ON, M4N 3M6, Canada
- Alexandre Bodet
- Department of Psychology, Glendon College, York University, Toronto, ON, M4N 3M6, Canada
- Josée Rivest
- Department of Psychology, Glendon College, York University, Toronto, ON, M4N 3M6, Canada; Centre for Vision Research, York University, Toronto, ON, M3J 1P3, Canada
- Patrick Cavanagh
- Department of Psychology, Glendon College, York University, Toronto, ON, M4N 3M6, Canada; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Centre for Vision Research, York University, Toronto, ON, M3J 1P3, Canada
2
Bondarko VM, Chikhman VN, Danilova MV, Solnushkin SD. Foveal crowding for large and small Landolt Cs: Similarity and Attention. Vision Res 2024; 215:108346. PMID: 38171199. DOI: 10.1016/j.visres.2023.108346. Received 2023-05-30; revised 2023-12-05; accepted 2023-12-07.
Abstract
We compare the recognition of crowded foveal Landolt Cs of two sizes: brief (40 ms), large, low-contrast Cs, and small, high-contrast (1 s) tests at the resolution limit of the visual system. In different series, the test Landolt C was surrounded by two identical distractors located symmetrically along the horizontal, or by a single distractor. The distractors were Landolt Cs or rings. At the resolution limit, the critical spacing was similar in the two series and did not depend on the type of distractor. This result supports the hypothesis that crowding at the resolution limit occurs when both the test and the distractors fall into the same smallest receptive field responsible for target recognition. For the large stimuli, distractors of the same shape caused greater impairment than rings at almost all separations, and recognition errors were non-random. The critical spacing equalled 0.5 test diameters only in the presence of one distracting Landolt C. This result suggests that attention is involved: when one distractor is added, involuntary attention, which is directed to the centre of gravity of the stimulus, can lead to confusion of features present in both test and distractors, and thus to non-random errors.
Affiliation(s)
- V M Bondarko
- IP Pavlov Institute of Physiology, Laboratory of Visual Physiology, Nab. Makarova 6, St. Petersburg 199034, Russia
- V N Chikhman
- IP Pavlov Institute of Physiology, Laboratory of Information Technologies and Mathematical Modelling, Nab. Makarova 6, St. Petersburg 199034, Russia
- M V Danilova
- IP Pavlov Institute of Physiology, Laboratory of Visual Physiology, Nab. Makarova 6, St. Petersburg 199034, Russia
- S D Solnushkin
- IP Pavlov Institute of Physiology, Laboratory of Information Technologies and Mathematical Modelling, Nab. Makarova 6, St. Petersburg 199034, Russia
3
Abstract
SIGNIFICANCE: Performance on clinical tests of visual acuity can be influenced by the presence of nearby targets. This study compared the influence of neighboring flanking bars and letters on foveal and peripheral letter identification.
PURPOSE: Contour interaction and crowding refer to an impairment of visual resolution or discrimination produced by different types of flanking stimuli. This study compared the impairment of percent-correct letter identification produced in normal observers when a target letter is surrounded by an array of four flanking bars (contour interaction) or four flanking letters (crowding).
METHODS: Performance was measured at the fovea and at eccentricities of 1.25, 2.5, and 5° for photopic (200 cd/m²) and mesopic (0.5 cd/m²) stimuli across a range of target-to-flanker separations.
RESULTS: Consistent with previous reports, foveal contour interaction and crowding were more pronounced for photopic than mesopic targets. However, no statistically significant difference existed between foveal contour-interaction and crowding functions at either luminance level. In contrast, flanking bars produced much less impairment of letter identification than letter flankers at all three peripheral locations, indicating that crowding is more severe than contour interaction in peripheral vision. Unlike at the fovea, peripheral crowding and contour-interaction functions did not differ systematically for targets of photopic and mesopic luminance.
CONCLUSION: The similarity between foveal contour interaction and crowding, and the dissimilarity between the two in the periphery, suggest the involvement of different mechanisms at different retinal locations.
4
Atilgan N, Yu SM, He S. Visual crowding effect in the parvocellular and magnocellular visual pathways. J Vis 2020; 20(8):6. PMID: 32749447. PMCID: PMC7438633. DOI: 10.1167/jov.20.8.6.
Abstract
The crowding effect, defined as the detrimental effect of nearby items on visual object recognition, has been extensively investigated. Previous studies have primarily focused on finding the stage(s) in the visual hierarchy where crowding starts to limit target processing, while little attention has been paid to potential differences between the parvocellular (P) and magnocellular (M) pathways in crowding mechanisms. Here, we investigated the crowding effect in these parallel visual pathways. In Experiment 1, stimuli were designed to separately engage the P or M pathway by tuning stimulus and background features (e.g., temporal frequency and color) to activate the targeted pathway and saturate the other, respectively. Results showed that at the same eccentricity and with the same tasks, targets processed in the M pathway appeared to be more vulnerable to the crowding effect. In Experiment 2, crowding effects were studied using three different types of stimuli and visual tasks (form, color, and motion), presumably with different degrees of dependence on the P and M pathways. Results revealed that color, motion, and form discrimination were increasingly more affected by crowding, in that order. We conclude that processing in the M and P pathways is differentially impacted by crowding and, importantly, that crowding seems to affect the processing of spatial form more than other stimulus properties.
5
Rosenholtz R, Yu D, Keshvari S. Challenges to pooling models of crowding: Implications for visual mechanisms. J Vis 2019. DOI: 10.1167/jov.19.7.15.
Affiliation(s)
- Ruth Rosenholtz
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dian Yu
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Shaiyan Keshvari
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
6
Rosenholtz R, Yu D, Keshvari S. Challenges to pooling models of crowding: Implications for visual mechanisms. J Vis 2019; 19(7):15. PMID: 31348486. PMCID: PMC6660188. DOI: 10.1167/19.7.15. Received 2018-02-26; accepted 2019-03-10.
Abstract
A set of phenomena known as crowding reveal peripheral vision's vulnerability in the face of clutter. Crowding is important both because of its ubiquity, making it relevant for many real-world tasks and stimuli, and because of the window it provides onto mechanisms of visual processing. Here we focus on models of the underlying mechanisms. This review centers on a popular class of models known as pooling models, as well as the phenomenology that appears to challenge a pooling account. Using a candidate high-dimensional pooling model, we gain intuitions about whether a pooling model suffices and reexamine the logic behind the pooling challenges. We show that pooling mechanisms can yield substitution phenomena and therefore predict better performance judging the properties of a set versus a particular item. Pooling models can also exhibit some similarity effects without requiring mechanisms that pool at multiple levels of processing, and without constraining pooling to a particular perceptual group. Moreover, we argue that other similarity effects may in part be due to noncrowding influences like cuing. Unlike low-dimensional straw-man pooling models, high-dimensional pooling preserves rich information about the stimulus, which may be sufficient to support high-level processing. To gain insights into the implications for pooling mechanisms, one needs a candidate high-dimensional pooling model and cannot rely on intuitions from low-dimensional models. Furthermore, to uncover the mechanisms of crowding, experiments need to separate encoding from decision effects. While future work must quantitatively examine all of the challenges to a high-dimensional pooling account, insights from a candidate model allow us to conclude that a high-dimensional pooling mechanism remains viable as a model of the loss of information leading to crowding.
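One of the abstract's points — that pooling predicts better judgments of set properties than of a particular item — can be illustrated with a toy one-dimensional sketch (mine, not the authors' high-dimensional model; the orientations are arbitrary example values):

```python
import statistics

# Toy low-dimensional pooling: the encoder keeps only the mean orientation
# within a pooling region, discarding which item carried which value.
target = 10.0                     # target orientation (deg)
flankers = [40.0, -35.0]          # flanker orientations (deg)
region = [target] + flankers

pooled = statistics.mean(region)  # the only value that survives pooling

# The pooled code supports an exact judgment of the SET mean...
set_estimate = pooled                      # 5.0, the true set mean
# ...but the best available report of the TARGET is that same pooled
# value, biased toward the flankers (averaging/substitution), which is
# the signature error pattern in crowding.
target_error = abs(pooled - target)        # 5.0 deg of bias

print(set_estimate, target_error)
```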
Affiliation(s)
- Ruth Rosenholtz
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dian Yu
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Shaiyan Keshvari
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
7
Abstract
Biased-competition models assert that spatial attention facilitates visual perception by biasing competitive interactions in favor of relevant input. In line with this view, past work has shown that the benefits of covert spatial attention are greatest when targets must compete with interfering stimuli. Here we propose a boundary condition for the resolution of interference via exogenous attention: Attention resolves visual interference between targets and distractors, but only when they can be individuated into distinct representations. Thus, we propose that biased competition may be object-based. We replicated previous observations of larger attention effects when targets were flanked by irrelevant distractors (interference-present displays) than when targets were presented alone (interference-absent displays). Critically, we then showed that this amplification of cueing effects in the presence of interference is eliminated when strong crowding hampers individuation of the targets and distractors. Likewise, when targets were embedded within a noise mask that did not evoke the percept of an individuated distractor, the attention effects were equivalent across noise and lone-target displays. Thus, we conclude that exogenous spatial attention resolves interference in an object-based fashion that depends on the perception of individuated targets and distractors.
Affiliation(s)
- Miranda Scolari
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Edward Awh
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
8
Doerig A, Bornet A, Rosenholtz R, Francis G, Clarke AM, Herzog MH. Beyond Bouma's window: How to explain global aspects of crowding? PLoS Comput Biol 2019; 15:e1006580. PMID: 31075131. PMCID: PMC6530878. DOI: 10.1371/journal.pcbi.1006580. Received 2018-03-29; revised 2019-05-22; accepted 2018-10-04.
Abstract
In crowding, perception of an object deteriorates in the presence of nearby elements. Crowding is a ubiquitous phenomenon, since elements are rarely seen in isolation, yet to date there exists no consensus on how to model it. Previous experiments showed that the global configuration of the entire stimulus must be taken into account. These findings rule out simple pooling or substitution models and favor models sensitive to global spatial aspects. To investigate how to incorporate global aspects into models, we tested a large number of models against a database of forty stimuli tailored to the global aspects of crowding. Our results show that incorporating grouping-like components strongly improves model performance.
Affiliation(s)
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Ruth Rosenholtz
- Department of Brain and Cognitive Sciences, Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, United States of America
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States of America
- Aaron M. Clarke
- Laboratory of Computational Vision, Psychology Department, Bilkent University, Ankara, Turkey
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
9
Wallis TS, Funke CM, Ecker AS, Gatys LA, Wichmann FA, Bethge M. Image content is more important than Bouma's Law for scene metamers. eLife 2019; 8:e42512. PMID: 31038458. PMCID: PMC6491040. DOI: 10.7554/elife.42512. Received 2018-10-03; accepted 2019-03-09.
Abstract
We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated 'Bouma's Law' of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
eLife digest
As you read this digest, your eyes move to follow the lines of text. But now try to hold your eyes in one position while reading the text on either side and below: it soon becomes clear that peripheral vision is not as good as we tend to assume. It is not possible to read text far away from the center of your line of vision, but you can see 'something' out of the corner of your eye. You can see that there is text there, even if you cannot read it, and you can see where your screen or page ends. So how does the brain generate peripheral vision, and why does it differ from what you see when you look straight ahead? One idea is that the visual system averages information over areas of the peripheral visual field. This gives rise to texture-like patterns, as opposed to images made up of fine details. Imagine looking at an expanse of foliage, gravel or fur, for example. Your eyes cannot make out the individual leaves, pebbles or hairs. Instead, you perceive an overall pattern in the form of a texture. Our peripheral vision may also consist of such textures, created when the brain averages information over areas of space. Wallis, Funke et al. have now tested this idea using an existing computer model that averages visual input in this way. By giving the model a series of photographs to process, Wallis, Funke et al. obtained images that should in theory simulate peripheral vision. If the model mimics the mechanisms that generate peripheral vision, then healthy volunteers should be unable to distinguish the processed images from the original photographs. But in fact, the participants could easily discriminate the two sets of images. This suggests that the visual system does not solely use textures to represent information in the peripheral visual field. Wallis, Funke et al. propose that other factors, such as how the visual system separates and groups objects, may instead determine what we see in our peripheral vision. This knowledge could ultimately benefit patients with eye diseases such as macular degeneration, a condition that causes loss of vision in the center of the visual field and forces patients to rely on their peripheral vision.
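The 'Bouma's Law' scaling referred to above — crowding zones of roughly half the retinal eccentricity — can be sketched numerically. This is a minimal illustration: the 0.5 factor is the conventional approximation, and the example values are mine, not the paper's:

```python
def critical_spacing(eccentricity_deg, bouma_factor=0.5):
    """Approximate crowding zone per Bouma's rule: the critical
    target-flanker spacing grows linearly with eccentricity."""
    return bouma_factor * eccentricity_deg

def is_crowded(eccentricity_deg, spacing_deg, bouma_factor=0.5):
    """A flanker inside the critical spacing is expected to crowd."""
    return spacing_deg < critical_spacing(eccentricity_deg, bouma_factor)

# At 10 deg eccentricity the crowding zone is ~5 deg, so a flanker at
# 3 deg should crowd the target while one at 6 deg should not:
print(critical_spacing(10.0))   # 5.0
print(is_crowded(10.0, 3.0))    # True
print(is_crowded(10.0, 6.0))    # False
```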
Affiliation(s)
- Thomas Sa Wallis
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Christina M Funke
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Alexander S Ecker
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Leon A Gatys
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Felix A Wichmann
- Neural Information Processing Group, Faculty of Science, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Matthias Bethge
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States; Institute for Theoretical Physics, Eberhard Karls Universität Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
10
Coates DR, Bernard JB, Chung STL. Feature contingencies when reading letter strings. Vision Res 2019; 156:84-95. PMID: 30660632. DOI: 10.1016/j.visres.2019.01.005. Received 2018-07-04; revised 2019-01-05; accepted 2019-01-09.
Abstract
Many models posit the use of distinctive spatial features to recognize letters of the alphabet, a fundamental component of reading. It has also been hypothesized that when letters are in close proximity, visual crowding may cause features to mislocalize between nearby letters, causing identification errors. Here, we took a data-driven approach to investigate these aspects of textual processing. Using data collected from subjects identifying each letter in thousands of lower-case letter trigrams presented in the peripheral visual field, we found characteristic error patterns in the results suggestive of the use of particular spatial features. Distinctive features were seldom entirely missed, and we found evidence for errors due to doubling, masking, and migration of features. Dependencies both amongst neighboring letters and in the responses revealed the contingent nature of processing letter strings, challenging the most basic models of reading that ignore either crowding or featural decomposition.
Affiliation(s)
- Susana T L Chung
- School of Optometry, University of California, Berkeley, United States; Vision Science Graduate Group, University of California, Berkeley, United States
11
Agaoglu S, Breitmeyer B, Ogmen H. Effects of Exogenous and Endogenous Attention on Metacontrast Masking. Vision (Basel) 2018; 2(4):39. PMID: 31735902. PMCID: PMC6836134. DOI: 10.3390/vision2040039. Received 2018-06-13; revised 2018-09-21; accepted 2018-09-22.
Abstract
To efficiently use its finite resources, the visual system selects only a subset of the rich sensory information for further processing. Visual masking and spatial attention control the transfer of information from visual sensory memory to visual short-term memory. There is still debate about whether these two processes operate independently or interact, with empirical evidence supporting both positions. However, recent studies pointed out that earlier studies showing significant interactions between common-onset masking and attention suffered from ceiling and/or floor effects. Our review of previous studies reporting metacontrast-attention interactions revealed similar artifacts. We therefore investigated metacontrast-attention interactions using an experimental paradigm in which ceiling/floor effects were avoided. We also examined whether metacontrast masking is differently influenced by endogenous and exogenous attention. We analyzed the mean absolute magnitude of response errors and their statistical distribution. When targets are masked, our results support the hypothesis that manipulations of the level of metacontrast and of endogenous/exogenous attention have largely independent effects. Moreover, statistical modeling of the distribution of response errors suggests weak interactions modulating the probability of "guessing" behavior in some observers for both types of attention. Nevertheless, our data suggest that any joint effect of attention and metacontrast can be adequately explained by their independent and additive contributions.
Affiliation(s)
- Sevda Agaoglu
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204-4005, USA
- Center for Neuroengineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
- Bruno Breitmeyer
- Center for Neuroengineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
- Department of Psychology, University of Houston, Houston, TX 77204-5022, USA
- Haluk Ogmen
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204-4005, USA
- Center for Neuroengineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
- Department of Electrical and Computer Engineering, University of Denver, Denver, CO 80208, USA
- Correspondence: ; Tel.: +1-303-871-2621
12
Harrison WJ, Bex PJ. Visual crowding is a combination of an increase of positional uncertainty, source confusion, and featural averaging. Sci Rep 2017; 7:45551. PMID: 28378781. PMCID: PMC5381224. DOI: 10.1038/srep45551. Received 2016-11-28; accepted 2017-02-28.
Abstract
Although we perceive a richly detailed visual world, our ability to identify individual objects is severely limited in clutter, particularly in peripheral vision. Models of such “crowding” have generally been driven by the phenomenological misidentifications of crowded targets: using stimuli that do not easily combine to form a unique symbol (e.g. letters or objects), observers typically confuse the source of objects and report either the target or a distractor, but when continuous features are used (e.g. orientated gratings or line positions) observers report a feature somewhere between the target and distractor. To reconcile these accounts, we develop a hybrid method of adjustment that allows detailed analysis of these multiple error categories. Observers reported the orientation of a target, under several distractor conditions, by adjusting an identical foveal target. We apply new modelling to quantify whether perceptual reports show evidence of positional uncertainty, source confusion, and featural averaging on a trial-by-trial basis. Our results show that observers make a large proportion of source-confusion errors. However, our study also reveals the distribution of perceptual reports that underlie performance in this crowding task more generally: aggregate errors cannot be neatly labelled because they are heterogeneous and their structure depends on target-distractor distance.
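Two of the error sources the authors distinguish — source confusion and featural averaging — can be illustrated with a toy generative sketch (my illustration, not the paper's model; positional uncertainty is omitted for simplicity, and the mixture weights, orientations, and noise level are arbitrary):

```python
import random

random.seed(1)

# On each simulated trial the observer's report is centred on the target
# (correct), on the flanker (source confusion), or on their average
# (featural averaging), with Gaussian report noise on top.
TARGET, FLANKER = 0.0, 40.0          # orientations in degrees
WEIGHTS = {"target": 0.5, "confusion": 0.3, "averaging": 0.2}
CENTRES = {"target": TARGET, "confusion": FLANKER,
           "averaging": (TARGET + FLANKER) / 2}

def simulate_trial():
    kind = random.choices(list(WEIGHTS), weights=WEIGHTS.values())[0]
    return CENTRES[kind] + random.gauss(0, 3)

reports = [simulate_trial() for _ in range(2000)]
# The aggregate error distribution is trimodal (peaks near 0, 20, and
# 40 deg), which is why mean error alone cannot separate the accounts
# and a trial-by-trial classification of responses is needed.
```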
Affiliation(s)
- William J Harrison
- Department of Psychology, University of Cambridge, Cambridge, UK; Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, USA