1. Mirror blindness: Our failure to recognize the target in search for mirror-reversed shapes. Atten Percept Psychophys 2023; 85:418-437. PMID: 36653521. DOI: 10.3758/s13414-022-02641-w.
Abstract
It is well known that visual search for a mirror target (i.e., a horizontally flipped item) is more difficult than search for other-oriented items (e.g., vertically flipped items). Previous studies have typically attributed the costs of mirror search to early, attention-guiding processes but could not rule out contributions from later processes. In the present study we used eye tracking to distinguish early, attention-guiding processes from later target identification processes. The results of four experiments revealed a marked human weakness in identifying mirror targets: observers frequently failed to classify a mirror target as a target on first fixation and continued searching even after having looked directly at it. Awareness measures corroborated that the location of a mirror target could not be reported above chance after it had been fixated once. This mirror blindness effect explained a large proportion (45-87%) of the overall costs of mirror search, suggesting that part of the difficulty with mirror search is rooted in later object identification processes rather than attentional guidance. Mirror blindness was significantly reduced, but not eliminated, when both the target and the non-targets were held constant, showing that perfect top-down knowledge reduces mirror blindness without abolishing it. The finding that non-target certainty reduced mirror blindness suggests that object identification is not achieved solely by comparing a selected item to a target template. These results demonstrate that the templates that guide search toward targets are not identical to the templates used to conclusively identify those targets.
2. Phyo Wai AA, Ern Tchen J, Guan C. A Study of Visual Search based Calibration Protocol for EEG Attention Detection. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:5792-5795. PMID: 34892436. DOI: 10.1109/embc46164.2021.9631083.
Abstract
Attention, a multi-faceted cognitive process, is essential in our daily lives. Visual attention can be measured with an EEG brain-computer interface to detect different levels of attention in gaming, performance training, and clinical applications. In conventional attention calibration, a Flanker task is used to capture EEG data for the attentive class; for the inattentive class, subjects are instructed not to focus on any specific position on the screen. Attention levels are then classified with a binary classifier trained on these surrogate ground-truth classes. However, subjects may not be in the desired attention states when performing repetitive, boring activities over a long experiment. In this paper we propose attention calibration protocols that use a visual search task with a simultaneous audio directional-change paradigm as the 'attentive' condition and static white noise as the 'inattentive' condition. To compare the proposed calibrations against baselines, we collected data from sixteen healthy subjects. For a fair comparison of classification performance, we used six basic EEG band-power features with a standard binary classifier. With the new calibration protocol, we achieved 74.37 ± 6.56% mean subject accuracy, about 3.73 ± 2.49% higher than the baseline, although the difference was not statistically significant. According to post-experiment survey results, the new calibrations are more effective at inducing the desired perceived attention levels. Based on these promising results, we will improve the calibration protocols with reliable attention classifier modeling to enable better attention recognition.
3. Bates CJ, Jacobs RA. Optimal attentional allocation in the presence of capacity constraints in uncued and cued visual search. J Vis 2021; 21:3. PMID: 33944906. PMCID: PMC8107488. DOI: 10.1167/jov.21.5.3.
Abstract
The vision sciences literature contains a large diversity of experimental and theoretical approaches to the study of visual attention. We argue that this diversity arises, at least in part, from the field's inability to unify differing theoretical perspectives. In particular, the field has been hindered by a lack of a principled formal framework for simultaneously thinking about both optimal attentional processing and capacity-limited attentional processing, where capacity is limited in a general, task-independent manner. Here, we supply such a framework based on rate-distortion theory (RDT) and optimal lossy compression. Our approach defines Bayes-optimal performance when an upper limit on information processing rate is imposed. In this article, we compare Bayesian and RDT accounts in both uncued and cued visual search tasks. We start by highlighting a typical shortcoming of unlimited-capacity Bayesian models that is not shared by RDT models, namely, that they often overestimate task performance when information-processing demands are increased. Next, we reexamine data from two cued-search experiments that have previously been modeled as the result of unlimited-capacity Bayesian inference and demonstrate that they can just as easily be explained as the result of optimal lossy compression. To model cued visual search, we introduce the concept of a "conditional communication channel." This simple extension generalizes the lossy-compression framework such that it can, in principle, predict optimal attentional-shift behavior in any kind of perceptual task, even when inputs to the model are raw sensory data such as image pixels. To demonstrate this idea's viability, we compare our idealized model of cued search, which operates on a simplified abstraction of the stimulus, to a deep neural network version that performs approximately optimal lossy compression on the real (pixel-level) experimental stimuli.
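The core intuition of the rate-distortion account can be sketched in a few lines. This is an illustrative toy, not the authors' model: it uses the textbook Gaussian rate-distortion function D(R) = σ²·2^(−2R), the minimum mean squared error achievable when a source of variance σ² is encoded at R bits, and an assumed fixed rate budget split across search items. Adding items leaves fewer bits, and hence more distortion, per item — the capacity-limited behavior the abstract describes.

```python
def gaussian_distortion(rate_bits, sigma2=1.0):
    """Minimum MSE for a Gaussian source of variance sigma2 at rate_bits."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

total_rate = 8.0  # assumed fixed capacity, in bits (illustrative value)
for set_size in (1, 2, 4, 8):
    per_item = total_rate / set_size  # budget is split evenly across items
    print(f"set size {set_size}: {per_item:.1f} bits/item, "
          f"distortion {gaussian_distortion(per_item):.2e}")
```

An unlimited-capacity model corresponds to holding the per-item rate constant as set size grows, so per-item distortion never increases — the pattern the abstract notes such models overpredict.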
Affiliation(s)
- Robert A Jacobs, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
4. Baek J, Dosher BA, Lu ZL. Visual attention in spatial cueing and visual search. J Vis 2021; 21:1. PMID: 33646298. PMCID: PMC7938002. DOI: 10.1167/jov.21.3.1.
Abstract
To characterize the internal processes of an observer performing perceptual tasks, we developed an observer model that combines the perceptual template model (PTM), the attention mechanisms in the PTM framework (Lu & Dosher, 1998), and the uncertainty of signal detection theory (Green & Swets, 1966). The model was evaluated with a visual search experiment conducted across a range of external noise, signal contrast, and target-distractor similarity conditions. In each trial, eight Gabor patches were shown in each of two brief intervals, with one target at a different orientation from the distractors in one of the presentations. Subjects were precued to a subset of the stimuli (1, 2, 4, or 8 items) and asked to report (a) which interval contained the target and (b) where the target was. The individual roles of uncertainty and attention in visual search were investigated by comparing models with and without an attention component. The results showed that decision uncertainty alone was sufficient to account for the set-size effect, even in conditions with high target-distractor similarity. Our theoretical model and empirical results provide a coherent picture of how visual information is selected and processed during feature search.
Affiliation(s)
- Jongsoo Baek, Yonsei Institute of Convergence Technology, Yonsei University, Incheon, South Korea
- Barbara Anne Dosher, Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu, Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
5. van Marlen T, van Wermeskerken M, van Gog T. Effects of visual complexity and ambiguity of verbal instructions on target identification. Journal of Cognitive Psychology 2019. DOI: 10.1080/20445911.2018.1552700.
Affiliation(s)
- Tim van Marlen, Department of Education, Utrecht University, Utrecht, Netherlands
- Tamara van Gog, Department of Education, Utrecht University, Utrecht, Netherlands
6. van Wermeskerken M, Ravensbergen S, van Gog T. Effects of instructor presence in video modeling examples on attention and learning. Computers in Human Behavior 2018. DOI: 10.1016/j.chb.2017.11.038.
7.
Affiliation(s)
- Miguel P. Eckstein, Department of Psychological and Brain Sciences, University of California, Santa Barbara, California 93106-9660
8. Heuer S, Ivanova MV, Hallowell B. More Than the Verbal Stimulus Matters: Visual Attention in Language Assessment for People With Aphasia Using Multiple-Choice Image Displays. J Speech Lang Hear Res 2017; 60:1348-1361. PMID: 28520866. PMCID: PMC5755551. DOI: 10.1044/2017_jslhr-l-16-0087.
Abstract
PURPOSE: Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image in a display that best corresponds to the verbal stimulus. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of the physical image characteristics of multiple-choice image displays on visual attention allocation by PWA.
METHOD: Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images.
RESULTS: PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition.
CONCLUSION: When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics.
Affiliation(s)
- Sabine Heuer, Department of Communication Sciences and Disorders, University of Wisconsin–Milwaukee
- Maria V. Ivanova, National Research University Higher School of Economics, Moscow, Russia
- Brooke Hallowell, School of Rehabilitation and Communication Sciences, Ohio University, Athens
9. Sakai K, Morishita M, Matsumoto H. Set-Size Effects in Simple Visual Search for Contour Curvature. Perception 2016; 36:323-34. PMID: 17455749. DOI: 10.1068/p5511.
Abstract
In a visual-search paradigm, both perception and decision processes contribute to set-size effects. Using yes-no search tasks with set sizes from 2 to 8 for contour curvature, we examined whether set-size effects are predicted by the limited-capacity model or the decision-noise model. The limited-capacity model posits limitations in both perception and decision-making; the decision-noise model posits limitations only in decision-making. Across four experiments, the slopes of the logarithm of threshold plotted against the logarithm of set size ranged from 0.24 to 0.32, whether curvature was high or low, contour convexity was upward or downward, and the stimulus was masked or unmasked. These slopes were closer to the decision-noise model's prediction of 0.23 than to the limited-capacity model's prediction of 0.73. We interpret this to mean that, in simple visual search for contour curvature, decision noise mainly drives the set-size effects and perceptual capacity is not limited.
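The diagnostic in this abstract is a simple log-log slope: if thresholds grow as a power of set size, t(N) ∝ N^k, then the slope of log t against log N recovers the exponent k. The sketch below uses the 0.23 and 0.73 slope values from the abstract; the power-law generator and the particular set sizes are illustrative assumptions, not the authors' full models.

```python
import numpy as np

set_sizes = np.array([2, 3, 4, 6, 8])

def loglog_slope(thresholds):
    # Slope of log(threshold) versus log(set size), via a linear fit.
    return np.polyfit(np.log(set_sizes), np.log(thresholds), 1)[0]

for label, k in [("decision-noise prediction", 0.23),
                 ("limited-capacity prediction", 0.73)]:
    thresholds = 0.1 * set_sizes ** k  # thresholds grow as N^k
    print(f"{label}: recovered slope = {loglog_slope(thresholds):.2f}")
```

Because the generator is an exact power law, the fit recovers each exponent to floating-point precision; with real threshold data the fitted slope is compared against the two model predictions.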
Affiliation(s)
- Koji Sakai, Department of Human Relations, Faculty of Human Relations, Kyoto Koka Women's College, 38 Kadono-cho, Nishikyogoku, Ukyo-ku, Kyoto 615-0882, Japan
10. Scharff A, Palmer J, Moore CM. Divided attention limits perception of 3-D object shapes. J Vis 2013; 13:18. PMID: 23404158. PMCID: PMC5833208. DOI: 10.1167/13.2.18.
Abstract
Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (the extended simultaneous-sequential method; Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task, and unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited-capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.
Affiliation(s)
- Alec Scharff, Department of Psychology, University of Washington, Seattle, WA, USA
- John Palmer, Department of Psychology, University of Washington, Seattle, WA, USA
11.
Abstract
How is visual object perception limited by divided attention? Whereas some theories have proposed that it is not limited at all (unlimited capacity), others have proposed that divided attention introduces restrictive capacity limitations or serial processing (fixed capacity). We addressed this question using a task in which observers searched for instances of particular object categories, such as a moose or squirrel. We applied an extended simultaneous-sequential paradigm to test the fixed-capacity and unlimited-capacity models (Experiment 1). The results were consistent with fixed capacity and rejected unlimited capacity. We ascertained that these results were due to attention, and not to sensory interactions such as crowding, by repeating the experiment using a cuing paradigm with physically identical displays (Experiment 2). The results from both experiments were consistent with theories of object perception that have fixed capacity, and they rejected theories with unlimited capacity. Both serial and parallel models with fixed capacity remain viable alternatives.
12. Scharff A, Palmer J, Moore CM. Extending the simultaneous-sequential paradigm to measure perceptual capacity for features and words. J Exp Psychol Hum Percept Perform 2011; 37:813-33. PMID: 21443383. PMCID: PMC6999820. DOI: 10.1037/a0021440.
Abstract
In perception, divided attention refers to conditions in which multiple stimuli are relevant to an observer. To measure the effect of divided attention in terms of perceptual capacity, we introduce an extension of the simultaneous-sequential paradigm. The extension makes predictions for fixed-capacity models as well as for unlimited-capacity models. We apply this paradigm to two example tasks, contrast discrimination and word categorization, and find dramatically different effects of divided attention. Contrast discrimination has unlimited capacity, consistent with independent, parallel processing. Word categorization has a nearly fixed capacity, consistent with either serial processing or fixed-capacity, parallel processing. We argue that these measures of perceptual capacity rely on relatively few assumptions compared to most alternative measures.
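The paradigm's contrasting predictions can be sketched with a toy max-rule search model (all parameters here are illustrative assumptions, not the authors'): each item yields a noisy sample whose precision depends on the processing rate it receives. Sequential displays show half the items at a time, so under a fixed total capacity each item gets twice the rate it gets simultaneously, while under unlimited capacity every item gets the same rate in both conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def percent_correct(rate_per_item, n_items=4, d_prime=1.0, trials=20000):
    # Sample noise shrinks as the rate allotted to each item grows.
    noise_sd = 1.0 / np.sqrt(rate_per_item)
    target = d_prime + rng.normal(0.0, noise_sd, trials)
    lures = rng.normal(0.0, noise_sd, (trials, n_items - 1))
    return float(np.mean(target > lures.max(axis=1)))  # max-rule decision

total = 4.0  # assumed fixed capacity (arbitrary units)
print("fixed capacity:     sim:", round(percent_correct(total / 4), 3),
      "seq:", round(percent_correct(total / 2), 3))
print("unlimited capacity: sim:", round(percent_correct(1.0), 3),
      "seq:", round(percent_correct(1.0), 3))
```

The fixed-capacity rows show a sequential advantage (more rate per item, less noise), whereas the unlimited-capacity rows are identical by construction — the qualitative signature the simultaneous-sequential comparison tests for.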
Affiliation(s)
- Alec Scharff, Department of Psychology, University of Washington, Seattle, WA 98195, USA
13. Davis ET, Shikano T, Main K, Hailston K, Michel RK, Sathian K. Mirror-image symmetry and search asymmetry: a comparison of their effects on visual search and a possible unifying explanation. Vision Res 2005; 46:1263-81. PMID: 16376402. DOI: 10.1016/j.visres.2005.10.032.
Abstract
Visual search may be affected by mirror-image symmetry between target and non-targets and also by switching the roles of target and non-target. Do different attention mechanisms underlie these two phenomena, or can a unifying explanation account for both? We conducted two experiments to decompose processing into component parts and compared the results to competing models' predictions. Mirror-image search was unimpaired once target discrimination had been balanced across search conditions; the results were consistent with an unlimited-capacity, decision-noise model. Search asymmetry, however, affected higher-level processing, resulting in capacity limitations that necessitated serial processing. A unifying explanation can account for these two seemingly unrelated phenomena.
Affiliation(s)
- Elizabeth T Davis, School of Psychology, Georgia Institute of Technology, Atlanta, 30332-0170, USA
14. Nikolaev AR, van Leeuwen C. Collinearity, curvature interpolation, and the power of perceptual integration. Psychological Research 2005; 71:427-37. PMID: 16215743. DOI: 10.1007/s00426-005-0026-2.
Abstract
In three experiments, participants determined the orientation of a global triangle formed by three Gabor patches of a target spatial frequency in a field of distracters. The orientation of the target patches and their proximity were varied between conditions. When all the target patches had the same orientation, responses were facilitated compared with random orientations; this effect occurred only when the patches were in close proximity. When the orientations of the target patches differed but were aligned with the global triangle, facilitation occurred regardless of proximity. These contrasting types of facilitation are attributed to different early perceptual integration mechanisms that enable the perception of holistic structure.
Affiliation(s)
- Andrey R Nikolaev, Laboratory for Perceptual Dynamics, Brain Science Institute, RIKEN, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
15. van Leeuwen C, Lachmann T. Negative and positive congruence effects in letters and shapes. Percept Psychophys 2004; 66:908-25. PMID: 15675640. DOI: 10.3758/bf03194984.
Abstract
In six experiments in which a binary classification task was used, letter and nonletter (geometrical shapes, pseudoletters, or rotated letters) targets were presented either in isolation or surrounded by a geometrical shape. The surrounding shape could be congruent or incongruent with the target. When the classification required a distinction between letters and nonletters, either explicitly (Experiments 1-3) or implicitly (Experiment 4), a negative congruence effect was obtained for letters, contrasting with a regular, positive congruence effect for nonletters. When no distinction was to be made, letters and nonletters invariably showed a positive congruence effect (Experiments 5 and 6). In particular, between Experiments 1-4 and Experiments 5 and 6, the occurrence of negative or positive congruence effects for the same stimuli depended on the task. Feature interaction, target selection, and response competition explanations were tested against a feature integration approach. The results are explained in terms of different feature integration strategies for letters and nonletters.
Affiliation(s)
- Cees van Leeuwen, Laboratory for Perceptual Dynamics, Brain Science Institute, RIKEN, Wako-shi, Saitama, Japan