1
Hamblin-Frohman Z, Low JX, Becker SI. Attentional prioritisation and facilitation for similar stimuli in visual working memory. Psychol Res 2023; 87:2031-2038. PMID: 36633707; PMCID: PMC10457231; DOI: 10.1007/s00426-023-01790-3.
Abstract
Visual working memory (VWM) allows for the brief retention of approximately three to four items. Interestingly, when these items are similar to each other in a feature domain, memory recall performance is elevated compared to when they are dissimilar. This similarity benefit is currently not accounted for by models of VWM. Previous research has suggested that this similarity benefit may arise from selective attentional prioritisation in the maintenance phase. However, the similarity effect has not been contrasted under circumstances where dissimilar item types can adequately compete for memory resources. In Experiment 1, similarity benefits were seen for all-similar over all-dissimilar displays. This was also seen in mixed displays: change detection performance was higher when one of the two similar items changed than when the dissimilar item changed. Surprisingly, the similarity effect was stronger in these mixed displays than when comparing the all-similar and all-dissimilar displays. Experiment 2 investigated this further by examining how attention was allocated in the memory encoding phase via eye movements. Results revealed that attention prioritised similar over dissimilar items in the mixed displays. Similar items were more likely to receive the first fixation and were fixated more often than dissimilar items. Furthermore, dwell times were elongated for dissimilar items, suggesting that encoding was less efficient. These results suggest that there is an attentional strategy towards prioritising similar items over dissimilar items, and that this strategy's influence can be observed in the memory encoding phase.
Affiliation(s)
- Zachary Hamblin-Frohman
- School of Psychology, The University of Queensland, 1/18 Archibald Street, West End, QLD, 4101, Australia
- Jia Xuan Low
- School of Psychology, The University of Queensland, 1/18 Archibald Street, West End, QLD, 4101, Australia
- Stefanie I Becker
- School of Psychology, The University of Queensland, 1/18 Archibald Street, West End, QLD, 4101, Australia
2
Qiu Z, Lei X, Becker SI, Pegna AJ. Faces capture spatial attention only when we want them to: An inattentional blindness EEG study. Biol Psychol 2023; 183:108665. PMID: 37619811; DOI: 10.1016/j.biopsycho.2023.108665.
Abstract
Previous research on emotional face processing has shown that emotional faces such as fearful faces may be processed without visual awareness. However, evidence for nonconscious attention capture by fearful faces is limited. In fact, studies using sensory manipulation of awareness (e.g., backward masking paradigms) have shown that fearful faces do not attract attention during subliminal viewing, nor when they are task-irrelevant. Here, we used a three-phase inattentional blindness paradigm and electroencephalography to examine whether faces (fearful and neutral) capture attention under different conditions of awareness and task-relevancy. We found that the electrophysiological marker for attention capture, the N2-posterior-contralateral (N2pc), was elicited by face stimuli only when participants were aware of the faces and when they were task-relevant (phase 3). When participants were unaware of the presence of faces (phase 1) or when the faces were irrelevant to the task (phase 2), no N2pc was observed. Together with our previous work, we conclude that fearful faces, or faces in general, do not attract attention unless we want them to.
Affiliation(s)
- Zeguo Qiu
- School of Psychology, The University of Queensland, Brisbane 4072, Australia
- Xue Lei
- School of Psychology, The University of Queensland, Brisbane 4072, Australia
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane 4072, Australia
- Alan J Pegna
- School of Psychology, The University of Queensland, Brisbane 4072, Australia
3
Becker SI, Hamblin-Frohman Z, Xia H, Qiu Z. Tuning to non-veridical features in attention and perceptual decision-making: An EEG study. Neuropsychologia 2023; 188:108634. PMID: 37391127; DOI: 10.1016/j.neuropsychologia.2023.108634.
Abstract
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye-tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than to a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
Affiliation(s)
- Hongfeng Xia
- School of Psychology, The University of Queensland, Australia
- Zeguo Qiu
- School of Psychology, The University of Queensland, Australia
4
Qiu Z, Becker SI, Xia H, Hamblin-Frohman Z, Pegna AJ. Fixation-related electrical potentials during a free visual search task reveal the timing of visual awareness. iScience 2023; 26:107148. PMID: 37408689; PMCID: PMC10319232; DOI: 10.1016/j.isci.2023.107148.
Abstract
It has been repeatedly claimed that emotional faces readily capture attention, and that they may be processed without awareness. Yet some observations cast doubt on these assertions. Part of the problem may lie in the experimental paradigms employed. Here, we used a free viewing visual search task during electroencephalographic recordings, where participants searched for either fearful or neutral facial expressions among distractor expressions. Fixation-related potentials were computed for fearful and neutral targets, and the responses were compared for stimuli that were or were not consciously reported. We showed that awareness was associated with an electrophysiological negativity starting at around 110 ms, while emotional expressions were distinguished on the N170 and early posterior negativity only when stimuli were consciously reported. These results suggest that during unconstrained visual search, the earliest electrical correlate of awareness may emerge as early as 110 ms, and fixating on an emotional face without reporting it may not produce any unconscious processing.
Affiliation(s)
- Zeguo Qiu
- School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
- Stefanie I. Becker
- School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
- Hongfeng Xia
- School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
- Alan J. Pegna
- School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
5
Hamblin-Frohman Z, Becker SI. Attentional selection is a sufficient cause for visual working memory interference. J Vis 2023; 23:15. PMID: 37486298; PMCID: PMC10382781; DOI: 10.1167/jov.23.7.15.
Abstract
Visual attention and visual working memory (VWM) are intertwined processes that allow navigation of the visual world. These systems can compete for highly limited cognitive resources, creating interference effects when both operate in tandem. Performing an attentional task while maintaining a VWM load often leads to a loss of memory information. These losses are seen even with very simple visual search tasks. Previous research has argued that this may be due to the attentional selection process of choosing the target item out of surrounding nontarget items. Over two experiments, the current study disentangles the roles of search and selection in visual search and their influence on a retained VWM load. Experiment 1 revealed that, when search stimuli were relatively simple, target-absent searches (which did not require attentional selection) did not provoke memory interference, whereas target-present search did. In Experiment 2, the number of potential targets was varied in the search displays. In one condition, participants were required to select any one of the items displayed, requiring an attentional selection but no need to search for a specific item. Importantly, this condition led to memory interference to the same extent as a condition where a single target was presented among nontargets. Together, these results show that the process of attentional selection is a sufficient cause for interference with a concurrently maintained VWM load.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
6
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
7
Becker SI, Grubert A, Horstmann G, Ansorge U. Which processes dominate visual search: Bottom-up feature contrast, top-down tuning or trial history? Cognition 2023; 236:105420. PMID: 36905828; DOI: 10.1016/j.cognition.2023.105420.
Abstract
Previous research has identified three mechanisms that guide visual attention: bottom-up feature contrasts, top-down tuning, and the trial history (e.g., priming effects). However, only a few studies have simultaneously examined all three mechanisms. Hence, it is currently unclear how they interact or which mechanisms dominate over others. With respect to local feature contrasts, it has been claimed that a pop-out target can only be selected immediately in dense displays when the target has a high local feature contrast, but not when the displays are sparse, which leads to an inverse set-size effect. The present study critically evaluated this view by systematically varying local feature contrasts (i.e., set size), top-down knowledge, and the trial history in pop-out search. We used eye tracking to distinguish between early selection and later identification-related processes. The results revealed that early visual selection was mainly dominated by top-down knowledge and the trial history: When attention was biased to the target feature, either by valid pre-cueing (top-down) or automatic priming, the target could be localised immediately, regardless of display density. Bottom-up feature contrasts only modulated selection when the target was unknown and attention was biased to the non-targets. We also replicated the often-reported finding of reliable feature contrast effects in the mean RTs, but showed that these were due to later, target-identification processes (e.g., in the target dwell times). Thus, contrary to the prevalent view, bottom-up feature contrasts in dense displays do not seem to directly guide attention, but only facilitate nontarget rejection, probably by facilitating nontarget grouping.
8
Qiu Z, Jiang J, Becker SI, Pegna AJ. Attentional capture by fearful faces requires consciousness and is modulated by task-relevancy: A dot-probe EEG study. Front Neurosci 2023; 17:1152220. PMID: 37034154; PMCID: PMC10076762; DOI: 10.3389/fnins.2023.1152220.
Abstract
In the current EEG study, we used a dot-probe task in conjunction with backward masking to examine the neural activity underlying awareness and spatial processing of fearful faces and the neural processes for subsequent cued spatial targets. We presented face images under different viewing conditions (subliminal and supraliminal) and manipulated the relation between a fearful face in the pair and a subsequent target. Our mass univariate analysis showed that fearful faces elicit the N2-posterior-contralateral, indexing spatial attention capture, only when they are presented supraliminally. Consistent with this, the multivariate pattern analysis revealed a successful decoding of the location of the fearful face only in the supraliminal viewing condition. Additionally, the spatial attention capture by fearful faces modulated the processing of subsequent lateralised targets that were spatially congruent with the fearful face, in both behavioural and electrophysiological data. There was no evidence for nonconscious processing of the fearful faces in the current paradigm. We conclude that spatial attentional capture by fearful faces requires visual awareness and is modulated by top-down task demands.
9
Qiu Z, Lei X, Becker SI, Pegna AJ. Neural activities during the processing of unattended and unseen emotional faces: A voxel-wise meta-analysis. Brain Imaging Behav 2022; 16:2426-2443. PMID: 35739373; PMCID: PMC9581832; DOI: 10.1007/s11682-022-00697-8.
Abstract
Voxel-wise meta-analyses of task-evoked regional activity were conducted for healthy individuals during the unconscious processing of emotional and neutral faces with an aim to examine whether and how different experimental paradigms influenced brain activation patterns. Studies were categorized into sensory and attentional unawareness paradigms. Thirty-four fMRI studies including 883 healthy participants were identified. Across experimental paradigms, unaware emotional faces elicited stronger activation of the limbic system, striatum, inferior frontal gyrus, insula and the temporal lobe, compared to unaware neutral faces. Crucially, in attentional unawareness paradigms, unattended emotional faces elicited a right-lateralized increased activation (i.e., right amygdala, right temporal pole), suggesting a right hemisphere dominance for processing emotional faces during inattention. By contrast, in sensory unawareness paradigms, unseen emotional faces elicited increased activation of the left striatum, the left amygdala and the right middle temporal gyrus. Additionally, across paradigms, unconsciously processed positive emotions were found to be associated with more activation in temporal and parietal cortices, whereas unconsciously processed negative emotions elicited stronger activation in subcortical regions, compared to neutral faces.
Affiliation(s)
- Zeguo Qiu
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
- Xue Lei
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
- Alan J Pegna
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
10
Qiu Z, Becker SI, Pegna AJ. Spatial attention shifting to emotional faces is contingent on awareness and task relevancy. Cortex 2022; 151:30-48. DOI: 10.1016/j.cortex.2022.02.009.
11
Abstract
It is well known that attention can be automatically attracted to salient items. However, recent studies show that it is possible to avoid distraction by a salient item (with a known feature), leading to facilitated search. This article tests a proposed mechanism for distractor inhibition: that a mental representation of the distractor feature held in visual working memory (VWM) allows attention to be guided away from the distractor. We tested this explanation by examining color-based inhibition in visual search for a shape target with and without VWM load. In Experiment 1 the presence of a distractor facilitated visual search under low and high VWM loads, as reflected in faster response times when the distractor was present (compared to absent), and in fewer eye movements to the salient distractor than to the non-target items. However, the eye movement inhibition effect was noticeably weakened in the load conditions. Experiment 2 explored this further, aiming to distinguish between inhibition of the distractor color and activation of the (irrelevant) target color. Intermittently presenting single-color search trials that contained only a target, a distractor, or a neutral-colored singleton revealed that the distractor color attracted attention less than the neutral color, both with and without VWM load. The target color, however, only attracted attention more than neutral colors under no load, whereas a VWM load completely eliminated this effect. This suggests that although VWM plays a role in guiding attention to the (irrelevant) target color, distractor-feature inhibition can operate independently.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
12
Li G, Shen J, Dai C, Wu J, Becker SI. ShVEEGc: EEG Clustering With Improved Cosine Similarity-Transformed Shapley Value. IEEE Trans Emerg Top Comput Intell 2022. DOI: 10.1109/tetci.2022.3189385.
Affiliation(s)
- Guanghui Li
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Jiahua Shen
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Chenglong Dai
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Jia Wu
- Department of Computing, Macquarie University, Sydney, NSW, Australia
- Stefanie I. Becker
- School of Psychology, University of Queensland, St Lucia, QLD, Australia
13
Dai C, Wu J, Pi D, Becker SI, Cui L, Zhang Q, Johnson B. Brain EEG Time-Series Clustering Using Maximum-Weight Clique. IEEE Trans Cybern 2022; 52:357-371. PMID: 32149677; DOI: 10.1109/tcyb.2020.2974776.
Abstract
Brain electroencephalography (EEG), a complex, weak, multivariate, nonlinear, and nonstationary time series, has recently been widely applied in neurocognitive disorder diagnosis and brain-machine interface development. With its specific features, unlabeled EEG is not well addressed by conventional unsupervised time-series learning methods. In this article, we handle the problem of unlabeled EEG time-series clustering and propose a novel EEG clustering algorithm, which we call mwcEEGc. The idea is to map EEG clustering to maximum-weight clique (MWC) searching in an improved Fréchet similarity-weighted EEG graph. The mwcEEGc considers the weights of both vertices and edges in the constructed EEG graph and clusters EEG trials based on their similarity weights instead of calculating cluster centroids. To the best of our knowledge, it is the first attempt to cluster unlabeled EEG trials using MWC searching. The mwcEEGc achieves high-quality clusters with respect to intracluster compactness as well as intercluster scatter. We demonstrate the superiority of mwcEEGc over ten state-of-the-art unsupervised learning/clustering approaches by conducting detailed experiments with standard clustering validity criteria on 14 real-world brain EEG datasets. We also show that mwcEEGc satisfies the theoretical properties of clustering, such as richness, consistency, and order independence.
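The core idea of the abstract, clustering items by extracting heavy cliques from a similarity-weighted graph rather than by computing centroids, can be illustrated with a small sketch. This is a simplified greedy stand-in under assumed inputs (a precomputed pairwise-similarity matrix and an illustrative edge threshold), not the published mwcEEGc algorithm, which solves maximum-weight clique search over a Fréchet similarity-weighted graph:

```python
def greedy_clique_clusters(sim, threshold=0.5):
    """Cluster items by repeatedly extracting a heavy clique from a
    similarity graph (greedy, illustrative stand-in for MWC search).

    sim: symmetric matrix of pairwise similarities in [0, 1];
    threshold: minimum similarity for two items to count as connected.
    Returns a list of clusters, each a sorted list of item indices.
    """
    n = len(sim)
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Seed: the unassigned vertex with the largest total similarity
        # to the other unassigned vertices (heaviest vertex first).
        seed = max(sorted(unassigned),
                   key=lambda i: sum(sim[i][j] for j in unassigned if j != i))
        clique = {seed}
        # Greedily grow the clique: consider heavier candidates first and
        # add a vertex only if it is connected to every current member.
        candidates = sorted(unassigned - clique,
                            key=lambda j: sum(sim[j][k] for k in clique),
                            reverse=True)
        for j in candidates:
            if all(sim[j][k] >= threshold for k in clique):
                clique.add(j)
        clusters.append(sorted(clique))
        unassigned -= clique
    return clusters
```

Because cluster membership is decided by mutual similarity within the clique rather than by distance to a centroid, no averaging of time series is ever needed, which is the property the abstract emphasises.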
14
Martin A, Becker SI. A relational account of visual short-term memory (VSTM). Cortex 2021; 144:151-167. PMID: 34666299; DOI: 10.1016/j.cortex.2021.08.013.
Abstract
Visual short-term memory (VSTM) is an important resource that allows temporarily storing visual information. Current theories posit that elementary features (e.g., red, green) are encoded and stored independently of each other in VSTM. However, they have difficulty explaining the similarity effect, that similar items can be remembered better than dissimilar items. In Experiment 1, we tested (N = 20) whether the similarity effect may be due to storing items in a context-dependent manner in VSTM (e.g., as the reddest/yellowest item). In line with a relational account of VSTM, we found that the similarity effect is not due to feature similarity, but to an enhanced sensitivity for detecting changes when the relative colour of a to-be-memorised item changes (e.g., from reddest to not-reddest item), compared to when an item undergoes the same change but retains its relative colour (e.g., still reddest). Experiment 2 (N = 20) showed that VSTM load, as indexed by the CDA amplitude in the EEG, was smaller when the colours were ordered so that they all had the same relationship than when the same colours were out-of-order, requiring encoding of different relative colours. With this, we report two new effects in VSTM: a relational detection advantage, which describes an enhanced sensitivity to relative changes in change detection, and a relational CDA effect, which reflects that VSTM load, as indexed by the CDA, scales with the number of (different) relative features between the memory items. These findings support a relational account of VSTM and question the view that VSTM stores features such as colours independently of each other.
Affiliation(s)
- Aimee Martin
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia
15
Hamblin-Frohman Z, Becker SI. The attentional template in high and low similarity search: Optimal tuning or tuning to relations? Cognition 2021; 212:104732. PMID: 33862440; DOI: 10.1016/j.cognition.2021.104732.
Abstract
The attentional template is often described as the mental representation that drives attentional selection and guidance, for instance, in visual search. Recent research suggests that this template is not a veridical representation of the sought-for target, but instead an altered representation that allows more efficient search. The current paper contrasts two such theories. Firstly, the Optimal Tuning account, which posits that the attentional template shifts to an exaggerated target value to maximise the signal-to-noise ratio between similar targets and non-targets. Secondly, the Relational account, which states that instead of tuning to feature values, attention is directed to the relative value created by the search context, e.g. all redder items or the reddest item. Both theories are empirically supported, but used different paradigms (perceptual decision tasks vs. visual search), and different attentional measures (probe response accuracy vs. gaze capture). The current design incorporates both paradigms and measures. The results reveal that while Optimal Tuning shifts are observed in probe trials, they do not drive early attention or eye-movement behaviour in visual search. Instead, early attention follows the Relational account, selecting all items with the relative target colour (e.g., redder). This suggests that the masked probe trials used in Optimal Tuning do not probe the attentional template that guides attention. In Experiment 3 we find that optimal tuning shifts correspond in magnitude to purely perceptual shifts created by contrast biases in the visual search arrays. This suggests that the shift in probe responses may in fact be a perceptual artefact rather than a strategic adaptation to optimise the signal-to-noise ratio. These results highlight the distinction between early attentional mechanisms and later, target identification mechanisms.
SIGNIFICANCE STATEMENT: Classical theories of attention suggest that attention is guided by a feature-specific target template. In recent designs this has been challenged by an apparent non-veridical tuning of the template in situations where the target stimulus is similar to non-targets. The current studies compare two theories that propose different explanations for non-veridical tuning: the Relational and the Optimal Tuning account. We show that the Relational account describes the mechanism that guides early search behaviour, while the Optimal Tuning account describes perceptual decision-making. Optimal Tuning effects may be due to an artefact that has not been described in visual search before (simultaneous contrast).
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
16
17
Dai C, Pi D, Becker SI. Shapelet-transformed Multi-channel EEG Channel Selection. ACM Trans Intell Syst Technol 2020. DOI: 10.1145/3397850.
Abstract
This article proposes an approach to select EEG channels based on EEG shapelet transformation, aiming to reduce the setup time and inconvenience for subjects and to improve the practical performance of brain-computer interfaces (BCIs). In detail, the method selects the top-k EEG channels by solving a logistic-loss-embedded minimization problem with respect to EEG shapelet learning, hyperplane learning, and EEG channel weight learning simultaneously. In particular, to learn distinctive EEG shapelets for weighting the contribution of each EEG channel to the logistic loss, EEG shapelet similarity is also minimized during the procedure. Furthermore, a gradient descent strategy is adopted to solve the non-convex optimization problem, yielding the algorithm termed StEEGCS. As a result, classification accuracy with the EEG channels selected by StEEGCS is improved compared to using all EEG channels, and classification time is reduced as well. Additionally, comparisons with several state-of-the-art EEG channel selection methods on several real-world EEG datasets demonstrate the efficacy and superiority of StEEGCS.
Affiliation(s)
- Chenglong Dai
- Nanjing University of Aeronautics and Astronautics, Jiangjun Avenue, Nanjing, Jiangsu Province, China
- Dechang Pi
- Nanjing University of Aeronautics and Astronautics, Jiangjun Avenue, Nanjing, Jiangsu Province, China
18
Remington RW, Vromen JMG, Becker SI, Baumann O, Mattingley JB. The Role of Frontoparietal Cortex across the Functional Stages of Visual Search. J Cogn Neurosci 2020; 33:63-76. [PMID: 32985948 DOI: 10.1162/jocn_a_01632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Areas in frontoparietal cortex have been shown to be active in a range of cognitive tasks and have been proposed to play a key role in goal-driven activities (Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., et al. Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, U.S.A., 104, 11073-11078, 2007; Duncan, J. The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behavior. Trends in Cognitive Sciences, 14, 172-179, 2010). Here, we examine the role this frontoparietal system plays in visual search. Visual search, like many complex tasks, consists of a sequence of operations: target selection, stimulus-response (SR) mapping, and response execution. We independently manipulated the difficulty of target selection and SR mapping in a novel visual search task that involved identical stimulus displays. Enhanced activity was observed in areas of frontal and parietal cortex during both difficult target selection and SR mapping. In addition, anterior insula and ACC showed preferential representation of SR-stage information, whereas the medial frontal gyrus, precuneus, and inferior parietal sulcus showed preferential representation of target selection-stage information. A connectivity analysis revealed dissociable neural circuits underlying visual search. We hypothesize that these circuits regulate distinct mental operations associated with the allocation of spatial attention, stimulus decisions, shifts of task set from selection to SR mapping, and SR mapping. Taken together, the results show frontoparietal involvement in all stages of visual search and a specialization with respect to cognitive operations.
Affiliation(s)
- Jason B Mattingley
- The University of Queensland; Canadian Institute for Advanced Research, Toronto, ON, Canada
19
York AA, Sewell DK, Becker SI. Dual target search: Attention tuned to relative features, both within and across feature dimensions. J Exp Psychol Hum Percept Perform 2020; 46:1368-1386. [PMID: 32881554 DOI: 10.1037/xhp0000851] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Current models of attention propose that we can tune attention in a top-down controlled manner to a specific feature value (e.g., shape, color) to find specific items (e.g., a red car; feature-specific search). However, subsequent research has shown that attention is often tuned in a context-dependent manner to the relative features that distinguish a sought-after target from other surrounding nontarget items (e.g., larger, bluer, and faster; relational search). Currently, it is unknown whether search will be feature-specific or relational in search for multiple targets with different attributes. In the present study, observers had to search for 2 targets that differed either across 2 stimulus dimensions (color, motion; Experiment 1) or within the same stimulus dimension (color; Experiment 2: orange/redder or aqua/bluer). We distinguished between feature-specific and relational search by measuring eye movements to different types of irrelevant distractors (e.g., relatively matching vs. feature-matching). The results showed that attention was biased to the 2 relative features of the targets, both across different feature dimensions (i.e., motion and color) and within a single dimension (i.e., 2 colors; bluer and redder). The results were not due to automatic intertrial effects (dimension weighting or feature priming), and we found only small effects for valid precueing of the target feature, indicating that relational search for two targets was conducted with relative ease. This is the first demonstration that attention is top-down biased to the relative target features in dual target search, which shows that the relational account generalizes to multiple target search. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
20
Schönhammer JG, Becker SI, Kerzel D. Attentional capture by context cues, not inhibition of cue singletons, explains same location costs. J Exp Psychol Hum Percept Perform 2020; 46:610-628. [DOI: 10.1037/xhp0000735] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
21
Harris AM, Jacoby O, Remington RW, Becker SI, Mattingley JB. Behavioral and electrophysiological evidence for a dissociation between working memory capacity and feature-based attention. Cortex 2020; 129:158-174. [PMID: 32473402 DOI: 10.1016/j.cortex.2020.04.009] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 02/27/2020] [Accepted: 04/09/2020] [Indexed: 10/24/2022]
Abstract
When attending to visual objects with particular features, neural processing is typically biased toward those features. Previous work has suggested that maintaining such feature-based attentional sets may involve the same neural resources as visual working memory. If so, the extent to which feature-based attention influences stimulus processing should be related to individuals' working memory capacity. Here we used electroencephalography (EEG) to record brain activity in 60 human observers while they monitored stimulus streams for targets of a specific color. Distractors presented at irrelevant locations evoked strong electrophysiological markers of attentional signal enhancement (the N2pc and PD components) despite producing little or no behavioral interference. Critically, there was no relationship between individual differences in the magnitude of these feature-based biases on distractor processing and individual differences in working memory capacity as measured using three separate working memory tasks. Bayes factor analyses indicated substantial evidence in support of the null hypothesis of no relationship between working memory capacity and the effects of feature-based attention on distractor processing. We consider three potential explanations for these findings. One is that working memory and feature-based attention draw upon distinct neural resources, contrary to previous claims. A second is that working memory is only related to feature-based attention when the attentional template has recently changed. A third is that feature-based attention tasks of the kind employed in the current study recruit just one of several subcomponents of working memory, and so are not invariably correlated with an individual's overall working memory capacity.
Affiliation(s)
- Anthony M Harris
- Queensland Brain Institute, The University of Queensland, St Lucia, 4072, Australia.
- Oscar Jacoby
- Queensland Brain Institute, The University of Queensland, St Lucia, 4072, Australia
- Roger W Remington
- School of Psychology, The University of Queensland, St Lucia, 4072, Australia; Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Stefanie I Becker
- School of Psychology, The University of Queensland, St Lucia, 4072, Australia
- Jason B Mattingley
- Queensland Brain Institute, The University of Queensland, St Lucia, 4072, Australia; School of Psychology, The University of Queensland, St Lucia, 4072, Australia; Canadian Institute for Advanced Research (CIFAR), Canada
22
York A, Becker SI. Top-down modulation of gaze capture: Feature similarity, optimal tuning, or tuning to relative features? J Vis 2020; 20:6. [PMID: 32282888 PMCID: PMC7405730 DOI: 10.1167/jov.20.4.6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Accepted: 01/07/2020] [Indexed: 11/24/2022] Open
Abstract
It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variably attributed to (1) tuning attention to the relative target feature that distinguishes the target from other items in the surround (e.g., reddest item; relational tuning), (2) tuning attention to a shifted target feature that allows more optimal target selection (e.g., reddish orange; optimal tuning), or (3) broad attentional tuning and selection of the most salient item that is still similar to the target (combined similarity/saliency). The present study used a color search task and assessed gaze capture by differently coloured distractors to distinguish between the three accounts. The results of the first experiment showed that a very target-dissimilar distractor that matched the relative color of the target but was outside of the area of optimal tuning still captured very strongly. As shown by a control condition and a control experiment, bottom-up saliency modulated capture only weakly, ruling out a combined similarity-saliency account. With this, the results support the relational account that attention is tuned to the relative target feature (e.g., reddest), not an optimal feature value or the target feature.
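The contrast between the relational and feature-specific accounts can be made concrete with a small sketch (an illustration on a simplified linear hue axis, not the study's stimulus code; the numeric hue values and tolerance are hypothetical):

```python
def relational_match(distractor, target, nontarget):
    """Relational account: a distractor matches if it deviates from the
    nontarget colour in the same direction as the target (e.g., is 'redder'),
    regardless of its exact colour value."""
    return (target - nontarget) * (distractor - nontarget) > 0

def feature_match(distractor, target, tolerance=5.0):
    """Feature-specific account: a distractor matches only if its colour is
    close to the exact target value."""
    return abs(distractor - target) <= tolerance

# Hypothetical hue axis: 0 = red, 30 = orange, 60 = yellow.
# An orange target among yellow nontargets is the 'reddest' item, so a red
# distractor matches relationally even though it is far from the target hue,
# mirroring the strong capture by target-dissimilar, relatively matching
# distractors reported in the first experiment.
```

On this scheme the relational account predicts capture by any distractor on the target's side of the nontarget colour, whereas a feature-specific (or optimally tuned) account restricts capture to a band around a particular hue.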
Affiliation(s)
- Ashley York
- The University of Queensland, Brisbane, Australia
23
Affiliation(s)
- Aimee Martin
- School of Psychology, The University of Queensland, Brisbane, Australia
24
Becker SI, Martin A, Finlayson NJ. At what stage of the visual processing hierarchy is visual search relational and context-dependent vs. feature-specific? J Vis 2019. [DOI: 10.1167/19.10.132b] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Aimee Martin
- School of Psychology, The University of Queensland, Brisbane, Australia
- Nonie J Finlayson
- School of Psychology, The University of Queensland, Brisbane, Australia
25
Hamblin-Frohman Z, Becker SI. Attending object features interferes with visual working memory regardless of eye-movements. J Exp Psychol Hum Percept Perform 2019; 45:1049-1061. [PMID: 31021157 DOI: 10.1037/xhp0000651] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
There is currently a debate about the relationship between feature-based attention (FBA) and visual working memory (VWM). One theory proposes that the 2 constructs should be synthesized into a single concept (Kiyonaga & Egner, 2013). In this unified theory, VWM is defined as attention directed toward internal representations, which competes with external attention for a shared limited resource. Contrary to this account, it has been reported that only overt attention shifts (saccades), but not covert attention shifts, interfere with VWM (Tas, Luck, & Hollingworth, 2016). However, the covert condition may have required only spatial attention, not FBA, so the lack of interference may reflect the fact that spatial attention does not interfere with VWM. The current experiments varied feature-based versus spatial attention and overt versus covert orienting, and measured their effects on VWM performance with a change detection paradigm. Results across three experiments show that memory interference arises when object features are attended, regardless of whether attention is directed overtly or covertly. In a fourth experiment we show that attending to spatial information interferes with spatial working memory, whereas attending to feature information does not. These findings demonstrate a dissociation between spatial attention and VWM, which leaves unified concepts of FBA and VWM intact. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
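Change detection performance of the kind measured here is commonly summarised as a capacity estimate; a minimal sketch using Cowan's K, a standard estimate for single-probe change detection (not necessarily the exact measure used in this study):

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false alarm rate), where N is the memory set size."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., 85% hits and 15% false alarms with 4 memory items:
# K = 4 * (0.85 - 0.15) = 2.8 items
```

A drop in K under a concurrent attention task is the kind of memory interference the abstract describes.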
26
Horstmann G, Becker SI. More efficient visual search for happy faces may not indicate guidance, but rather faster distractor rejection: Evidence from eye movements and fixations. Emotion 2019; 20:206-216. [PMID: 30730168 DOI: 10.1037/emo0000536] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The visual search paradigm has been used in emotion research to examine the relation between facial expressions of emotion and attention. Better performance in a search for one facial expression category (e.g., a happy face) compared to a second category (e.g., an angry face) has often been interpreted as indicating better guidance of attention. Better guidance of attention in turn indicates that some aspect of the facial expression can be used preattentively, that is, while focused attention is directed elsewhere in the visual field. This view has been criticized because better performance may also reflect better distractor rejection, independently of guidance. The present study uses eye tracking to disentangle the two variables. The results show better search performance with a happy than with an angry face as the target. Facial emotion also influenced the time the eyes fixated a stimulus (dwelling), but not guidance-related variables of search performance. Moreover, a linear regression showed that dwelling accounted for a large amount of the variance in overall search times. Overall, the results present clear-cut evidence that differential search performance need not indicate differential guidance, but may instead be explained by postselective factors that influence dwelling on stimuli. The broader implication of this demonstration is that results from the visual search paradigm have to be interpreted with caution, and that better search performance cannot be directly interpreted as an indicator of preattentive guidance of attention. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
27
Cornish L, Hill A, Horswill MS, Becker SI, Watson MO. Eye-tracking reveals how observation chart design features affect the detection of patient deterioration: An experimental study. Appl Ergon 2019; 75:230-242. [PMID: 30509531 DOI: 10.1016/j.apergo.2018.10.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 10/09/2018] [Accepted: 10/14/2018] [Indexed: 06/09/2023]
Abstract
Particular design features intended to improve usability - including graphically displayed observations and integrated colour-based scoring-systems - have been shown to increase the speed and accuracy with which users of hospital observation charts detect abnormal patient observations. We used eye-tracking to evaluate two potential cognitive mechanisms underlying these effects. Novice chart-users completed a series of experimental trials in which they viewed patient data presented on one of three observation chart designs (varied within-subjects), and indicated which observation was abnormal (or that none were). A chart that incorporated both graphically displayed observations and an integrated colour-based scoring-system yielded faster, more accurate responses and fewer, shorter fixations than a graphical chart without a colour-based scoring-system. The latter, in turn, yielded the same advantages over a tabular chart (which incorporated neither design feature). These results suggest that both colour-based scoring-systems and graphically displayed observations improve search efficiency and reduce the cognitive resources required to process vital sign data.
Affiliation(s)
- Lillian Cornish
- School of Psychology, The University of Queensland, St Lucia, Brisbane, Queensland, 4072, Australia
- Andrew Hill
- School of Psychology, The University of Queensland, St Lucia, Brisbane, Queensland, 4072, Australia; Clinical Skills Development Service, Metro North Hospital and Health Service, Herston, Brisbane, Queensland, 4006, Australia
- Mark S Horswill
- School of Psychology, The University of Queensland, St Lucia, Brisbane, Queensland, 4072, Australia
- Stefanie I Becker
- School of Psychology, The University of Queensland, St Lucia, Brisbane, Queensland, 4072, Australia
- Marcus O Watson
- School of Psychology, The University of Queensland, St Lucia, Brisbane, Queensland, 4072, Australia; Clinical Skills Development Service, Metro North Hospital and Health Service, Herston, Brisbane, Queensland, 4006, Australia; School of Medicine, The University of Queensland, Herston, Brisbane, Queensland, 4006, Australia
28
Martin A, Becker SI. How feature relationships influence attention and awareness: Evidence from eye movements and EEG. J Exp Psychol Hum Percept Perform 2018; 44:1865-1883. [PMID: 30211593 DOI: 10.1037/xhp0000574] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Many everyday tasks require selecting relevant objects in the visual field while ignoring irrelevant information. A widely held belief is that attention is tuned to the exact feature value(s) of a sought-after target object (e.g., its color or shape). In contrast, subsequent studies have shown that attentional orienting (capture) is often determined by the feature(s) the target has relative to the irrelevant items surrounding it (e.g., redder, larger). However, it is unknown whether conscious awareness is also determined by relative features. Alternatively, awareness could be more strongly determined by exact feature values, which seem to determine dwelling on objects. The present study examined eye movements in a color search task with different types of irrelevant distractors to test (a) whether dwelling is more strongly influenced by exact feature matches than relative matches, and (b) which process (capture vs. dwelling) is more important for conscious awareness of the distractor. A second experiment used an electrophysiological marker of attention (the N2pc in the electroencephalogram of participants) to test whether the results generalize to covert attention shifts. As expected, the results revealed that the initial capture of attention was strongest for distractors matching the relative color of the target, whereas similarity to the target was the most important determinant of dwelling. Awareness was more strongly determined by the initial capture of attention than by dwelling. These results provide important insights into the interplay of attention and awareness and highlight the importance of considering relative, context-dependent features in theories of awareness. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
29
30
Enns JT, Becker SI, Brockmole J, Castelhano M, Creem-Regehr S, Gray R, Hecht H, Juhasz B, Philbeck J, Woodman G. Linking contemporary research to the classics: Celebrating 125 years at APA. J Exp Psychol Hum Percept Perform 2018; 43:1695-1700. [PMID: 28967778 DOI: 10.1037/xhp0000473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
APA is celebrating 125 years this year, and at the journal we are commemorating this milestone with a special issue. The inspiration came from our editorial team, who wished to acknowledge the links between game-changing articles that have influenced our research community in the past (we call them classics for short) and contemporary works. The main idea was to feature the work of nine contemporary research teams, while at the same time drawing readers' attention to their links with the classics. In this introduction, we have organized the articles according to several broad themes: active perception, perception for action, action alters perception, perception of our bodies in action, and acting on selective perceptions. As all who have read and contributed to the journal over the past few years have come to realize, it is no longer possible to study perception without considering its role in action. Nor is it possible to study action (formerly called performance, as reflected in the journal title) without understanding the perceptual contributions to action. These nine articles each exemplify, in their own way, how these dynamic interactions play out in contemporary research in our field. (PsycINFO Database Record)
31
Abstract
Attention selects behaviorally relevant stimuli for further capacity-limited processing and gates their access to awareness. Given the importance of attention for conscious perception, it is important to determine the factors and mechanisms that drive attention. A widespread view is that attention is biased to the specific feature values of a conjunction target (e.g., vertical, red, medium). By contrast, the results of the present study show that attention is tuned to the 2 relative features that distinguish a conjunction target from the irrelevant nontargets (e.g., larger and bluer). Moreover, an irrelevant conjunction cue that is briefly presented prior to the target can automatically attract attention, even in the absence of any feature contrasts. Importantly, automatic orienting to the conjunction cue was completely independent of the physical similarity between cue and target, and depended only on whether the conjunction cue matched the relative features of the target. These results demonstrate that attentional orienting is determined by a mechanism that can rapidly extract information about feature relationships and guide attention to the stimulus that best matches the relative attributes of the target. These results are difficult to reconcile with extant feature-specific accounts or object-based accounts of attention and argue for a relational account of conjunction search. (PsycINFO Database Record
Affiliation(s)
- Ashley York
- School of Psychology, The University of Queensland
- Jessica Choi
- School of Psychology, The University of Queensland
32
Affiliation(s)
- Neelam Dutt
- School of Psychology, The University of Queensland, Brisbane, Australia
- Joyce M. G. Vromen
- School of Psychology, The University of Queensland, Brisbane, Australia
- Queensland Brain Institute, Brisbane, Australia
- Gernot Horstmann
- Department of Psychology, Bielefeld University, Bielefeld, Germany
33
Affiliation(s)
- Josef G. Schönhammer
- Faculté de Psychologie et des Sciences de l’Éducation, Université de Genève, Genève, Switzerland
- Stefanie I. Becker
- School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l’Éducation, Université de Genève, Genève, Switzerland
34
Horstmann G, Herwig A, Becker SI. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search. Front Psychol 2016; 7:1152. [PMID: 27574510 PMCID: PMC4983613 DOI: 10.3389/fpsyg.2016.01152] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2016] [Accepted: 07/19/2016] [Indexed: 11/13/2022] Open
Abstract
Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiency in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target-present trials, even though search was very slow. Dwelling, skipping, and revisiting all contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of the total search time differences.
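The three eye-movement measures can be operationalised from a fixation sequence roughly as follows (a minimal sketch with hypothetical data structures, not the authors' analysis code):

```python
def scanpath_metrics(fixations, display_items):
    """Compute dwelling, skipping, and revisiting from a scanpath.
    fixations: ordered list of (item_id, duration_ms) fixation records;
    display_items: all items present in the display."""
    dwell = {item: 0 for item in display_items}   # total fixation time per item
    visited = set()
    revisits = 0
    previous = None
    for item, duration in fixations:
        dwell[item] += duration
        if item != previous:                      # a new visit begins
            if item in visited:
                revisits += 1                     # return to an already-inspected item
            visited.add(item)
        previous = item
    skipped = [item for item in display_items if item not in visited]
    return dwell, skipped, revisits
```

On this scheme, difficult search would show up as larger dwell totals, fewer skipped items, and more revisits per trial.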
Affiliation(s)
- Gernot Horstmann
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cognitive Interaction Technology - Excellence Center, Bielefeld University, Bielefeld, Germany; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
- Arvid Herwig
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Cognitive Interaction Technology - Excellence Center, Bielefeld University, Bielefeld, Germany; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
35
Retell JD, Becker SI, Remington RW. An effective attentional set for a specific colour does not prevent capture by infrequently presented motion distractors. Q J Exp Psychol (Hove) 2016; 69:1340-65. [DOI: 10.1080/17470218.2015.1080738] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3–6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.
Affiliation(s)
- James D. Retell
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia
- Stefanie I. Becker
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia
- Roger W. Remington
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia
36
Schönhammer JG, Grubert A, Kerzel D, Becker SI. Attentional guidance by relative features: Behavioral and electrophysiological evidence. Psychophysiology 2016; 53:1074-83. [PMID: 26990008 DOI: 10.1111/psyp.12645] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Accepted: 02/19/2016] [Indexed: 11/29/2022]
Abstract
Our ability to select task-relevant information from cluttered visual environments is widely believed to be due to our ability to tune attention to the particular elementary feature values of a sought-after target (e.g., red, orange, yellow). By contrast, recent findings showed that attention is often tuned to feature relationships, that is, to the features the target has relative to irrelevant features in the context (e.g., redder, yellower). However, the evidence for such a relational account has so far been based exclusively on behavioral measures that do not allow firm inferences about early perceptual processes. The present study provides a critical test of the relational account by measuring an electrophysiological marker in the EEG of participants (the N2pc) in response to briefly presented distractors (cues) that could match either the physical features of the target or its relative features. In a first experiment, the target color and nontarget color were kept constant across trials. In line with a relational account, we found that only cues with the same relative color as the target were attended, regardless of whether the cues had the same physical color as the target. In a second experiment, we demonstrate that attention is biased to the exact target feature value when the target is embedded in a randomly varying context. Taken together, these results provide the first electrophysiological evidence that attention can modulate early perceptual processes in two different ways, context-dependent versus context-independent, resulting in marked differences in the range of colors that can attract attention.
Affiliation(s)
- Josef G Schönhammer
- Faculté de Psychologie et des Sciences de l'Éducation, Université de Genève, Geneva, Switzerland
- Anna Grubert
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Éducation, Université de Genève, Geneva, Switzerland
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
37
Abstract
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that had previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that neither image colour nor poser gender accounted for the prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.
Affiliation(s)
- Ruth A Savage
- School of Psychology, University of Queensland, St. Lucia, QLD, Australia
- Stefanie I Becker
- School of Psychology, University of Queensland, St. Lucia, QLD, Australia
- Ottmar V Lipp
- School of Psychology and Speech Pathology, Curtin University, Perth, WA, Australia
38
Abstract
It is widely known that irrelevant onsets (i.e., items appearing in previously empty locations) can automatically capture attention and attract our gaze. Some studies have shown that onset capture is stronger when the onset distractor matches the target feature, indicating that onset capture can be modulated by feature-based (top-down) tuning to the target. However, it is less clear whether and to what extent the perceptual saliency of the distractor can further modulate this effect. This study examined the effects of target similarity, competition between target and distractor, and bottom-up color contrast on the ability of an onset distractor to capture the gaze, by varying the color (contrast) and stimulus-onset asynchrony of the onset distractor. The results clearly show that competition and feature-based attention modulate capture by the irrelevant onset to a large extent, whereas bottom-up color contrast does not. These results indicate the need to revise current accounts of gaze control.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
39
Becker SI, Grubert A, Dux PE. Distinct neural networks for target feature versus dimension changes in visual search, as revealed by EEG and fMRI. Neuroimage 2014; 102 Pt 2:798-808. [DOI: 10.1016/j.neuroimage.2014.08.058]
40
Schneider D, Slaughter VP, Becker SI, Dux PE. Implicit false-belief processing in the human brain. Neuroimage 2014; 101:268-75. [PMID: 25042446] [DOI: 10.1016/j.neuroimage.2014.07.014]
41
Venini D, Remington RW, Horstmann G, Becker SI. Centre-of-Gravity Fixations in Visual Search: When Looking at Nothing Helps to Find Something. J Ophthalmol 2014; 2014:237812. [PMID: 25002972] [PMCID: PMC4065739] [DOI: 10.1155/2014/237812]
Abstract
In visual search, some fixations are made between stimuli on empty regions, commonly referred to as "centre-of-gravity" fixations (henceforth: COG fixations). Previous studies have shown that observers with task expertise show more COG fixations than novices. This led to the view that COG fixations reflect simultaneous encoding of multiple stimuli, allowing more efficient processing of task-related items. The present study tested whether COG fixations also aid performance in visual search tasks with unfamiliar and abstract stimuli. Moreover, to provide evidence for the multiple-item processing view, we analysed the effects of COG fixations on the number and dwell times of stimulus fixations. The results showed that (1) search efficiency increased with increasing COG fixations even in search for unfamiliar stimuli and in the absence of special higher-order skills, (2) COG fixations reliably reduced the number of stimulus fixations and their dwell times, indicating processing of multiple distractors, and (3) the proportion of COG fixations was dynamically adapted to potential information gain of COG locations. A second experiment showed that COG fixations are diminished when stimulus positions unpredictably vary across trials. Together, the results support the multiple-item processing view, which has important implications for current theories of visual search.
Affiliation(s)
- Dustin Venini
- School of Psychology, The University of Queensland, McElwain Building, Brisbane, QLD 4072, Australia
- Gernot Horstmann
- Centre for Interdisciplinary Research, Bielefeld University, 33602 Bielefeld, Germany
- Stefanie I. Becker
- The University of Queensland, Brisbane, Australia; Centre for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
42
Craig BM, Becker SI, Lipp OV. Different faces in the crowd: a happiness superiority effect for schematic faces in heterogeneous backgrounds. Emotion 2014; 14:794-803. [PMID: 24821397] [DOI: 10.1037/a0036043]
Abstract
Recently, D.V. Becker, Anderson, Mortensen, Neufeld, and Neel (2011) proposed recommendations to avoid methodological confounds in visual search studies using emotional photographic faces. These confounds were argued to cause the frequently observed Anger Superiority Effect (ASE), the faster detection of angry than happy expressions, and conceal a true Happiness Superiority Effect (HSE). In Experiment 1, we applied these recommendations (for the first time) to visual search among schematic faces that previously had consistently yielded a robust ASE. Contrary to the prevailing literature, but consistent with D.V. Becker et al. (2011), we observed a HSE with schematic faces. The HSE with schematic faces was replicated in Experiments 2 and 3 using a similar method in discrimination tasks rather than fixed target searches. Experiment 4 isolated background heterogeneity as the key determinant leading to the HSE.
43
Abstract
In visual search for pop-out targets, search times are shorter when the target and non-target colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features, and away from the non-target features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the non-target color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the non-targets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
- Christian Valuch
- Cognitive Research Platform, University of Vienna, Vienna, Austria
- Ulrich Ansorge
- Faculty of Psychology, University of Vienna, Vienna, Austria
44
Becker SI, Harris AM, Venini D, Retell JD. Visual search for color and shape: When is the gaze guided by feature relationships, when by feature values? J Exp Psychol Hum Percept Perform 2014; 40:264-91. [DOI: 10.1037/a0033489]
45
Barutchu A, Becker SI, Carter O, Hester R, Levy NL. The role of task-related learned representations in explaining asymmetries in task switching. PLoS One 2013; 8:e61729. [PMID: 23613919] [PMCID: PMC3628671] [DOI: 10.1371/journal.pone.0061729]
Abstract
Task switch costs often show an asymmetry, with switch costs being larger when switching from a difficult task to an easier task. This asymmetry has been explained by difficult tasks being represented more strongly and consequently requiring more inhibition prior to switching to the easier task. The present study shows that switch cost asymmetries observed in arithmetic tasks (addition vs. subtraction) do not depend on task difficulty: Switch costs of similar magnitudes were obtained when participants were presented with unsolvable pseudo-equations that did not differ in task difficulty. Further experiments showed that neither task switch costs nor switch cost asymmetries were due to perceptual factors (e.g., perceptual priming effects). These findings suggest that asymmetrical switch costs can be brought about by the association of some tasks with greater difficulty than others. Moreover, the finding that asymmetrical switch costs were observed (1) in the absence of a task switch proper and (2) without differences in task difficulty, suggests that present theories of task switch costs and switch cost asymmetries are in important ways incomplete and need to be modified.
Affiliation(s)
- Ayla Barutchu
- Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Melbourne, Victoria, Australia.
46
Abstract
What factors determine which stimuli of a scene will be visually selected and become available for conscious perception? The currently prevalent view is that attention operates on specific feature values, so attention will be drawn to stimuli that have features similar to those of the sought-after target. Here, we show that, instead, attentional capture depends on whether a distractor's feature relationships match the target-nontarget relations (e.g., redder). In three spatial-cuing experiments, we found that (a) a cue with the target color (e.g., orange) can fail to capture attention when the cue-context relations do not match the target-nontarget relations (e.g., redder target vs. yellower cue), whereas (b) a cue with the nontarget color can capture attention when its relations match the target-nontarget relations (e.g., both are redder). These results support a relational account in which attention is biased toward feature relationships instead of particular feature values, and show that attentional capture by an irrelevant distractor does not depend on feature similarity, but rather on whether the distractor matches or mismatches the target's relative attributes (e.g., relative color).
Affiliation(s)
- Stefanie I Becker
- School of Psychology, University of Queensland, Queensland 4072, Australia.
47
Becker SI, Ansorge U. Higher set sizes in pop-out search displays do not eliminate priming or enhance target selection. Vision Res 2013; 81:18-28. [DOI: 10.1016/j.visres.2013.01.009]
48
Savage RA, Lipp OV, Craig BM, Becker SI, Horstmann G. In search of the emotional face: anger versus happiness superiority in visual search. Emotion 2013; 13:758-68. [PMID: 23527503] [DOI: 10.1037/a0031970]
Abstract
Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results reflect not emotional expression per se, but emotion-unrelated low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically, search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman and Friesen database; Experiments 1 and 2), and emotional intensity (Experiments 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for the differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect low-level visual features associated with the stimulus materials used, rather than emotion.
Affiliation(s)
- Ruth A Savage
- School of Psychology, University of Queensland, QLD 4072, Australia
49
Abstract
Eye fixations allow the human viewer to perceive scene content with high acuity. If fixations drive visual memory for scenes, a viewer might repeat his or her previous fixation pattern during recognition of a familiar scene. However, visual salience alone could account for similarities between two successive fixation patterns by attracting the eyes in a stimulus-driven, task-independent manner. In the present study, we tested whether the viewer's aim to recognize a scene fosters fixations on scene content that repeats from learning to recognition, over and above the influence of visual salience alone. In Experiment 1 we compared gaze behavior in a recognition task to that in a free-viewing task; because the same stimuli were shown in both tasks, the task-independent influence of salience was held constant. We found that during a recognition task, but not during (repeated) free viewing, viewers showed a pronounced preference for previously fixated scene content. In Experiment 2 we tested whether participants remembered visual input that they had fixated during learning better than salient but nonfixated visual input. To that end, we presented participants with smaller cutouts from learned and new scenes. We found that cutouts featuring scene content fixated during encoding were recognized better and faster than cutouts featuring nonfixated but highly salient scene content from learned scenes. Both experiments supported the hypothesis that fixations during encoding, and perhaps during recognition, serve visual memory over and above a stimulus-driven influence of visual salience.
50
Schneider D, Bayliss AP, Becker SI, Dux PE. Eye movements reveal sustained implicit processing of others' mental states. J Exp Psychol Gen 2012; 141:433-8. [DOI: 10.1037/a0025458]