1
Sefranek M, Zokaei N, Draschkow D, Nobre AC. Comparing the impact of contextual associations and statistical regularities in visual search and attention orienting. PLoS One 2024; 19:e0302751. PMID: 39570820; PMCID: PMC11581329; DOI: 10.1371/journal.pone.0302751.
Abstract
During visual search, we quickly learn to attend to an object's likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Affiliation(s)
- Marcus Sefranek
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Nahid Zokaei
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Dejan Draschkow
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Anna C. Nobre
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Wu Tsai Institute, Yale University, New Haven, CT, United States of America
- Department of Psychology, Yale University, New Haven, CT, United States of America
2
Tavera F, Haider H. The role of selective attention in implicit learning: evidence for a contextual cueing effect of task-irrelevant features. Psychol Res 2024; 89:15. PMID: 39540996; PMCID: PMC11564302; DOI: 10.1007/s00426-024-02033-9.
Abstract
With attentional mechanisms, humans select and de-select information from the environment. But does selective attention modulate implicit learning? We tested whether the implicit acquisition of contingencies between features is modulated by the task-relevance of those features. We implemented the contingencies in a novel variant of the contextual cueing paradigm. In this visual search task, participants could use non-spatial cues to predict the target location, and then had to discriminate target shapes. In Experiment 1, the predictive feature for target location was the shape of the distractors (task-relevant). In Experiment 2, the color of the distractors (task-irrelevant) cued target location. Results showed that participants learned to predict the target location from both the task-relevant and the task-irrelevant feature. Subsequent testing did not suggest explicit knowledge of the contingencies. To further test the significance of task-relevance in a cue-competition situation, in Experiment 3 we provided two redundantly predictive cues, shape (task-relevant) and color (task-irrelevant), simultaneously, and subsequently tested them separately. There were no observed costs of single predictive cues compared with compound cues. The results were not indicative of overshadowing effects, at the group or individual level, or of reciprocal overshadowing. We conclude that the acquisition of contingencies occurs independently of task-relevance and discuss this finding in the framework of the event-coding literature.
Affiliation(s)
- Felice Tavera
- Department of Psychology, University of Cologne, Richard-Strauss-Str. 2, 50931, Cologne, Germany.
- Hilde Haider
- Department of Psychology, University of Cologne, Richard-Strauss-Str. 2, 50931, Cologne, Germany
3
Hatori Y, Yuan ZX, Tseng CH, Kuriki I, Shioiri S. Modeling the dynamics of contextual cueing effect by reinforcement learning. J Vis 2024; 24:11. PMID: 39560623; DOI: 10.1167/jov.24.12.11.
Abstract
Humans use environmental context to facilitate object search, but this benefit of context for visual search requires learning. Modeling how context is learned for efficient processing is vital to understanding visual function in everyday environments. We proposed a model that accounts for the contextual cueing effect, which refers to the learned use of scene context to identify the location of a target item. The model extracts the global feature of a scene and gradually strengthens the relationship between that global feature and the target location with repeated observations. We compared model and human performance in two visual search experiments (letter arrangements on a gray background or a natural scene). The proposed model successfully simulated the faster reduction in the number of saccades required before target detection for the natural-scene background compared with the uniform gray background. We further tested whether the model replicated known characteristics of the contextual cueing effect in terms of local learning around the target, the effect of the ratio of repeated to novel stimuli, and the superiority of natural scenes.
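As a rough illustration only (not the authors' implementation), the kind of incremental learning described in this abstract can be sketched as a simple tabular update: each repeated observation of a scene strengthens the association between that scene's context and the target's location, and search is then guided toward the most strongly associated location. All names below are hypothetical.

```python
# Hypothetical sketch of incremental context -> target-location learning,
# loosely inspired by the model summarized above (not the authors' code).
from collections import defaultdict

class ContextLocationLearner:
    def __init__(self, alpha=0.2):
        self.alpha = alpha  # learning rate
        # association strength: context id -> location -> weight in [0, 1]
        self.weights = defaultdict(lambda: defaultdict(float))

    def observe(self, context, target_location):
        """Strengthen the context-target association (delta-rule update)."""
        w = self.weights[context]
        w[target_location] += self.alpha * (1.0 - w[target_location])

    def predicted_location(self, context, candidates):
        """Guide search toward the most strongly associated candidate."""
        w = self.weights[context]
        return max(candidates, key=lambda loc: w[loc])

learner = ContextLocationLearner(alpha=0.2)
for _ in range(10):  # repeated exposures to the same scene
    learner.observe("kitchen_scene", (3, 1))
best = learner.predicted_location("kitchen_scene", [(0, 0), (3, 1), (2, 2)])
# After repeated observations, the learned weight pulls search to (3, 1).
```

With repeated exposure the association weight saturates toward 1, mirroring the gradual strengthening with repeated observations that the abstract describes.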
Affiliation(s)
- Yasuhiro Hatori
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- National Institute of Occupational Safety and Health, Japan, Tokyo, Japan
- Zheng-Xiong Yuan
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Ichiro Kuriki
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Graduate School of Science and Engineering, Saitama University, Saitama, Japan
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
4
Clement A, Anderson BA. Statistically learned associations among objects bias attention. Atten Percept Psychophys 2024; 86:2251-2261. PMID: 39198359; DOI: 10.3758/s13414-024-02941-3.
Abstract
A growing body of research suggests that semantic relationships among objects can influence the control of attention. There is also some evidence that learned associations among objects can bias attention. However, it is unclear whether these findings are due to statistical learning or existing semantic relationships. In the present study, we examined whether statistically learned associations among objects can bias attention in the absence of existing semantic relationships. Participants searched for one of four targets among pairs of novel shapes and identified whether the target was present or absent from the display. In an initial training phase, each target was paired with an associated distractor in a fixed spatial configuration. In a subsequent test phase, each target could be paired with the previously associated distractor or a different distractor. In our first experiment, the previously associated distractor was always presented in the same pair as the target. Participants were faster to respond when this distractor was present on target-present trials. In our second experiment, the previously associated distractor was presented in a different pair than the target in the test phase. In this case, participants were slower to respond when this distractor was present on both target-present and target-absent trials. Together, these findings provide clear evidence that statistically learned associations among objects can bias attention, analogous to the effects of semantic relationships on attention.
Affiliation(s)
- Andrew Clement
- Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, USA.
- Department of Psychology and Neuroscience, Millsaps College, 1701 N. State St, Jackson, MS, 39210, USA.
- Brian A Anderson
- Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, USA
5
Zhou B, Feng Z, Liu J, Huang Z, Gao Y. A method to enhance drivers' hazard perception at night based on "knowledge-attitude-practice" theory. Accid Anal Prev 2024; 200:107565. PMID: 38569350; DOI: 10.1016/j.aap.2024.107565.
Abstract
During nighttime driving, the inherent challenges of low-illuminance conditions often lead to an increased crash rate and higher fatalities by impairing drivers' ability to recognize imminent hazards. While the severity of this issue is widely recognized, a significant research void exists with regard to strategies to enhance hazard perception under such circumstances. To address this lacuna, our study examined the potential of an intervention grounded in the knowledge-attitude-practice (KAP) framework to bolster nighttime hazard detection among drivers. We engaged a cohort of sixty drivers split randomly into an intervention group (undergoing specialized training) and a control group and employed a holistic assessment that combined eye movement analytics, physiological response monitoring, and driving performance evaluations during simulated scenarios pre- and post-intervention. The data showed that the KAP-centric intervention honed drivers' visual search techniques during nighttime driving, allowing them to confront potential threats with reduced physiological tension and ensuring more adept vehicle handling. These compelling findings support the integration of this methodology in driver training curricula and present an innovative strategy to enhance road safety during nighttime journeys.
Affiliation(s)
- Bin Zhou
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei 230009, Anhui, PR China
- Zhongxiang Feng
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei 230009, Anhui, PR China
- Jing Liu
- School of Mechanical and Electrical Engineering, Anhui Jianzhu University, Hefei 230009, Anhui, PR China
- Key Laboratory of Traffic Information and Safety, Hefei 230009, Anhui, PR China
- Zhipeng Huang
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei 230009, Anhui, PR China
- Ya Gao
- School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei 230009, Anhui, PR China
6
Biggs AT, Pettijohn KA, Blacker KJ. Contextual cueing during lethal force training: How target design and repetition can alter threat assessments. Mil Psychol 2024; 36:353-365. PMID: 38661462; PMCID: PMC11057649; DOI: 10.1080/08995605.2023.2178785.
Abstract
Lethal force training requires individuals to make threat assessments, which involves holistic scenario processing to identify potential threats. Photorealistic targets can make threat/non-threat judgments substantially more genuine and challenging compared to simple cardboard or silhouette targets. Unfortunately, repeated target use also brings unintended consequences that could invalidate threat assessment processes conducted during training. Contextually rich or unique targets could be implicitly memorable in a way that allows observers to recall weapon locations rather than forcing observers to conduct a naturalistic assessment. Experiment 1 demonstrated robust contextual cueing effects in a well-established shoot/don't-shoot stimulus set, and Experiment 2 extended this finding from complex scene stimuli to simple actor-only stimuli. Experiment 3 demonstrated that these effects also occurred among trained professionals using rifles rather than computer-based tasks. Taken together, these findings demonstrate the potential for uncontrolled target repetition to alter the fundamental processes of threat assessment during lethal force training.
Affiliation(s)
- Adam T. Biggs
- Medical Department, Naval Special Warfare Command, Coronado, California
- Kyle A. Pettijohn
- Aeromedical Department, Naval Medical Research Unit – Dayton, Wright-Patterson AFB, Dayton, Ohio
- Kara J. Blacker
- Aeromedical Department, Naval Medical Research Unit – Dayton, Wright-Patterson AFB, Dayton, Ohio
7
Liu X, Ma J, Zhao G, Sun HJ. The effect of gaze information associated with the search items on contextual cueing effect. Atten Percept Psychophys 2024; 86:84-94. PMID: 38030821; DOI: 10.3758/s13414-023-02817-y.
Abstract
Previous research on the mechanisms of the contextual cueing effect has been inconsistent: some researchers have attributed the contextual benefit to attentional guidance, whereas others have argued that attentional guidance is not its source. We brought the "stare-in-the-crowd" effect, which uses pictures of gaze with different orientations as stimuli, into a traditional contextual cueing paradigm to investigate whether attentional guidance plays a part in this effect. We embedded the letters used in a traditional contextual cueing paradigm into gaze pictures with direct and averted orientations. In Experiment 1, we found a weak interaction between the contextual cueing effect and the "stare-in-the-crowd" effect. In Experiments 2 and 3, we found that the contextual cueing effect was influenced differently depending on whether the direct gaze was combined with the target or with distractors. These results suggest that attentional guidance plays an important role in generating the contextual cueing effect and that direct gaze has a special impact on visual search. In summary, direct gaze at the target location facilitates the contextual cueing effect, and this facilitation is even greater when direct gaze at the target location is compared with direct gaze at a distractor location (Experiments 2 and 3). This effect of gaze on contextual cueing is manifested even when the "stare-in-the-crowd" effect itself is absent in the New configurations (search trials without learning).
Affiliation(s)
- Xingze Liu
- Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Jie Ma
- Department of Psychology, South China Normal University, Guangzhou, Guangdong, China
- Guang Zhao
- Faculty of Psychology, Tianjin Normal University, Tianjin, China
- Hong-Jin Sun
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
8
Chen C, Lee VG. Contribution of peripheral vision to attentional learning. Atten Percept Psychophys 2024; 86:95-108. PMID: 37985596; DOI: 10.3758/s13414-023-02808-z.
Abstract
Attention is tuned towards locations that frequently contain a visual search target (location probability learning; LPL). Peripheral vision, covering a larger field than the fovea, often receives information about the target. Yet what is the role of peripheral vision in attentional learning? Using gaze-contingent eye tracking, we examined the impact of simulated peripheral vision loss on location probability learning. Participants searched for a target T among distractor Ls. Unbeknownst to them, the T appeared disproportionately often in one quadrant. Participants searched with either intact vision or "tunnel vision," restricting the visible search items to the central 6.7º (in diameter) of the current gaze. When trained with tunnel vision, participants in Experiment 1 acquired LPL, but only if they became explicitly aware of the target's location probability. The unaware participants were not faster finding the target in high-probability than in low-probability locations. When trained with intact vision, participants in Experiment 2 successfully acquired LPL, regardless of whether they were aware of the target's location probability. Thus, whereas explicit learning may proceed with central vision alone, implicit LPL is strengthened by peripheral vision. Consistent with Guided Search (Wolfe, 2021), peripheral vision supports a nonselective pathway to guide visual search.
Affiliation(s)
- Chen Chen
- Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, MN, 55455, USA.
- Vanessa G Lee
- Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, MN, 55455, USA
- Center for Cognitive Sciences, University of Minnesota, Minneapolis, MN, USA
9
A-Izzeddin EJ, Mattingley JB, Harrison WJ. The influence of natural image statistics on upright orientation judgements. Cognition 2024; 242:105631. PMID: 37820487; DOI: 10.1016/j.cognition.2023.105631.
Abstract
Humans have well-documented priors for many features present in nature that guide visual perception. Despite being putatively grounded in the statistical regularities of the environment, scene priors are frequently violated due to the inherent variability of visual features from one scene to the next. However, these repeated violations do not appreciably challenge visuo-cognitive function, necessitating the broad use of priors in conjunction with context-specific information. We investigated the trade-off between participants' internal expectations formed from both longer-term priors and those formed from immediate contextual information using a perceptual inference task and naturalistic stimuli. Notably, our task required participants to make perceptual inferences about naturalistic images using their own internal criteria, rather than making comparative judgements. Nonetheless, we show that observers' performance is well approximated by a model that makes inferences using a prior for low-level image statistics, aggregated over many images. We further show that the dependence on this prior is rapidly re-weighted against contextual information, even when misleading. Our results therefore provide insight into how apparent high-level interpretations of scene appearances follow from the most basic of perceptual processes, which are grounded in the statistics of natural images.
Affiliation(s)
- Emily J A-Izzeddin
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia.
- Jason B Mattingley
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia
- School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
- William J Harrison
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia
- School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
10
Liao MR, Kim AJ, Anderson BA. Neural correlates of value-driven spatial orienting. Psychophysiology 2023; 60:e14321. PMID: 37171022; PMCID: PMC10524674; DOI: 10.1111/psyp.14321.
Abstract
Reward learning has been shown to habitually guide overt spatial attention to specific regions of a scene. However, the neural mechanisms that support this bias are unknown. In the present study, participants learned to orient themselves to a particular quadrant of a scene (a high-value quadrant) to maximize monetary gains. This learning was scene-specific, with the high-value quadrant varying across different scenes. During a subsequent test phase, participants were faster at identifying a target if it appeared in the high-value quadrant (valid), and initial saccades were more likely to be made to the high-value quadrant. fMRI analyses during the test phase revealed learning-dependent priority signals in the caudate tail, superior colliculus, frontal eye field, anterior cingulate cortex, and insula, paralleling findings concerning feature-based, value-driven attention. In addition, ventral regions typically associated with scene selection and spatial information processing, including the hippocampus, parahippocampal gyrus, and temporo-occipital cortex, were also implicated. Taken together, our findings offer new insights into the neural architecture subserving value-driven attention, both extending our understanding of nodes in the attention network previously implicated in feature-based, value-driven attention and identifying a ventral network of brain regions implicated in reward's influence on scene-dependent spatial orienting.
Affiliation(s)
- Ming-Ray Liao
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Andy J Kim
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Brian A Anderson
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
11
Kershner AM, Hollingworth A. Real-world object categories and scene contexts conjointly structure statistical learning for the guidance of visual search. Atten Percept Psychophys 2022; 84:1304-1316. PMID: 35426031; PMCID: PMC9010067; DOI: 10.3758/s13414-022-02475-6.
Abstract
We examined how object categories and scene contexts act in conjunction to structure the acquisition and use of statistical regularities to guide visual search. In an exposure session, participants viewed five object exemplars in each of two colors in each of 42 real-world categories. Objects were presented individually against scene context backgrounds. Exemplars within a category were presented with different contexts as a function of color (e.g., the five red staplers were presented with a classroom scene, and the five blue staplers with an office scene). Participants then completed a visual search task, in which they searched for novel exemplars matching a category label cue among arrays of eight objects superimposed over a scene background. In the context-match condition, the color of the target exemplar was consistent with the color associated with that combination of category and scene context from the exposure phase (e.g., a red stapler in a classroom scene). In the context-mismatch condition, the color of the target was not consistent with that association (e.g., a red stapler in an office scene). In two experiments, search response time was reliably lower in the context-match than in the context-mismatch condition, demonstrating that the learning of category-specific color regularities was itself structured by scene context. The results indicate that categorical templates retrieved from long-term memory are biased toward the properties of recent exemplars and that this learning is organized in a scene-specific manner.
Affiliation(s)
- Ariel M Kershner
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, 52242, USA.
- Andrew Hollingworth
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, 52242, USA
12
Chen S, Shi Z, Zinchenko A, Müller HJ, Geyer T. Cross-modal contextual memory guides selective attention in visual-search tasks. Psychophysiology 2022; 59:e14025. PMID: 35141899; DOI: 10.1111/psyp.14025.
Abstract
Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the visual target is predicted by distractors from another, tactile, sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and nonrepeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus nonrepeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (crossmodal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of ERPs indexing attentional allocation (PCN) and postselective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and crossmodal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and crossmodal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.
Affiliation(s)
- Siyi Chen
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Zhuanghua Shi
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Artyom Zinchenko
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Hermann J Müller
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
- Thomas Geyer
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany
13
Bendall RCA, Eachus P, Thompson C. The influence of stimuli valence, extraversion, and emotion regulation on visual search within real-world scenes. Sci Rep 2022; 12:948. PMID: 35042925; PMCID: PMC8766590; DOI: 10.1038/s41598-022-04964-y.
Abstract
Affective traits, including extraversion and emotion regulation, are important considerations in clinical psychology due to their associations with the occurrence of affective disorders. Previously, emotional real-world scenes have been shown to influence visual search. However, it is currently unknown whether extraversion and emotion regulation can influence visual search towards neutral targets embedded within real-world scenes, or whether these traits can impact the effect of emotional stimuli on visual search. An opportunity sample of healthy individuals had trait levels of extraversion and emotion regulation recorded before completing a visual search task. Participants more accurately identified search targets in neutral images compared to positive images, whilst response times were slower in negative images. Importantly, individuals with higher trait levels of expressive suppression displayed faster identification of search targets regardless of the emotional valence of the stimuli. Extraversion and cognitive reappraisal did not influence visual search. These findings add to our understanding regarding the influence of extraversion, cognitive reappraisal, and expressive suppression on our ability to allocate attention during visual search when viewing real-world scenes.
Affiliation(s)
- Robert C A Bendall
- Directorate of Psychology and Sport, School of Health and Society, University of Salford, Allerton Building, Frederick Road, Salford, M5 4WT, UK.
| | - Peter Eachus
- Directorate of Psychology and Sport, School of Health and Society, University of Salford, Allerton Building, Frederick Road, Salford, M5 4WT, UK
| | - Catherine Thompson
- Directorate of Psychology and Sport, School of Health and Society, University of Salford, Allerton Building, Frederick Road, Salford, M5 4WT, UK
| |
Collapse
|
14
|
OUP accepted manuscript. Cereb Cortex 2022; 32:4156-4171. [DOI: 10.1093/cercor/bhab472]
15
Simulated central vision loss does not impair implicit location probability learning when participants search through simple displays. Atten Percept Psychophys 2021; 84:1901-1912. [PMID: 34921336] [PMCID: PMC8682040] [DOI: 10.3758/s13414-021-02416-9]
Abstract
Central vision loss disrupts voluntary shifts of spatial attention during visual search. Recently, we reported that a simulated scotoma impaired learned spatial attention towards regions likely to contain search targets. In that task, search items were overlaid on natural scenes. Because natural scenes can induce explicit awareness of learned biases, leading to voluntary shifts of attention, here we used a search display with a blank background less likely to induce awareness of target location probabilities. Participants searched both with and without a simulated central scotoma: a training phase contained targets more often in one screen quadrant and a testing phase contained targets equally often in all quadrants. In Experiment 1, training used no scotoma, while testing alternated between blocks of scotoma and no-scotoma search. In Experiment 2, training included the scotoma, and testing again alternated between scotoma and no-scotoma search. Response times and saccadic behaviors in both experiments showed attentional biases towards the high-probability target quadrant during scotoma and no-scotoma search. Whereas simulated central vision loss impairs learned spatial attention in the context of natural scenes, our results show that this may not arise from impairments to the basic mechanisms of attentional learning indexed by visual search tasks without scenes.
16
Anderson BA, Kim H, Kim AJ, Liao MR, Mrkonja L, Clement A, Grégoire L. The past, present, and future of selection history. Neurosci Biobehav Rev 2021; 130:326-350. [PMID: 34499927] [PMCID: PMC8511179] [DOI: 10.1016/j.neubiorev.2021.09.004]
Abstract
The last ten years of attention research have witnessed a revolution, replacing a theoretical dichotomy (top-down vs. bottom-up control) with a trichotomy (biased by current goals, physical salience, and selection history). This third new mechanism of attentional control, selection history, is multifaceted. Some aspects of selection history must be learned over time whereas others reflect much more transient influences. A variety of different learning experiences can shape the attention system, including reward, aversive outcomes, past experience searching for a target, target‒non-target relations, and more. In this review, we provide an overview of the historical forces that led to the proposal of selection history as a distinct mechanism of attentional control. We then propose a formal definition of selection history, with concrete criteria, and identify different components of experience-driven attention that fit within this definition. The bulk of the review is devoted to exploring how these different components relate to one another. We conclude by proposing an integrative account of selection history centered on underlying themes that emerge from our review.
Affiliation(s)
- Brian A Anderson
- Texas A&M University, College Station, TX, 77843, United States
- Haena Kim
- Texas A&M University, College Station, TX, 77843, United States
- Andy J Kim
- Texas A&M University, College Station, TX, 77843, United States
- Ming-Ray Liao
- Texas A&M University, College Station, TX, 77843, United States
- Lana Mrkonja
- Texas A&M University, College Station, TX, 77843, United States
- Andrew Clement
- Texas A&M University, College Station, TX, 77843, United States

17
Abstract
There is growing appreciation for the role of long-term memory in guiding temporal preparation in speeded reaction time tasks. In experiments with variable foreperiods between a warning stimulus (S1) and a target stimulus (S2), preparation is affected by foreperiod distributions experienced in the past, long after the distribution has changed. These effects from memory can shape preparation largely implicitly, outside of participants' awareness. Recent studies have demonstrated the associative nature of memory-guided preparation. When distinct S1s predict different foreperiods, they can trigger differential preparation accordingly. Here, we propose that memory-guided preparation allows for another key feature of learning: the ability to generalize across acquired associations and apply them to novel situations. Participants completed a variable foreperiod task where S1 was a unique image of either a face or a scene on each trial. Images of either category were paired with different distributions with predominantly shorter versus predominantly longer foreperiods. Participants displayed differential preparation to never-before seen images of either category, without being aware of the predictive nature of these categories. They continued doing so in a subsequent Transfer phase, after they had been informed that these contingencies no longer held. A novel rolling regression analysis revealed at a fine timescale how category-guided preparation gradually developed throughout the task, and that explicit information about these contingencies only briefly disrupted memory-guided preparation. These results offer new insights into temporal preparation as the product of a largely implicit process governed by associative learning from past experiences.
18
Peacock CE, Cronin DA, Hayes TR, Henderson JM. Meaning and expected surfaces combine to guide attention during visual search in scenes. J Vis 2021; 21:1. [PMID: 34609475] [PMCID: PMC8496418] [DOI: 10.1167/jov.21.11.1]
Abstract
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Deborah A Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA

19
Blakley EC, Gaspelin N, Gerhardstein P. The development of oculomotor suppression of salient distractors in children. J Exp Child Psychol 2021; 214:105291. [PMID: 34607075] [DOI: 10.1016/j.jecp.2021.105291]
Abstract
There is considerable evidence that adults can prevent attentional capture by physically salient stimuli via proactive inhibition. A key question is whether young children can also inhibit salient stimuli to prevent visual distraction. The current study directly compared attentional capture in children (mean age = 5.5 years) and adults (mean age = 19.3 years) by measuring overt eye movements. Participants searched for a target shape among heterogeneous distractor shapes and attempted to ignore a salient color singleton distractor. The destination of first saccades was used to assess attentional capture by the salient distractor, providing a more direct index of attentional allocation than prior developmental studies. Adults were able to suppress saccades to the singleton distractor, replicating previous studies. Children, however, demonstrated no such oculomotor suppression; first saccades were equally likely to be directed to the singleton distractor and nonsingleton distractors. Subsequent analyses indicated that children were able to suppress the distractor, but this occurred approximately 550 ms after stimulus presentation. The current results suggest that children possess some level of top-down control over visual attention, but this top-down control is delayed compared with adults. Development of this ability may be related to executive functions, which include goal-directed behavior such as organized search and impulse control as well as preparatory and inhibitory cognitive functions.
Affiliation(s)
- Emily C Blakley
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA
- Nicholas Gaspelin
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA
- Peter Gerhardstein
- Department of Psychology, Binghamton University, State University of New York, Binghamton, NY 13902, USA

20
No explicit memory for individual trial display configurations in a visual search task. Mem Cognit 2021; 49:1705-1721. [PMID: 34100195] [DOI: 10.3758/s13421-021-01185-y]
Abstract
Previous evidence demonstrated that individuals can recall a target's location in a search display even if location information is completely task-irrelevant. This finding raises the question: does this ability to automatically encode a single item's location into a reportable memory trace extend to other aspects of spatial information as well? We tested this question using a paradigm designed to elicit attribute amnesia (Chen & Wyble, Psychological Science, 26(2), 203-210, 2015a). Participants were initially asked to report the location of a target letter among digits with stimuli arranged to form one of two or four spatial configurations varying randomly across trials. After completing numerous trials that matched their expectations, participants were surprised with a series of unexpected questions probing their memory for various aspects of the display they had just viewed. Participants had a profound inability to report which spatial configuration they had just perceived when the target's location was not unique to a specific configuration (i.e., orthogonal). Despite being unable to report the most recent configuration, answer choices on the surprise trial were focused around previously seen configurations, rather than novel configurations. Thus, there were clear memories of the set of configurations that had been viewed during the experiment but not of the specific configuration from the most recent trial. This finding helps to set boundary conditions on previous findings regarding the automatic encoding of location information into memory.
21
Kosovicheva A, Bex PJ. Gravitational effects of scene information in object localization. Sci Rep 2021; 11:11520. [PMID: 34075169] [PMCID: PMC8169838] [DOI: 10.1038/s41598-021-91006-8]
Abstract
We effortlessly interact with objects in our environment, but how do we know where something is? An object's apparent position does not simply correspond to its retinotopic location but is influenced by its surrounding context. In the natural environment, this context is highly complex, and little is known about how visual information in a scene influences the apparent location of the objects within it. We measured the influence of local image statistics (luminance, edges, object boundaries, and saliency) on the reported location of a brief target superimposed on images of natural scenes. For each image statistic, we calculated the difference between the image value at the physical center of the target and the value at its reported center, using observers' cursor responses, and averaged the resulting values across all trials. To isolate image-specific effects, difference scores were compared to a randomly permuted null distribution that accounted for any response biases. The observed difference scores indicated that responses were significantly biased toward darker regions, luminance edges, object boundaries, and areas of high saliency, with relatively low shared variance among these measures. In addition, we show that the same image statistics were associated with observers' saccade errors, despite large differences in response time, and that some effects persisted when high-level scene processing was disrupted by 180° rotations and color negatives of the originals. Together, these results provide evidence for landmark effects within natural images, in which feature location reports are pulled toward low- and high-level informative content in the scene.
Affiliation(s)
- Anna Kosovicheva
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6, Canada
- Department of Psychology, Northeastern University, 125 Nightingale Hall, 360 Huntington Ave., Boston, MA 02115, USA
- Peter J. Bex
- Department of Psychology, Northeastern University, 125 Nightingale Hall, 360 Huntington Ave., Boston, MA 02115, USA

22
Abstract
In healthy vision, the fovea provides high acuity and serves as the locus for fixation achieved through saccadic eye movements. Bilateral loss of the foveal regions in both eyes causes individuals to adopt an eccentric locus for fixation. This review deals with the eye movement consequences of the loss of the foveal oculomotor reference and the ability of individuals to use an eccentric fixation locus as the new oculomotor reference. Eye movements are an integral part of everyday activities, such as reading, searching for an item of interest, eye-hand coordination, navigation, or tracking an approaching car. We consider how these tasks are impacted by the need to use an eccentric locus for fixation and as a reference for eye movements, specifically saccadic and smooth pursuit eye movements. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Preeti Verghese
- The Smith-Kettlewell Eye Research Institute, San Francisco, California 94115, USA
- Cécile Vullings
- The Smith-Kettlewell Eye Research Institute, San Francisco, California 94115, USA
- Natela Shanidze
- The Smith-Kettlewell Eye Research Institute, San Francisco, California 94115, USA

23
Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942] [PMCID: PMC8084831] [DOI: 10.3758/s13414-021-02256-7]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK

24
The effects of perceptual cues on visual statistical learning: Evidence from children and adults. Mem Cognit 2021; 49:1645-1664. [PMID: 33876401] [DOI: 10.3758/s13421-021-01179-w]
Abstract
In visual statistical learning, one can extract the statistical regularities of target locations in an incidental manner. The current study examined the impact of salient perceptual cues on one type of visual statistical learning: probability cueing effects. In a visual search task, the target appeared more often in one quadrant (i.e., rich) than the other quadrants (i.e., sparse). Then, the screen was rotated by 90° and the targets appeared in the four quadrants with equal probabilities. In Experiment 1, without the addition of salient perceptual cues, adults showed significant probability cueing effects, but did not show a persistent attentional bias in the testing phase. In Experiments 2, 3, and 4, salient perceptual cues were added to the rich or the sparse quadrants. Adults showed significant probability cueing effects but no persistent attentional bias. In Experiment 5, younger children, older children, and adults showed significant probability cueing effects. All three groups also showed an attentional gradient phenomenon: reaction times were slower when the targets were in the sparse quadrant diagonal to, rather than adjacent to, the rich quadrant. Furthermore, both groups of children showed a persistent egocentric attentional bias in the testing phase. These findings indicated that salient perceptual cues enhanced but did not reduce probability cueing effects, that children and adults shared similar basic attentional mechanisms in probability cueing effects, and that children and adults differed in the persistence of attentional bias.
25
Rehrig GL, Cheng M, McMahan BC, Shome R. Why are the batteries in the microwave? Use of semantic information under uncertainty in a search task. Cogn Res Princ Implic 2021; 6:32. [PMID: 33855644] [PMCID: PMC8046897] [DOI: 10.1186/s41235-021-00294-1]
Abstract
A major problem in human cognition is to understand how newly acquired information and long-standing beliefs about the environment combine to make decisions and plan behaviors. Over-dependence on long-standing beliefs may be a significant source of suboptimal decision-making in unusual circumstances. While the contribution of long-standing beliefs about the environment to search in real-world scenes is well-studied, less is known about how new evidence informs search decisions, and it is unclear whether the two sources of information are used together optimally to guide search. The present study expanded on the literature on semantic guidance in visual search by modeling a Bayesian ideal observer's use of long-standing semantic beliefs and recent experience in an active search task. The ability to adjust expectations to the task environment was simulated using the Bayesian ideal observer, and subjects' performance was compared to ideal observers that depended on prior knowledge and recent experience to varying degrees. Target locations were either congruent with scene semantics, incongruent with what would be expected from scene semantics, or random. Half of the subjects were able to learn to search for the target in incongruent locations over repeated experimental sessions when it was optimal to do so. These results suggest that searchers can learn to prioritize recent experience over knowledge of scenes in a near-optimal fashion when it is beneficial to do so, as long as the evidence from recent experience was learnable.
Affiliation(s)
- Gwendolyn L Rehrig
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Michelle Cheng
- School of Social Sciences, Nanyang Technological University, Singapore, 639798, Singapore
- Brian C McMahan
- Department of Computer Science, Rutgers University-New Brunswick, New Brunswick, USA
- Rahul Shome
- Department of Computer Science, Rice University, Houston, USA

26
Time-course change in attentional resource allocation during a spot-the-difference task: investigation using an eye fixation-related brain potential. Curr Psychol 2021. [DOI: 10.1007/s12144-021-01623-9]
27
Thompson C, Pasquini A, Hills PJ. Carry-over of attentional settings between distinct tasks: A transient effect independent of top-down contextual biases. Conscious Cogn 2021; 90:103104. [PMID: 33662677] [DOI: 10.1016/j.concog.2021.103104]
Abstract
Top-down attentional settings can persist between two unrelated tasks, influencing visual attention and performance. This study investigated whether top-down contextual information in a second task could moderate this "attentional inertia" effect. Forty participants searched through letter strings arranged horizontally, vertically, or randomly and then made a judgement about road, nature, or fractal images. Eye movements were recorded during the picture search, and findings showed greater horizontal search in the pictures following horizontal letter strings and narrower horizontal search following vertical letter strings, but only in the first 1000 ms. This shows a brief persistence of attentional settings, consistent with past findings. Crucially, attentional inertia did not vary according to image type. This indicates that top-down contextual biases within a scene have limited impact on the persistence of previously relevant, but now irrelevant, attentional settings.
Affiliation(s)
- Alessia Pasquini
- School of Health and Society, University of Salford, Salford, UK
- Peter J Hills
- Department of Psychology, Bournemouth University, Poole, UK

28
Rieger T, Heilmann L, Manzey D. Visual search behavior and performance in luggage screening: effects of time pressure, automation aid, and target expectancy. Cogn Res Princ Implic 2021; 6:12. [PMID: 33630179] [PMCID: PMC7907401] [DOI: 10.1186/s41235-021-00280-7]
Abstract
Visual inspection of luggage using X-ray technology at airports is a time-sensitive task that is often supported by automated systems to increase performance and reduce workload. The present study evaluated how time pressure and automation support influence visual search behavior and performance in a simulated luggage screening task. Moreover, we also investigated how target expectancy (i.e., targets appearing in a target-often location or not) influenced performance and visual search behavior. We used a paradigm where participants used the mouse to uncover a portion of the screen, which allowed us to track how much of the stimulus participants uncovered prior to their decision. Participants were randomly assigned to either a high (5-s time per trial) or a low (10-s time per trial) time-pressure condition. In half of the trials, participants were supported by an automated diagnostic aid (85% reliability) in deciding whether a threat item was present. Moreover, within each half, in target-present trials, targets appeared in a predictable location (i.e., 70% of targets appeared in the same quadrant of the image) to investigate effects of target expectancy. The results revealed better detection performance with low time pressure and faster response times with high time pressure. There was an overall negative effect of automation support because the automation was only moderately reliable. Participants also uncovered a smaller amount of the stimulus under high time pressure in target-absent trials. Target location expectancy improved accuracy, speed, and the amount of uncovered space needed for the search.
Significance statement: Luggage screening is a safety-critical real-world visual search task that often has to be done under time pressure. The present research found that time pressure compromises performance and increases the risk of missing critical items even with automation support. Moreover, even highly reliable automated support may not improve performance if it does not exceed the manual capabilities of the human screener. Lastly, the present research also showed that heuristic search strategies (e.g., favoring areas where targets appear more often) also seem to guide attention in luggage screening.
Affiliation(s)
- Tobias Rieger
- Department of Psychology and Ergonomics, Chair of Work, Engineering, and Organizational Psychology, F7, Technische Universität Berlin, Marchstr. 12, 10587, Berlin, Germany
- Lydia Heilmann
- Department of Psychology and Ergonomics, Chair of Work, Engineering, and Organizational Psychology, F7, Technische Universität Berlin, Marchstr. 12, 10587, Berlin, Germany
- Dietrich Manzey
- Department of Psychology and Ergonomics, Chair of Work, Engineering, and Organizational Psychology, F7, Technische Universität Berlin, Marchstr. 12, 10587, Berlin, Germany

29
Tatler BW. Searching in CCTV: effects of organisation in the multiplex. Cogn Res Princ Implic 2021; 6:11. [PMID: 33599890] [PMCID: PMC7892658] [DOI: 10.1186/s41235-021-00277-2]
Abstract
CCTV plays a prominent role in public security, health and safety. Monitoring large arrays of CCTV camera feeds is a visually and cognitively demanding task. Arranging the scenes by geographical proximity in the surveilled environment has been recommended to reduce this demand, but empirical tests of this method have failed to find any benefit. The present study tests an alternative method for arranging scenes, based on psychological principles from literature on visual search and scene perception: grouping scenes by semantic similarity. Searching for a particular scene in the array, a common task in reactive and proactive surveillance, was faster when scenes were arranged by semantic category. This effect was found only when scenes were separated by gaps for participants who were not made aware that scenes in the multiplex were grouped by semantics (Experiment 1), but irrespective of whether scenes were separated by gaps or not for participants who were made aware of this grouping (Experiment 2). When target frequency varied between scene categories, mirroring unequal distributions of crime over space, the benefit of organising scenes by semantic category was enhanced for scenes in the most frequently searched-for category, without any statistical evidence for a cost when searching for rarely searched-for categories (Experiment 3). The findings extend current understanding of the role of within-scene semantics in visual search, to encompass between-scene semantic relationships. Furthermore, the findings suggest that arranging scenes in the CCTV control room by semantic category is likely to assist operators in finding specific scenes during surveillance.
Affiliation(s)
- Benjamin W Tatler
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, Scotland, UK.
|
30
|
Pollmann S, Rosenblum L, Linnhoff S, Porracin E, Geringswald F, Herbik A, Renner K, Hoffmann MB. Preserved Contextual Cueing in Realistic Scenes in Patients with Age-Related Macular Degeneration. Brain Sci 2020; 10:941. [PMID: 33297319] [PMCID: PMC7762266] [DOI: 10.3390/brainsci10120941]
Abstract
Foveal vision loss has been shown to reduce efficient visual search guidance due to contextual cueing by incidentally learned contexts. However, previous studies used artificial (T- among L-shape) search paradigms that prevent the memorization of a target in a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes. In contrast to the contextual cueing deficits in artificial scenes, contextual cueing in patients with age-related macular degeneration (AMD) did not differ from age-matched normal-sighted controls. We discuss this in the context of visuospatial working-memory demands for which both eye movement control in the presence of central vision loss and memory-guided search may compete. Memory-guided search in semantically rich scenes may depend less on visuospatial working memory than search in abstract displays, potentially explaining intact contextual cueing in the former but not the latter. In a practical sense, our findings may indicate that patients with AMD are less deficient than expected after previous lab experiments. This shows the usefulness of realistic stimuli in experimental clinical research.
Affiliation(s)
- Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing 100048, China
- Lisa Rosenblum
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Stefanie Linnhoff
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Eleonora Porracin
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Franziska Geringswald
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Laboratoire de Neurosciences Cognitives UMR 7291, Aix-Marseille Université & CNRS, 13331 Marseille, France
- Anne Herbik
- Department of Ophthalmology, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Katja Renner
- Eye Clinic Am Johannisplatz, 04103 Leipzig, Germany
- Michael B. Hoffmann
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Department of Ophthalmology, Otto-von-Guericke-University, 39016 Magdeburg, Germany
|
31
|
Visual statistical learning in children and adults: evidence from probability cueing. Psychological Research 2020; 85:2911-2921. [PMID: 33170355] [DOI: 10.1007/s00426-020-01445-7]
Abstract
In visual statistical learning (VSL), one can extract and exhibit memory for the statistical regularities of target locations in an incidental manner. The current study examined the development of VSL using the probability cueing paradigm with salient perceptual cues. We also investigated the attention gradient phenomenon elicited in VSL. In a visual search task, the target first appeared more often in one quadrant (i.e., rich) than in the other quadrants (i.e., sparse). Then, the participants rotated the screen by 90° and the targets appeared in the four quadrants with equal probabilities. Each quadrant had a unique background color and was, hence, associated with salient perceptual cues. 1st-4th graders and adults participated. All participants showed probability cueing effects to a similar extent. We observed an attention gradient phenomenon, as all participants responded more slowly to the sparse quadrant distant from the rich quadrant than to those adjacent to it. In the testing phase, all age groups showed persistent attentional biases based on both egocentric and allocentric perspectives. These findings showed that probability cueing effects may develop early, that perceptual cues can bias attention guidance during VSL for both children and adults, and that VSL can elicit a space-based attention gradient phenomenon for children and adults.
|
32
|
Lauer T, Willenbockel V, Maffongelli L, Võ MLH. The influence of scene and object orientation on the scene consistency effect. Behav Brain Res 2020; 394:112812. [DOI: 10.1016/j.bbr.2020.112812]
|
33
|
|
34
|
Pollmann S, Geringswald F, Wei P, Porracin E. Intact Contextual Cueing for Search in Realistic Scenes with Simulated Central or Peripheral Vision Loss. Transl Vis Sci Technol 2020; 9:15. [PMID: 32855862] [PMCID: PMC7422911] [DOI: 10.1167/tvst.9.8.15]
Abstract
Purpose Search in repeatedly presented visual search displays can benefit from implicit learning of the display items' spatial configuration. This effect has been named contextual cueing. Previously, contextual cueing was found to be reduced in observers with foveal or peripheral vision loss. Whereas this previous work used symbolic (T among L-shape) search displays with arbitrary configurations, here we investigated search in realistic scenes. Search in meaningful realistic scenes may benefit much more from explicit memory of the target location. We hypothesized that this explicit recall of the target location reduces visuospatial working memory demands on search considerably, thereby enabling efficient search guidance by learnt contextual cues in observers with vision loss. Methods Two experiments with gaze-contingent scotoma simulation (Experiment 1: central scotoma, Experiment 2: peripheral scotoma) were carried out with normal-sighted observers (total n = 39/40). Observers had to find a cup in pseudorealistic indoor scenes and discriminate the direction of the cup's handle. Results With both central and peripheral scotoma simulation, contextual cueing was observed in repeatedly presented configurations. Conclusions The data show that patients suffering from central or peripheral vision loss may benefit more from memory-guided visual search than would be expected from scotoma simulation and patient studies using abstract symbolic search displays. Translational Relevance In the assessment of visual search in patients with vision loss, semantically meaningless abstract search displays may yield insights into deficient search functions, but more realistic meaningful search scenes are needed to assess whether search deficits can be compensated.
Affiliation(s)
- Stefan Pollmann
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
- Ping Wei
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Eleonora Porracin
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
|
35
|
Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760] [DOI: 10.1016/j.cognition.2019.104147]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
|
36
|
Williams CC. Looking for your keys: The interaction of attention, memory, and eye movements in visual search. Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.06.003]
|
37
|
Nishimura N, Uchimura M, Kitazawa S. Automatic encoding of a target position relative to a natural scene. J Neurophysiol 2019; 122:1849-1860. [PMID: 31509471] [DOI: 10.1152/jn.00032.2018]
Abstract
We previously showed that the brain automatically represents a target position for reaching relative to a large square in the background. In the present study, we tested whether a natural scene with many complex details serves as an effective background for representing a target. In the first experiment, we used upright and inverted pictures of a natural scene. A shift of pictures significantly attenuated prism adaptation of reaching movements as long as they were upright. In one-third of participants, adaptation was almost completely cancelled whether the pictures were upright or inverted. Remarkably, there were two distinct groups of participants: one that relied fully on the allocentric coordinates regardless of scene orientation, and another that did so only when the scene was upright. In the second experiment, we examined how long it takes for a novel upright scene to serve as a background. A shift of the novel scene had no significant effects when it was presented for 500 ms before presenting a target, but significant effects were recovered when presented for 1,500 ms. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.NEW & NOTEWORTHY Prism adaptation of reaching was attenuated by a shift of natural scenes as long as they were upright. In one-third of participants, adaptation was fully cancelled whether the scene was upright or inverted. When an upright scene was novel, it took 1,500 ms to prepare the scene for allocentric coding. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.
Affiliation(s)
- Nobuyuki Nishimura
- Department of Anesthesiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Motoaki Uchimura
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Shigeru Kitazawa
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
|
38
|
Exploring the effect of context and expertise on attention: is attention shifted by information in medical images? Atten Percept Psychophys 2019; 81:1283-1296. [PMID: 30825115] [PMCID: PMC6647457] [DOI: 10.3758/s13414-019-01695-7]
Abstract
Radiologists make critical decisions based on searching and interpreting medical images. The probability of a lung nodule differs across anatomical regions within the chest, raising the possibility that radiologists might have a prior expectation that creates an attentional bias. The development of expertise is also thought to cause “tuning” to relevant features, allowing radiologists to become faster and more accurate at detecting potential masses within their domain of expertise. Here, we tested both radiologists and control participants with a novel attentional-cueing paradigm to investigate whether the deployment of attention was affected (1) by a context that might invoke prior knowledge for experts, (2) by a nodule localized either on the same or on opposite sides as a subsequent target, and (3) by inversion of the nodule-present chest radiographs, to assess the orientation specificity of any effects. The participants also performed a nodule detection task to verify that our presentation duration was sufficient to extract diagnostic information. We saw no evidence of priors triggered by a normal chest radiograph cue affecting attention. When the cue was an upright abnormal chest radiograph, radiologists were faster when the lateralised nodule and the subsequent target appeared at the same rather than at opposite locations, suggesting attention was captured by the nodule. The opposite pattern was present for inverted images. We saw no evidence of cueing for control participants in any condition, which suggests that radiologists are indeed more sensitive to visual features that are not perceived as salient by naïve observers.
|
39
|
Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510] [PMCID: PMC6873797] [DOI: 10.1016/j.neuron.2019.08.030]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold: to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
|
40
|
Jiang YV, Sisk CA, Toh YN. Implicit guidance of attention in contextual cueing: Neuropsychological and developmental evidence. Neurosci Biobehav Rev 2019; 105:115-125. [DOI: 10.1016/j.neubiorev.2019.07.002]
|
41
|
Schmidt A, Geringswald F, Pollmann S. Spatial Contextual Cueing, Assessed in a Computerized Task, Is Not a Limiting Factor for Expert Performance in the Domain of Team Sports or Action Video Game Playing. Journal of Cognitive Enhancement 2019. [DOI: 10.1007/s41465-018-0096-x]
|
42
|
Borges MT, Fernandes EG, Coco MI. Age-related differences during visual search: the role of contextual expectations and cognitive control mechanisms. Aging, Neuropsychology, and Cognition 2019; 27:489-516. [DOI: 10.1080/13825585.2019.1632256]
Affiliation(s)
- Miguel T. Borges
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Moreno I. Coco
- School of Psychology, University of East London, London, United Kingdom
|
43
|
Williams CC, Castelhano MS. The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes. Vision (Basel) 2019; 3:E33. [PMID: 31735834] [PMCID: PMC6802790] [DOI: 10.3390/vision3030033]
Abstract
The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review will focus on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although typically addressed as separate issues, we argue that these distinctions are now holding back research progress. Instead, it is time to examine how these seemingly separate influences intersect and interact, to more completely understand what eye movements can tell us about scene processing.
Affiliation(s)
- Carrick C. Williams
- Department of Psychology, California State University San Marcos, San Marcos, CA 92069, USA
|
44
|
Meyer T, Quaedflieg CW, Bisby JA, Smeets T. Acute stress – but not aversive scene content – impairs spatial configuration learning. Cogn Emot 2019; 34:201-216. [DOI: 10.1080/02699931.2019.1604320]
Affiliation(s)
- Thomas Meyer
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Research Department of Clinical and Health Psychology, University College London, London, UK
- Psychology and Psychotherapy, University of Münster, Münster, Germany
- James A. Bisby
- Institute of Cognitive Neuroscience, University College London, London, UK
- Tom Smeets
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical and Clinical Psychology, Center of Research on Psychological and Somatic Disorders (CoRPS), Tilburg University, Tilburg, The Netherlands
|
45
|
Huestegge L, Herbort O, Gosch N, Kunde W, Pieczykolan A. Free-choice saccades and their underlying determinants: Explorations of high-level voluntary oculomotor control. J Vis 2019; 19:14. [PMID: 30924842] [DOI: 10.1167/19.3.14]
Abstract
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has yet only been examined in the manual and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite to configure oculomotor commands.
Affiliation(s)
- Nora Gosch
- Würzburg University, Würzburg, Germany
- Technische Universität Braunschweig, Braunschweig, Germany
- Aleks Pieczykolan
- Würzburg University, Würzburg, Germany
- Human Technology Center, RWTH Aachen University, Aachen, Germany
|
46
|
Ramey MM, Yonelinas AP, Henderson JM. Conscious and unconscious memory differentially impact attention: Eye movements, visual search, and recognition processes. Cognition 2019; 185:71-82. [PMID: 30665071] [DOI: 10.1016/j.cognition.2019.01.007]
Abstract
A hotly debated question is whether memory influences attention through conscious or unconscious processes. To address this controversy, we measured eye movements while participants searched repeated real-world scenes for embedded targets, and we assessed memory for each scene using confidence-based methods to isolate different states of subjective memory awareness. We found that memory-informed eye movements during visual search were predicted both by conscious recollection, which led to a highly precise first eye movement toward the remembered location, and by unconscious memory, which increased search efficiency by gradually directing the eyes toward the target throughout the search trial. In contrast, these eye movement measures were not influenced by familiarity-based memory (i.e., changes in subjective reports of memory strength). The results indicate that conscious recollection and unconscious memory can each play distinct and complementary roles in guiding attention to facilitate efficient extraction of visual information.
Affiliation(s)
- Michelle M Ramey
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
- John M Henderson
- Department of Psychology, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
|
47
|
Kahana-Levy N, Shavitzky-Golkin S, Borowsky A, Vakil E. The effects of repetitive presentation of specific hazards on eye movements in hazard perception training, of experienced and young-inexperienced drivers. Accid Anal Prev 2019; 122:255-267. [PMID: 30391702] [DOI: 10.1016/j.aap.2018.09.033]
Abstract
Recent evidence shows that, compared to experienced drivers, young-inexperienced drivers are more likely to be involved in a crash, mainly due to their poor hazard perception (HP) abilities. This skill develops with experience and may be improved through training. We assumed that, like any other skill, HP develops through implicit learning. Nevertheless, current training methods rely on deliberate learning, in which young-inexperienced drivers are instructed which hazards they should seek and where those hazards might be located. In this exploratory study, we investigated the effectiveness of a novel training procedure in which learners were repeatedly exposed to target video clips of driving scenarios embedded within filler scenarios. Each of the target videos included a scenario with either a visible hazard, a hidden materialized hazard, or a hidden unmaterialized hazard. Twenty-three young-inexperienced drivers and 35 experienced drivers participated in a training session followed by a learning-transference testing session, and 24 additional young-inexperienced drivers participated only in the transference testing session, with no training, during which participants were shown novel hazard video clips. Participants responded by pressing a button when they identified a hazard. Eye movements were also tracked, using fixation patterns as a proxy for HP performance. During training, young-inexperienced drivers gradually increased their focus on visible materialized hazards but exhibited no learning curve with respect to hidden hazards. During the learning-transference session, both trained groups focused on hazards earlier than untrained drivers. These results imply that repetitive training may facilitate HP acquisition among young-inexperienced drivers. Patterns concerning experienced drivers are also discussed.
Affiliation(s)
- Avinoam Borowsky
- Ben-Gurion University of the Negev, Department of Industrial Engineering and Management, Beer-Sheva, Israel
- Eli Vakil
- Psychology Department, Bar-Ilan University, Ramat-Gan, Israel
|
48
|
Abstract
Research shows that emotional stimuli can capture attention, and this can benefit or impair performance, depending on the characteristics of a task. Additionally, whilst some findings show that attention expands under positive conditions, others show that emotion has no influence on the broadening of attention. The current study investigated whether emotional real-world scenes influence attention in a visual search task. Participants were asked to identify a target letter embedded in the centre or periphery of emotional images. Identification accuracy was lower in positive images compared to neutral images, and response times were slower in negative images. This suggests that real-world emotional stimuli have a distracting effect on visual attention and search. There was no evidence that emotional images influenced the spatial spread of attention. Instead, it is suggested that findings may provide support for the argument that positive emotion encourages a global processing style and negative emotion promotes local processing.
|
49
|
Anderson BA, Kim H. On the representational nature of value-driven spatial attentional biases. J Neurophysiol 2018; 120:2654-2658. [PMID: 30303748] [DOI: 10.1152/jn.00489.2018]
Abstract
Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner. NEW & NOTEWORTHY The representational nature of space in the value-driven control of attention remains unclear. Here, we provide evidence for scene-specific overt spatial attentional biases following reinforcement learning, even though the scenes contained no objects. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.
Affiliation(s)
- Haena Kim
- Texas A&M University, College Station, Texas
|
50
|
Anderson BA, Kim H. Mechanisms of value-learning in the guidance of spatial attention. Cognition 2018; 178:26-36. [DOI: 10.1016/j.cognition.2018.05.005]
|