51
Tummeltshammer K, Amso D. Top-down contextual knowledge guides visual attention in infancy. Dev Sci 2018; 21:e12599. [PMID: 29071811; PMCID: PMC5920787; DOI: 10.1111/desc.12599]
Abstract
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search.
Affiliation(s)
- Kristen Tummeltshammer, Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Dima Amso, Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
52
Dodge S, Karam L. Visual Saliency Prediction Using a Mixture of Deep Neural Networks. IEEE Trans Image Process 2018; 27:4080-4090. [PMID: 29993885; DOI: 10.1109/tip.2018.2834826]
Abstract
Visual saliency models have recently begun to incorporate deep learning to achieve predictive capacity much greater than previous unsupervised methods. However, most existing models predict saliency without explicit knowledge of global scene semantic information. We propose a model (MxSalNet) that incorporates global scene semantic information in addition to local information gathered by a convolutional neural network. Our model is formulated as a mixture of experts. Each expert network is trained to predict saliency for a set of closely related images. The final saliency map is computed as a weighted mixture of the expert networks' output, with weights determined by a separate gating network. This gating network is guided by global scene information to predict weights. The expert networks and the gating network are trained simultaneously in an end-to-end manner. We show that our mixture formulation leads to improvement in performance over an otherwise identical non-mixture model that does not incorporate global scene information. Additionally, we show that our model achieves better performance than several other visual saliency models.
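The mixture formulation described here reduces to a gate-weighted sum of expert outputs. The sketch below is a minimal NumPy illustration of that combination step only, not the published MxSalNet code; the expert maps and gate logits stand in for CNN outputs, which in the paper are learned end-to-end.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of gate scores."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_saliency(expert_maps, gate_logits):
    """Gate-weighted mixture of per-expert saliency maps.

    expert_maps: (K, H, W) array, one saliency map per expert network.
    gate_logits: (K,) array of scores from the gating network, which in
                 the paper is driven by global scene information.
    Returns the (H, W) final saliency map.
    """
    weights = softmax(gate_logits)                # mixture weights, sum to 1
    return np.tensordot(weights, expert_maps, axes=1)

# Toy example: 3 experts on 4x4 maps, with the gate favouring expert 0.
rng = np.random.default_rng(0)
maps = rng.random((3, 4, 4))
final_map = mixture_saliency(maps, np.array([2.0, 0.1, -1.0]))
print(final_map.shape)  # (4, 4)
```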
53
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. [PMID: 29427122; DOI: 10.3758/s13414-018-1486-y]
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
54
Aldi GA, Lange I, Gigli C, Goossens L, Schruers KR, Cosci F. Validation of the Mnemonic Similarity Task - Context Version. Braz J Psychiatry 2018; 40:432-440. [PMID: 29412339; PMCID: PMC6899373; DOI: 10.1590/1516-4446-2017-2379]
Abstract
Objective: Pattern separation (PS) is the ability to represent similar experiences as separate, non-overlapping representations. It is usually assessed via the Mnemonic Similarity Task – Object Version (MST-O), which, however, assesses PS performance without taking behavioral context discrimination into account, since it is based on pictures of everyday simple objects on a white background. Here we present a validation study for a new task, the Mnemonic Similarity Task – Context Version (MST-C), which is designed to measure PS while taking behavioral context discrimination into account by using real-life context photographs. Methods: Fifty healthy subjects underwent the two MST tasks to assess convergent evidence. Instruments assessing memory and attention were also administered to study discriminant evidence. The test-retest reliability of the MST-C was analyzed. Results: Weak evidence supports convergent validity between the MST-C and the MST-O as measures of PS (rs = 0.464; p < 0.01); PS performance assessed via the MST-C did not correlate with memory or attention; moderate test-retest reliability was found (rs = 0.595; p < 0.01). Conclusion: The MST-C seems useful for assessing PS performance conceptualized as the ability to discriminate complex and realistic spatial contexts. Future studies should evaluate the validity of the MST-C as a measure of PS in clinical populations.
Affiliation(s)
- Giulia A Aldi, Dipartimento di Scienze della Salute, Università di Firenze, Firenze, Italy
- Iris Lange, Department of Psychiatry and Neuropsychology, Maastricht University, Maastricht, The Netherlands
- Cristiana Gigli, Dipartimento di Scienze della Salute, Università di Firenze, Firenze, Italy
- Lies Goossens, Department of Psychiatry and Neuropsychology, Maastricht University, Maastricht, The Netherlands
- Koen R Schruers, Department of Psychiatry and Neuropsychology, Maastricht University, Maastricht, The Netherlands
- Fiammetta Cosci, Dipartimento di Scienze della Salute, Università di Firenze, Firenze, Italy
55
Brockmole JR, Henderson JM. Short Article: Recognition and Attention Guidance during Contextual Cueing in Real-World Scenes: Evidence from Eye Movements. Q J Exp Psychol (Hove) 2006; 59:1177-87. [PMID: 16769618; DOI: 10.1080/17470210600665996]
Abstract
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.
56
Affiliation(s)
- Miguel P. Eckstein, Department of Psychological and Brain Sciences, University of California, Santa Barbara, California 93106-9660
57
Meaning in learning: Contextual cueing relies on objects' visual features and not on objects' meaning. Mem Cognit 2017; 46:58-67. [PMID: 28770539; DOI: 10.3758/s13421-017-0745-9]
Abstract
People easily learn regularities embedded in the environment and utilize them to facilitate visual search. Using images of real-world objects, it has recently been shown that this learning, termed contextual cueing (CC), occurs even in complex, heterogeneous environments, but only when the same distractors are repeated at the same locations. Yet it is not clear what exactly is being learned under these conditions: the visual features of the objects or their meaning. In this study, Experiment 1 demonstrated that meaning is not necessary for this type of learning, as a similar pattern of results was found even when the objects' meaning was largely removed. Experiments 2 and 3 showed that after learning meaningful objects, CC was not diminished by a manipulation that distorted the objects' meaning but preserved most of their visual properties. By contrast, CC was eliminated when the learned objects were replaced with different category exemplars that preserved the objects' meaning but altered their visual properties. Together, these data strongly suggest that the acquired context that facilitates real-world object search relies primarily on the visual properties and the spatial locations of the objects, but not on their meaning.
58
Alards-Tomalin D, Brosowsky NP, Mondor TA. Auditory statistical learning: predictive frequency information affects the deployment of contextually mediated attentional resources on perceptual tasks. J Cogn Psychol 2017. [DOI: 10.1080/20445911.2017.1353518]
Affiliation(s)
- Nicholaus P. Brosowsky, Department of Psychology, The Graduate Center of the City University of New York, New York, NY, USA
- Todd A. Mondor, Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
59
Abstract
How do we find what we are looking for? Fundamental limits on visual processing mean that even when the desired target is in our field of view, we often need to search, because it is impossible to recognize everything at once. Searching involves directing attention to objects that might be the target. This deployment of attention is not random. It is guided to the most promising items and locations by the five factors discussed here: bottom-up salience, top-down feature guidance, scene structure and meaning, the previous history of search over time scales from milliseconds to years, and the relative value of the targets and distractors. Modern theories of search need to specify how all five factors combine to shape search behavior. An understanding of the rules of guidance can be used to improve the accuracy and efficiency of socially important search tasks, from security screening to medical image perception.
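The claim that the five factors "combine" is commonly modeled as a weighted sum over a shared priority map whose peak is the next attended location. The sketch below is an illustrative assumption in that spirit, not this paper's model; the map names and weights are invented.

```python
import numpy as np

def priority_map(factor_maps, weights):
    """Combine guidance maps into one priority map by weighted averaging.

    factor_maps: dict mapping a guidance source (bottom-up salience,
                 feature guidance, scene meaning, history, value) to an
                 (H, W) activation map.
    weights: dict mapping the same names to scalar weights; how these are
             set is exactly the modelling question the abstract raises.
    """
    total = sum(weights[name] * factor_maps[name] for name in factor_maps)
    return total / sum(weights.values())

rng = np.random.default_rng(1)
names = ["salience", "features", "scene", "history", "value"]
maps = {n: rng.random((8, 8)) for n in names}
weights = dict(zip(names, [1.0, 2.0, 1.5, 0.5, 0.5]))  # invented values
pmap = priority_map(maps, weights)
row, col = np.unravel_index(pmap.argmax(), pmap.shape)
print("attention deployed to location", (row, col))  # peak of the map
```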
60
Independence of long-term contextual memory and short-term perceptual hypotheses: Evidence from contextual cueing of interrupted search. Atten Percept Psychophys 2017; 79:508-521. [PMID: 27921267; PMCID: PMC5306304; DOI: 10.3758/s13414-016-1246-9]
Abstract
Observers are able to resume an interrupted search trial faster relative to responding to a new, unseen display. This finding of rapid resumption is attributed to short-term perceptual hypotheses generated on the current look and confirmed upon subsequent looks at the same display. It has been suggested that the contents of perceptual hypotheses are similar to those of other forms of memory acquired long-term through repeated exposure to the same search displays over the course of several trials, that is, the memory supporting “contextual cueing.” In three experiments, we investigated the relationship between short-term perceptual hypotheses and long-term contextual memory. The results indicated that long-term, contextual memory of repeated displays neither affected the generation nor the confirmation of short-term perceptual hypotheses for these displays. Furthermore, the analysis of eye movements suggests that long-term memory provides an initial benefit in guiding attention to the target, whereas in subsequent looks guidance is entirely based on short-term perceptual hypotheses. Overall, the results reveal a picture of both long- and short-term memory contributing to reliable performance gains in interrupted search, while exerting their effects in an independent manner.
61
Abstract
Depending on the involvement of consciousness, human learning can be roughly classified into explicit and implicit learning. In strong contrast to explicit learning, which has clear targets and rules (such as the school study of mathematics), learning is implicit when we acquire new information without intending to do so. Research from psychology indicates that implicit learning is ubiquitous in our daily life. Moreover, implicit learning plays an important role in human visual perception. But in the past 60 years, most well-known machine-learning models have aimed to simulate explicit learning, while work on modeling implicit learning has been relatively limited, especially for computer vision applications. This article proposes a novel unsupervised computational model for implicit visual learning by exploring dissipative systems, which provide a unifying macroscopic theory connecting biology with physics. We test the proposed Dissipative Implicit Learning Model (DILM) on various datasets. The experiments show that DILM not only provides a good match to human behavior but also markedly improves explicit machine-learning performance on image classification tasks.
Affiliation(s)
- Yan Liu, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yang Liu, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shenghua Zhong, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Songtao Wu, The Hong Kong Polytechnic University, Hong Kong SAR, China
62
Henderson JM. Gaze Control as Prediction. Trends Cogn Sci 2017; 21:15-23. [DOI: 10.1016/j.tics.2016.11.003]
63
The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes. Atten Percept Psychophys 2016; 79:484-497. [PMID: 27981521; DOI: 10.3758/s13414-016-1256-7]
Abstract
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
64
Beighley S, Intraub H. Does inversion affect boundary extension for briefly-presented views? Vis Cogn 2016. [DOI: 10.1080/13506285.2016.1229369]
65
Johnston KA, Scialfa CT. Hazard perception in emergency medical service responders. Accid Anal Prev 2016; 95:91-96. [PMID: 27415813; DOI: 10.1016/j.aap.2016.06.021]
Abstract
The perception of on-road hazards is critically important to emergency medical services (EMS) professionals, the patients they transport and the general public. This study compared hazard perception in EMS and civilian drivers of similar age and personal driving experience. Twenty-nine EMS professionals and 24 non-professional drivers were given a dynamic hazard perception test (HPT). The EMS group demonstrated an advantage in HPT that was independent of simple reaction time, another indication of the validity of the test. These results are also consistent with the view that professional driving experience results in changes in the ability to identify and respond to on-road hazards. Directions for future research include the development of a profession-specific hazard perception tool for both assessment and training purposes.
Affiliation(s)
- K A Johnston, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
- C T Scialfa, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
66
Abstract
The scientific community has witnessed growing concern about the high rate of false positives and unreliable results within the psychological literature, but the harmful impact of false negatives has been largely ignored. False negatives are particularly concerning in research areas where demonstrating the absence of an effect is crucial, such as studies of unconscious or implicit processing. Research on implicit processes seeks evidence of above-chance performance on some implicit behavioral measure at the same time as chance-level performance (that is, a null result) on an explicit measure of awareness. A systematic review of 73 studies of contextual cuing, a popular implicit learning paradigm, involving 181 statistical analyses of awareness tests, reveals how underpowered studies can lead to failure to reject a false null hypothesis. Among the studies that reported sufficient information, the meta-analytic effect size across awareness tests was dz = 0.31 (95 % CI 0.24–0.37), showing that participants’ learning in these experiments was conscious. The unusually large number of positive results in this literature cannot be explained by selective publication. Instead, our analyses demonstrate that these tests are typically insensitive and underpowered to detect medium to small, but true, effects in awareness tests. These findings challenge a widespread and theoretically important claim about the extent of unconscious human cognition.
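The insensitivity argument can be made concrete with a power calculation. Given the meta-analytic effect dz = 0.31, the power of a two-sided one-sample t test on an awareness measure follows from the noncentral t distribution. The SciPy sketch below uses illustrative sample sizes; the alpha level and n values are assumptions, not figures from the review.

```python
import numpy as np
from scipy import stats

def one_sample_t_power(dz, n, alpha=0.05):
    """Power of a two-sided one-sample t test for standardized effect dz."""
    df = n - 1
    nc = dz * np.sqrt(n)                       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    # P(|T| > t_crit) when T follows a noncentral t with noncentrality nc
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

for n in (16, 32, 64, 128):
    print(f"n = {n:3d}  power = {one_sample_t_power(0.31, n):.2f}")
# Small-sample awareness tests have low power to detect dz = 0.31, so
# their null results are weak evidence for unconscious learning.
```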
67
Zang X, Geyer T, Assumpção L, Müller HJ, Shi Z. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search. Front Psychol 2016; 7:852. [PMID: 27375530; PMCID: PMC4894892; DOI: 10.3389/fpsyg.2016.00852]
Abstract
Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.
Affiliation(s)
- Xuelian Zang, China Centre for Special Economic Zone Research, Research Centre of Brain Function and Psychological Science, Shenzhen University, Shenzhen, China; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany
- Thomas Geyer, General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany
- Leonardo Assumpção, General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany
- Hermann J Müller, General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany; Department of Psychological Science, Birkbeck, University of London, London, UK
- Zhuanghua Shi, General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany
68
Castelhano MS, Witherspoon RL. How You Use It Matters. Psychol Sci 2016; 27:606-21. [DOI: 10.1177/0956797616629130]
Abstract
How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object’s function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects’ functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology.
69
Wynn JS, Bone MB, Dragan MC, Hoffman KL, Buchsbaum BR, Ryan JD. Selective scanpath repetition during memory-guided visual search. Vis Cogn 2016; 24:15-37. [PMID: 27570471; PMCID: PMC4975086; DOI: 10.1080/13506285.2016.1175531]
Abstract
Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1-V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity.
Affiliation(s)
- Jordana S. Wynn, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Michael B. Bone, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Kari L. Hoffman, Department of Biology, Department of Psychology, and Centre for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Bradley R. Buchsbaum, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Jennifer D. Ryan, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
70
Thompson C, Howting L, Hills P. The transference of visual search between two unrelated tasks: Measuring the temporal characteristics of carry-over. Q J Exp Psychol (Hove) 2015; 68:2255-73. [DOI: 10.1080/17470218.2015.1013042]
Abstract
Investigations into the persistence of top-down control settings do not accurately reflect the nature of dynamic tasks. They typically involve extended practice with an initial task, and this initial task usually shares similar stimuli with a second task. Recent work shows that visual attention and search can be affected by limited exposure to a preceding, unrelated task, and the current study explored the temporal characteristics of this “carry-over” effect. Thirty-four participants completed one, four, or eight simple letter searches and then searched a natural scene. The spatial layout of letters influenced spread of search in the pictures, and this was further impacted by the time spent in the initial task, yet the carry-over effect diminished quickly. The results have implications for theories of top-down control and models that attempt to predict search in natural scenes. They are also relevant to real-world tasks in which performance is closely related to visual attention and search.
Affiliation(s)
- Peter Hills, Psychology Research Group, Bournemouth University, Poole, UK
71
Goujon A, Didierjean A, Thorpe S. Investigating implicit statistical learning mechanisms through contextual cueing. Trends Cogn Sci 2015; 19:524-33. [PMID: 26255970; DOI: 10.1016/j.tics.2015.07.009]
Abstract
Since its inception, the contextual cueing (CC) paradigm has generated considerable interest in various fields of cognitive sciences because it constitutes an elegant approach to understanding how statistical learning (SL) mechanisms can detect contextual regularities during a visual search. In this article we review and discuss five aspects of CC: (i) the implicit nature of learning, (ii) the mechanisms involved in CC, (iii) the mediating factors affecting CC, (iv) the generalization of CC phenomena, and (v) the dissociation between implicit and explicit CC phenomena. The findings suggest that implicit SL is an inherent component of ongoing processing which operates through clustering, associative, and reinforcement processes at various levels of sensory-motor processing, and might result from simple spike-timing-dependent plasticity.
Affiliation(s)
- Annabelle Goujon, Centre de Recherche Cerveau et Cognition (CerCo), Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, 31052 Toulouse, France; Laboratoire de Psychologie, Université de Franche-Comté, 25000 Besançon, France
- André Didierjean, Laboratoire de Psychologie, Université de Franche-Comté, 25000 Besançon, France; Institut Universitaire de France
- Simon Thorpe, Centre de Recherche Cerveau et Cognition (CerCo), Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, 31052 Toulouse, France
72
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ, Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
73
Olejarczyk JH, Luke SG, Henderson JM. Incidental memory for parts of scenes from eye movements. Vis Cogn 2014. [DOI: 10.1080/13506285.2014.941433]
74
Lanzoni L, Melcher D, Miceli G, Corbett JE. Global statistical regularities modulate the speed of visual search in patients with focal attentional deficits. Front Psychol 2014; 5:514. [PMID: 24971066; PMCID: PMC4053765; DOI: 10.3389/fpsyg.2014.00514]
Abstract
There is growing evidence that the statistical properties of ensembles of similar objects are processed in a qualitatively different manner than the characteristics of individual items. It has recently been proposed that these types of perceptual statistical representations are part of a strategy to complement focused attention in order to circumvent the visual system’s limited capacity to represent more than a few individual objects in detail. Previous studies have demonstrated that patients with attentional deficits are nonetheless sensitive to these sorts of statistical representations. Here, we examined how such global representations may function to aid patients in overcoming focal attentional limitations by manipulating the statistical regularity of a visual scene while patients performed a search task. Three patients previously diagnosed with visual neglect searched for a target Gabor tilted to the left or right of vertical in displays of horizontal distractor Gabors. Although the local sizes of the distractors changed on every trial, the mean size remained stable for several trials. Patients made faster correct responses to targets in neglected regions of the visual field when global statistics remained constant over several trials, similar to age-matched controls. Given neglect patients’ attentional deficits, these results suggest that stable perceptual representations of global statistics can establish a context to speed search without the need to represent individual elements in detail.
Affiliation(s)
- Lucilla Lanzoni, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Center for Neurocognitive Rehabilitation, University of Trento, Rovereto, Italy; Cognitive Neuropsychology Lab, Harvard University, Cambridge, MA, USA
- David Melcher, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Gabriele Miceli, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Center for Neurocognitive Rehabilitation, University of Trento, Rovereto, Italy
75
Rosen ML, Stern CE, Somers DC. Long-term memory guidance of visuospatial attention in a change-detection paradigm. Front Psychol 2014; 5:266. [PMID: 24744744; PMCID: PMC3978356; DOI: 10.3389/fpsyg.2014.00266]
Abstract
Visual task performance is generally stronger in familiar environments. One reason for this familiarity benefit is that we learn where to direct our visual attention and effective attentional deployment enhances performance. Visual working memory plays a central role in supporting long-term memory guidance of visuospatial attention. We modified a change detection task to create a new paradigm for investigating long-term memory guidance of attention. During the training phase, subjects viewed images in a flicker paradigm and were asked to detect between one and three changes in the images. The test phase required subjects to detect a single change in a one-shot change detection task in which they held all possible locations of changes in visual working memory and deployed attention to those locations to determine if a change occurred. Subjects detected significantly more changes in images for which they had been trained to detect the changes, demonstrating that memory of the images guided subjects in deploying their attention. Moreover, capacity to detect changes was greater for images that had multiple changes during the training phase. In Experiment 2, we observed that capacity to detect changes for the 3-studied change condition increased significantly with more study exposures and capacity was significantly higher than 1, indicating that subjects were able to attend to more than one location. Together, these findings suggest memory and attentional systems interact via working memory such that long-term memory can be used to direct visual spatial attention to multiple locations based on previous experience.
Affiliation(s)
- Maya L Rosen, Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Chantal E Stern, Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- David C Somers, Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
76
Abstract
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant's search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.
77
Wasserman EA, Teng Y, Castro L. Pigeons exhibit contextual cueing to both simple and complex backgrounds. Behav Processes 2014; 104:44-52. [PMID: 24491468; DOI: 10.1016/j.beproc.2014.01.021]
Abstract
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of this contextual cueing effect using a novel Cueing-Miscueing design. Pigeons had to peck a target which could appear in one of four possible locations on four possible color backgrounds or four possible color photographs of real-world scenes. On 80% of the trials, each of the contexts was uniquely paired with one of the target locations; on the other 20% of the trials, each of the contexts was randomly paired with the remaining target locations. Pigeons came to exhibit robust contextual cueing when the context preceded the target by 2 s, with reaction times to the target being shorter on correctly-cued trials than on incorrectly-cued trials. Contextual cueing proved to be more robust with photographic backgrounds than with uniformly colored backgrounds. In addition, during the context-target delay, pigeons predominantly pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. These findings confirm the effectiveness of animal models of contextual cueing and underscore the important part played by associative learning in producing the effect. This article is part of a Special Issue entitled: SQAB 2013: Contextual Con.
78
Kunar MA, John R, Sweetman H. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues. Q J Exp Psychol (Hove) 2013; 67:1366-82. [PMID: 24199842; DOI: 10.1080/17470218.2013.863373]
Abstract
Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.
Affiliation(s)
- Melina A Kunar, Department of Psychology, The University of Warwick, Coventry, UK
79
80
Armed and attentive: Holding a weapon can bias attentional priorities in scene viewing. Atten Percept Psychophys 2013; 75:1715-24. [PMID: 24027031; DOI: 10.3758/s13414-013-0538-6]
81
82
Chan LKH, Hayward WG. Visual search. Wiley Interdiscip Rev Cogn Sci 2013; 4:415-429. [PMID: 26304227; DOI: 10.1002/wcs.1235]
Abstract
Visual search is the act of looking for a predefined target among other objects. This task has been widely used as an experimental paradigm to study visual attention, and because of its influence has also become a subject of research itself. When used as a paradigm, visual search studies address questions including the nature, function, and limits of preattentive processing and focused attention. As a subject of research, visual search studies address the role of memory in search, the procedures involved in search, and factors that affect search performance. In this article, we review major theories of visual search, the ways in which preattentive information is used to guide attentional allocation, the role of memory, and the processes and decisions involved in its successful completion. We conclude by summarizing the current state of knowledge about visual search and highlight some unresolved issues.
Affiliation(s)
- Louis K H Chan, Psychology Unit, Hong Kong Baptist University, Shatin, Hong Kong
- William G Hayward, Department of Psychology, University of Hong Kong, Pokfulam, Hong Kong
83
Goujon A, Fagot J. Learning of spatial statistics in nonhuman primates: contextual cueing in baboons (Papio papio). Behav Brain Res 2013; 247:101-9. [PMID: 23499707; DOI: 10.1016/j.bbr.2013.03.004]
Abstract
A growing number of theories of cognition suggest that many of our behaviors result from the ability to implicitly extract and use statistical redundancies present in complex environments. In an attempt to develop an animal model of statistical learning mechanisms in humans, the current study investigated spatial contextual cueing (CC) in nonhuman primates. Twenty-five baboons (Papio papio) were trained to search for a target (T) embedded within configurations of distractors (L) that were either predictive or non-predictive of the target location. Baboons exhibited an early CC effect, which remained intact after a 6-week delay and stable across extensive training of 20,000 trials. These results demonstrate the baboons' ability to learn spatial contingencies, as well as the robustness of CC as a cognitive phenomenon across species. Nevertheless, in both the youngest and oldest baboons, CC required many more trials to emerge than in baboons of intermediate age. As a whole, these results reveal strong similarities between CC in humans and baboons, suggesting similar statistical learning mechanisms in these two species. Therefore, baboons provide a valid model to investigate how statistical learning mechanisms develop and/or age during the life span, as well as how these mechanisms are implemented in neural networks, and how they have evolved throughout the phylogeny.
84
Dziemianko M, Keller F. Memory modulated saliency: A computational model of the incremental learning of target locations in visual search. Vis Cogn 2013. [DOI: 10.1080/13506285.2013.784717]
85
Benitez VL, Smith LB. Predictable locations aid early object name learning. Cognition 2012; 125:339-52. [PMID: 22989872; PMCID: PMC3472129; DOI: 10.1016/j.cognition.2012.08.006]
Abstract
Expectancy-based localized attention has been shown to promote the formation and retrieval of multisensory memories in adults. Three experiments show that these processes also characterize attention and learning in 16- to 18-month-old infants and, moreover, that these processes may play a critical role in supporting early object name learning. The three experiments show that infants learn names for objects when those objects have predictable rather than varied locations, that infants who anticipate the location of named objects better learn those object names, and that infants integrate experiences that are separated in time but share a common location. Taken together, these results suggest that localized attention, cued attention, and spatial indexing are an inter-related set of processes in young children that aid in the early building of coherent object representations. The relevance of the experimental results and spatial attention for everyday word learning are discussed.
Affiliation(s)
- Viridiana L Benitez, Department of Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, United States
86
Abstract
It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance-e.g., knowing that a faucet is usually located near a sink-that guidance by incidental episodic memory-having seen that faucet previously-was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes.
Affiliation(s)
- Melissa L-H Võ, Visual Attention Lab, Harvard Medical School, Brigham and Women's Hospital, USA
87
88
Huebner GM, Gegenfurtner KR. Conceptual and visual features contribute to visual memory for natural images. PLoS One 2012; 7:e37575. [PMID: 22719842; PMCID: PMC3374796; DOI: 10.1371/journal.pone.0037575]
Abstract
We examined the role of conceptual and visual similarity in a memory task for natural images. The important novelty of our approach was that visual similarity was determined using an algorithm [1] instead of being judged subjectively. This similarity index takes colours and spatial frequencies into account. For each target, four distractors were selected that were (1) conceptually and visually similar, (2) only conceptually similar, (3) only visually similar, or (4) neither conceptually nor visually similar to the target image. Participants viewed 219 images with the instruction to memorize them. Memory for a subset of these images was tested subsequently. In Experiment 1, participants performed a two-alternative forced choice recognition task and in Experiment 2, a yes/no-recognition task. In Experiment 3, testing occurred after a delay of one week. We analyzed the distribution of errors depending on distractor type. Performance was lowest when the distractor image was conceptually and visually similar to the target image, indicating that both factors matter in such a memory task. After delayed testing, these differences disappeared. Overall performance was high, indicating a large-capacity, detailed visual long-term memory.
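The similarity algorithm itself is only cited (reference [1] of the paper). As a rough illustration of an index built from colours and spatial frequencies, one could combine colour-histogram intersection with correlation of amplitude spectra; the sketch below is an assumed toy formulation, not the published algorithm.

```python
import numpy as np

def similarity_index(img_a, img_b, bins=8, w_color=0.5):
    """Toy visual-similarity score in [0, 1] from colour and spatial frequency.

    img_a, img_b: float RGB arrays of shape (H, W, 3) with values in [0, 1].
    Colour term: histogram intersection of joint RGB histograms.
    Frequency term: correlation of log amplitude spectra of the grey images.
    The 50/50 weighting is arbitrary.
    """
    ha, _ = np.histogramdd(img_a.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    hb, _ = np.histogramdd(img_b.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    color = np.minimum(ha / ha.sum(), hb / hb.sum()).sum()  # in [0, 1]

    def log_amp(img):
        # Log amplitude spectrum of the greyscale image.
        return np.log1p(np.abs(np.fft.fft2(img.mean(axis=2))))

    freq = np.corrcoef(log_amp(img_a).ravel(), log_amp(img_b).ravel())[0, 1]
    return w_color * color + (1 - w_color) * (freq + 1) / 2  # rescale corr

rng = np.random.default_rng(2)
a = rng.random((32, 32, 3))
print(similarity_index(a, a))  # identical images score 1.0
```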
Affiliation(s)
- Gesche M Huebner, Department of Psychology, Justus-Liebig-University of Giessen, Giessen, Germany
89
Alexander RG, Zelinsky GJ. Effects of part-based similarity on visual search: the Frankenbear experiment. Vision Res 2012; 54:20-30. [PMID: 22227607; DOI: 10.1016/j.visres.2011.12.004]
Abstract
Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogeneous and homogeneous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation.
Affiliation(s)
- Robert G Alexander, Department of Psychology, Stony Brook University, Stony Brook, NY 11794-2500, USA
90
Brockmole JR, Davoli CC, Cronin DA. The Visual World in Sight and Mind. Psychol Learn Motiv 2012. [DOI: 10.1016/b978-0-12-394293-7.00003-0]
91
Hollingworth A. Guidance of visual search by memory and knowledge. Nebr Symp Motiv 2012; 59:63-89. [PMID: 23437630; DOI: 10.1007/978-1-4614-4794-8_4]
Abstract
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
Collapse
|
92
|
Wolfe JM, Alvarez GA, Rosenholtz R, Kuzmova YI, Sherman AM. Visual search for arbitrary objects in real scenes. Atten Percept Psychophys 2011; 73:1650-71.
Abstract
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
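As a worked illustration of this efficiency measure, the sketch below fits the RT × Set Size function by least squares and reports its slope in ms/item; the region counts and RTs are invented numbers, with labelled-region count standing in for set size as above:

```python
# Hedged sketch: search efficiency as the slope of the RT x Set Size function,
# estimated by least squares. All data points are invented for illustration.
import numpy as np

set_size = np.array([12, 18, 25, 33, 41, 52])        # labelled regions per scene
rt_ms    = np.array([620, 655, 690, 730, 770, 825])  # mean correct RTs (ms)

slope, intercept = np.polyfit(set_size, rt_ms, 1)
print(f"search efficiency: {slope:.1f} ms/item, baseline RT: {intercept:.0f} ms")
# ~5 ms/item counts as very efficient search; ~40 ms/item as inefficient.
```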
Collapse
|
93
|
Cheung OS, Bar M. Visual prediction and perceptual expertise. Int J Psychophysiol 2011; 83:156-63. [PMID: 22123523 DOI: 10.1016/j.ijpsycho.2011.11.002] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2011] [Revised: 10/10/2011] [Accepted: 11/03/2011] [Indexed: 11/23/2022]
Abstract
Making accurate predictions about what may happen in the environment requires analogies between perceptual input and associations in memory. These elements of predictions are based on cortical representations, but little is known about how these processes can be enhanced by experience and training. On the other hand, studies on perceptual expertise have revealed that the acquisition of expertise leads to strengthened associative processing among features or objects, suggesting that predictions and expertise may be tightly connected. Here we review the behavioral and neural findings regarding the mechanisms involving prediction and expert processing, and highlight important possible overlaps between them. Future investigation should examine the relations among perception, memory and prediction skills as a function of expertise. The knowledge gained by this line of research will have implications for visual cognition research, and will advance our understanding of how the human brain can improve its ability to predict by learning from experience.
Collapse
|
94
|
|
95
|
Kunar MA, Wolfe JM. Target absent trials in contextual cuing. Atten Percept Psychophys 2011; 73:2077-91.
Abstract
In contextual cuing (CC), reaction times for finding targets are faster in repeated displays than in displays that have never been seen before. This has been demonstrated using target-distractor configurations, global background colors, naturalistic scenes, and covariation of targets with distractors. The majority of CC studies have used displays in which the target is always present. This study investigated what happens when the target is sometimes absent. Experiment 1 showed that, although configural CC occurs in displays when the target is always present, there is no CC when the target is always absent. Experiment 2 showed that there is no CC when the same spatial layout can be both target present and target absent on different trials. The presence of distractors in locations that had contained targets on other trials appeared to interfere with CC, and even disrupted the expression of CC in previously learned contexts (Exps. 3-5). These results show that target-distractor associations are the important element in producing CC and that, consistent with a response selection account, changing the response type from an orientation task to a detection task removes the CC effect.
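A minimal sketch of how a CC effect of this kind is commonly quantified, assuming per-participant mean RTs for repeated and novel displays (the numbers below are invented): the benefit is the novel-minus-repeated RT difference, and the null effects reported above would correspond to differences near zero.

```python
# Sketch: quantifying a contextual-cuing (CC) effect as the per-participant
# difference between mean RTs on novel and repeated displays. Invented data.
import numpy as np

rt_repeated = np.array([712, 685, 730, 698])  # per-participant mean RTs (ms)
rt_novel    = np.array([768, 741, 779, 755])

cc_effect = rt_novel - rt_repeated            # positive values = cuing benefit
print(f"mean CC effect: {cc_effect.mean():.0f} ms")
```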
Collapse
Affiliation(s)
- Melina A Kunar
- Department of Psychology, University of Warwick, Coventry CV4 7AL, UK.
| | | |
Collapse
|
96
|
Neider MB, Kramer AF. Older Adults Capitalize on Contextual Information to Guide Search. Exp Aging Res 2011; 37:539-71. [DOI: 10.1080/0361073x.2011.619864] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
97
|
Smith TJ, Henderson JM. Does oculomotor inhibition of return influence fixation probability during scene search? Atten Percept Psychophys 2011; 73:2384-98. [DOI: 10.3758/s13414-011-0191-x] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
98
|
Madhavan P, Lacson FC, Gonzalez C, Brennan PC. The Role of Incentive Framing on Training and Transfer of Learning in a Visual Threat Detection Task. APPLIED COGNITIVE PSYCHOLOGY 2011. [DOI: 10.1002/acp.1807] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
99
|
Goujon A. Categorical implicit learning in real-world scenes: Evidence from contextual cueing. Q J Exp Psychol (Hove) 2011; 64:920-41. [PMID: 21161855 DOI: 10.1080/17470218.2010.526231] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category either did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition relative to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to be categorized differed from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the search (Experiment 4). These results suggest that although the present material required enhanced processing of the scene, implicit semantic learning can nevertheless take place when the category itself is task-irrelevant.
Collapse
Affiliation(s)
- Annabelle Goujon
- Laboratoire de Psychologie Cognitive-CNRS, and Université de Provence, Marseille, France
| |
Collapse
|
100
|
Thompson C, Crundall D. Scanning Behaviour in Natural Scenes is Influenced by a Preceding Unrelated Visual Search Task. Perception 2011; 40:1335-49. [DOI: 10.1068/p6848] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Three experiments explored the transfer of visual scanning behaviour between two unrelated tasks. Participants first viewed letters presented horizontally, vertically, or as a random array. They then viewed still images (experiments 1 and 2) or video clips (experiment 3) of driving scenes, under varying task conditions. Despite having no relevance to the driving images, the layout of stimuli in the letter task influenced scanning behaviour in this subsequent task: in the still images, a vertical letter search increased vertical scanning, and in the dynamic clips, a horizontal letter search decreased vertical scanning. This indicates that (i) models of scanning behaviour should account for the influence of a preceding unrelated task; (ii) carry-over is modulated by demand in the current task; and (iii) in situations where particular scanning strategies are important for primary-task performance (e.g. driving safety), secondary-task information should be displayed in a manner likely to produce a congruent scanning strategy.
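One plausible operationalisation of "vertical scanning" (an assumption for illustration, not the authors' published measure) is the proportion of saccades whose vertical displacement exceeds their horizontal displacement, computed from successive fixation coordinates:

```python
# Sketch (assumed measure): proportion of predominantly vertical saccades,
# derived from a sequence of fixation (x, y) coordinates. Invented data.
import numpy as np

fixations = np.array([[512, 300], [530, 420], [525, 180], [700, 190], [705, 330]])
dx = np.diff(fixations[:, 0])                 # horizontal displacement per saccade
dy = np.diff(fixations[:, 1])                 # vertical displacement per saccade
vertical = np.abs(dy) > np.abs(dx)            # True where a saccade is mostly vertical
print(f"proportion vertical saccades: {vertical.mean():.2f}")
```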
Collapse
Affiliation(s)
| | - David Crundall
- School of Health Sciences, University of Salford, Salford M5 4WT, UK
| |
Collapse
|