51
Sharp-Wave Ripples in Primates Are Enhanced near Remembered Visual Objects. Curr Biol 2016; 27:257-262. [PMID: 28041797 DOI: 10.1016/j.cub.2016.11.027]
Abstract
The hippocampus plays an important role in memory for events that are distinct in space and time. One of the strongest, most synchronous neural signals produced by the hippocampus is the sharp-wave ripple (SWR), observed in a variety of mammalian species during offline behaviors, such as slow-wave sleep [1-3] and quiescent waking and pauses in exploration [4-8], leading to long-standing and widespread theories of its contribution to plasticity and memory during these inactive or immobile states [9-14]. Indeed, during sleep and waking inactivity, hippocampal SWRs in rodents appear to support spatial long-term and working memory [4, 15-23], but so far, they have not been linked to memory in primates. More recently, SWRs have been observed during active, visual scene exploration in macaques [24], opening up the possibility that these active-state ripples in the primate hippocampus are linked to memory for objects embedded in scenes. By measuring hippocampal SWRs in macaques during search for scene-contextualized objects, we found that SWR rate increased with repeated presentations. Furthermore, gaze during SWRs was more likely to be near the target object on repeated than on novel presentations, even after accounting for overall differences in gaze location with scene repetition. This proximity bias with repetition occurred near the time of target object detection for remembered targets. The increase in ripple likelihood near remembered visual objects suggests a link between ripples and memory in primates; specifically, SWRs may reflect part of a mechanism supporting the guidance of search based on past experience.
52
Qian T, Jaeger TF, Aslin RN. Incremental implicit learning of bundles of statistical patterns. Cognition 2016; 157:156-173. [PMID: 27639552 PMCID: PMC5181648 DOI: 10.1016/j.cognition.2016.09.002]
Abstract
Forming an accurate representation of a task environment often takes place incrementally as the information relevant to learning the representation only unfolds over time. This incremental nature of learning poses an important problem: it is usually unclear whether a sequence of stimuli consists of only a single pattern, or multiple patterns that are spliced together. In the former case, the learner can directly use each observed stimulus to continuously revise its representation of the task environment. In the latter case, however, the learner must first parse the sequence of stimuli into different bundles, so as to not conflate the multiple patterns. We created a video-game statistical learning paradigm and investigated (1) whether learners without prior knowledge of the existence of multiple "stimulus bundles" - subsequences of stimuli that define locally coherent statistical patterns - could detect their presence in the input and (2) whether learners are capable of constructing a rich representation that encodes the various statistical patterns associated with bundles. By comparing human learning behavior to the predictions of three computational models, we find evidence that learners can handle both tasks successfully. In addition, we discuss the underlying reasons for why the learning of stimulus bundles occurs even when such behavior may seem irrational.
Affiliation(s)
- Ting Qian
- Department of Biomedical and Health Informatics, The Children's Hospital of Philadelphia, United States
- T Florian Jaeger
- Department of Brain and Cognitive Sciences, University of Rochester, United States; Department of Computer Science, University of Rochester, United States; Department of Linguistics, University of Rochester, United States
- Richard N Aslin
- Department of Brain and Cognitive Sciences, University of Rochester, United States
53
Schenke KC, Wyer NA, Bach P. The Things You Do: Internal Models of Others' Expected Behaviour Guide Action Observation. PLoS One 2016; 11:e0158910. [PMID: 27434265 PMCID: PMC4951130 DOI: 10.1371/journal.pone.0158910]
Abstract
Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (knowledge of how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported.
Affiliation(s)
- Kimberley C. Schenke
- School of Psychology, Plymouth University, Drake Circus, Plymouth, Devon, United Kingdom
- Natalie A. Wyer
- School of Psychology, Plymouth University, Drake Circus, Plymouth, Devon, United Kingdom
- Patric Bach
- School of Psychology, Plymouth University, Drake Circus, Plymouth, Devon, United Kingdom
54
A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation. Behav Res Methods 2016; 47:1365-1376. [PMID: 25552423 PMCID: PMC4636520 DOI: 10.3758/s13428-014-0544-1]
Abstract
Poor calibration and inaccurate drift correction can pose severe problems for eye-tracking experiments requiring high levels of accuracy and precision. We describe an algorithm for the offline correction of eye-tracking data. The algorithm conducts a linear transformation of the coordinates of fixations that minimizes the distance between each fixation and its closest stimulus. A simple implementation in MATLAB is also presented. We explore the performance of the correction algorithm under several conditions using simulated and real data, and show that it is particularly likely to improve data quality when many fixations are included in the fitting process.
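The abstract only sketches the procedure, and the paper's own implementation is in MATLAB. As one plausible reading of "a linear transformation that minimizes the distance between each fixation and its closest stimulus", here is a minimal Python sketch; the `recalibrate` function name, the ICP-style alternation between nearest-stimulus assignment and least-squares fitting, and the affine parameterization are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def recalibrate(fixations, stimuli, n_iter=10):
    """Alternate between (1) assigning each fixation to its nearest
    stimulus and (2) least-squares fitting an affine transform that
    maps the raw fixations onto their assigned stimuli."""
    fix = np.asarray(fixations, dtype=float)   # (n, 2) recorded fixations
    stim = np.asarray(stimuli, dtype=float)    # (m, 2) stimulus positions
    A, b = np.eye(2), np.zeros(2)              # start from the identity
    for _ in range(n_iter):
        mapped = fix @ A.T + b
        # nearest-stimulus assignment for every fixation
        d = np.linalg.norm(mapped[:, None, :] - stim[None, :, :], axis=2)
        targets = stim[np.argmin(d, axis=1)]
        # least-squares affine fit: [fix, 1] @ W ~= targets
        X = np.hstack([fix, np.ones((len(fix), 1))])
        W, *_ = np.linalg.lstsq(X, targets, rcond=None)
        A, b = W[:2].T, W[2]
    return fix @ A.T + b, (A, b)
```

When drift is moderate and stimuli are well separated, the first assignment step is already correct and one or two iterations suffice; when fixations start closer to the wrong stimulus, the assignment can lock onto the wrong target, which is consistent with the abstract's note that the correction works best when many fixations enter the fitting process.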
55
Castelhano MS, Witherspoon RL. How You Use It Matters. Psychol Sci 2016; 27:606-21. [DOI: 10.1177/0956797616629130]
Abstract
How does one know where to look for objects in scenes? Objects are seen in context daily, but also used for specific purposes. Here, we examined whether an object’s function can guide attention during visual search in scenes. In Experiment 1, participants studied either the function (function group) or features (feature group) of a set of invented objects. In a subsequent search, the function group located studied objects faster than novel (unstudied) objects, whereas the feature group did not. In Experiment 2, invented objects were positioned in locations that were either congruent or incongruent with the objects’ functions. Search for studied objects was faster for function-congruent locations and hampered for function-incongruent locations, relative to search for novel objects. These findings demonstrate that knowledge of object function can guide attention in scenes, and they have important implications for theories of visual cognition, cognitive neuroscience, and developmental and ecological psychology.
56
Wynn JS, Bone MB, Dragan MC, Hoffman KL, Buchsbaum BR, Ryan JD. Selective scanpath repetition during memory-guided visual search. Visual Cognition 2016; 24:15-37. [PMID: 27570471 PMCID: PMC4975086 DOI: 10.1080/13506285.2016.1175531]
Abstract
Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1-V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity.
Affiliation(s)
- Jordana S. Wynn
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Michael B. Bone
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Kari L. Hoffman
- Department of Biology, York University, Toronto, ON, Canada M3J 1P3
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Centre for Vision Research, York University, Toronto, ON, Canada M3J 1P3
- Bradley R. Buchsbaum
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Jennifer D. Ryan
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
57
Bendall RCA, Thompson C. Emotion has no impact on attention in a change detection flicker task. Front Psychol 2015; 6:1592. [PMID: 26539141 PMCID: PMC4612156 DOI: 10.3389/fpsyg.2015.01592]
Abstract
Past research provides conflicting findings regarding the influence of emotion on visual attention. Early studies suggested a broadening of attentional resources in relation to positive mood. However, more recent evidence indicates that positive emotions may not have a beneficial impact on attention, and that the relationship between emotion and attention may be mitigated by factors such as task demand or stimulus valence. The current study explored the effect of emotion on attention using the change detection flicker paradigm. Participants were induced into positive, neutral, and negative mood states and then completed a change detection task. A series of neutral scenes were presented and participants had to identify the location of a disappearing item in each scene. The change was made to the center or the periphery of each scene, and it was predicted that peripheral changes would be detected more quickly in the positive mood condition and more slowly in the negative mood condition, compared to the neutral condition. In contrast to previous findings, emotion had no influence on attention, and whilst central changes were detected faster than peripheral changes, change blindness was not affected by mood. The findings suggest that the relationship between emotion and visual attention is influenced by the characteristics of a task, and any beneficial impact of positive emotion may be related to processing style rather than a “broadening” of attentional resources.
Affiliation(s)
- Robert C A Bendall
- Directorate of Psychology and Public Health, School of Health Sciences, University of Salford, Salford, UK
- Catherine Thompson
- Directorate of Psychology and Public Health, School of Health Sciences, University of Salford, Salford, UK
58
Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location. Neuroimage 2015; 124:887-897. [PMID: 26427645 DOI: 10.1016/j.neuroimage.2015.09.040]
Abstract
Spatial contextual cueing reflects an incidental form of learning that occurs when spatial distractor configurations are repeated in visual search displays. Recently, it was reported that the efficiency of contextual cueing can be modulated by reward. We replicated this behavioral finding and investigated its neural basis with fMRI. Reward value was associated with repeated displays in a learning session. The effect of reward value on context-guided visual search was assessed in a subsequent fMRI session without reward. Structures known to support explicit reward valuation, such as ventral frontomedial cortex and posterior cingulate cortex, were modulated by incidental reward learning. Contextual cueing, leading to more efficient search, went along with decreased activation in the visual search network. Retrosplenial cortex played a special role in that it showed both a main effect of reward and a reward×configuration interaction and may thereby be a central structure for the reward modulation of context-guided visual search.
59
Goujon A, Didierjean A, Thorpe S. Investigating implicit statistical learning mechanisms through contextual cueing. Trends Cogn Sci 2015; 19:524-33. [PMID: 26255970 DOI: 10.1016/j.tics.2015.07.009]
Abstract
Since its inception, the contextual cueing (CC) paradigm has generated considerable interest in various fields of cognitive sciences because it constitutes an elegant approach to understanding how statistical learning (SL) mechanisms can detect contextual regularities during a visual search. In this article we review and discuss five aspects of CC: (i) the implicit nature of learning, (ii) the mechanisms involved in CC, (iii) the mediating factors affecting CC, (iv) the generalization of CC phenomena, and (v) the dissociation between implicit and explicit CC phenomena. The findings suggest that implicit SL is an inherent component of ongoing processing which operates through clustering, associative, and reinforcement processes at various levels of sensory-motor processing, and might result from simple spike-timing-dependent plasticity.
Affiliation(s)
- Annabelle Goujon
- Centre de Recherche Cerveau et Cognition (CerCo), Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, 31052 Toulouse, France; Laboratoire de Psychologie, Université de Franche-Comté, 25000 Besançon, France.
- André Didierjean
- Laboratoire de Psychologie, Université de Franche-Comté, 25000 Besançon, France; Institut Universitaire de France.
- Simon Thorpe
- Centre de Recherche Cerveau et Cognition (CerCo), Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, 31052 Toulouse, France
60
Kasper RW, Grafton ST, Eckstein MP, Giesbrecht B. Multimodal neuroimaging evidence linking memory and attention systems during visual search cued by context. Ann N Y Acad Sci 2015; 1339:176-89. [DOI: 10.1111/nyas.12640]
Affiliation(s)
- Ryan W. Kasper
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California
- Scott T. Grafton
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California
- Miguel P. Eckstein
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California
- Barry Giesbrecht
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California
61
Davis M, Merrill EC, Conners FA, Roskos B. Patterns of differences in wayfinding performance and correlations among abilities between persons with and without Down syndrome and typically developing children. Front Psychol 2014; 5:1446. [PMID: 25566127 PMCID: PMC4267194 DOI: 10.3389/fpsyg.2014.01446]
Abstract
Down syndrome (DS) impacts several brain regions including the hippocampus and surrounding structures that have responsibility for important aspects of navigation and wayfinding. Hence it is reasonable to expect that DS may result in a reduced ability to engage in these skills. Two experiments are reported that evaluated route-learning of youth with DS, youth with intellectual disability (ID) and not DS, and typically developing (TD) children matched on mental age (MA). In both experiments, participants learned routes with eight choice points, presented via computer. Several objects were placed along the route that could be used as landmarks. Participants navigated the route once with turn indicators pointing the way and then retraced the route without them. In Experiment 1 we found that the TD children and ID participants performed very similarly. They learned the route in the same number of attempts, committed the same number of errors while learning the route, and recalled approximately the same number of landmarks. The participants with DS performed significantly worse on both measures of navigation (attempts and errors) and also recalled significantly fewer landmarks. In Experiment 2, we attempted to reduce the differences between the DS group and the TD and ID groups by focusing participants' attention on the landmarks. Half of the participants in each group were instructed to identify the landmarks as they passed them the first time. The participants with DS again committed more errors than the participants in the ID and TD groups in the navigation task. In addition, they recalled fewer landmarks. While landmark identification improved landmark memory for both groups, it did not have a significant impact on navigation. Participants with DS still performed more poorly than did the TD and ID participants. Of additional interest, we observed that the performance of persons with DS correlated with different ability measures than did the performance of the other groups. The results of the two experiments point to a problem in navigation for persons with DS that exceeds expectations based solely on intellectual level.
Affiliation(s)
- Megan Davis
- Department of Psychology, The University of Alabama, Tuscaloosa, AL, USA
- Edward C Merrill
- Department of Psychology, The University of Alabama, Tuscaloosa, AL, USA
- Frances A Conners
- Department of Psychology, The University of Alabama, Tuscaloosa, AL, USA
- Beverly Roskos
- Department of Psychology, The University of Alabama, Tuscaloosa, AL, USA
62
Jiang YV, Swallow KM. Changing viewer perspectives reveals constraints to implicit visual statistical learning. J Vis 2014; 14(12):3. [PMID: 25294640 DOI: 10.1167/14.12.3]
Abstract
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
63
Herbik A, Geringswald F, Thieme H, Pollmann S, Hoffmann MB. Prediction of higher visual function in macular degeneration with multifocal electroretinogram and multifocal visual evoked potential. Ophthalmic Physiol Opt 2014; 34:540-51. [PMID: 25160891 DOI: 10.1111/opo.12152]
Abstract
Objective: Visual search can be guided by past experience of regularities in our visual environment. This search guidance by contextual memory cues is impaired by foveal vision loss. Here we compared retinal and cortical visually evoked responses in their predictive value for contextual cueing impairment and visual acuity.
Methods: Multifocal electroretinograms to flash stimulation (mfERGs; 103 locations; 55.8° diameter) and visual evoked potentials to pattern-reversal stimulation (mfVEPs; 60 locations; 48.6° diameter) were recorded monocularly in participants with age-related macular degeneration (n = 14 and 16, respectively). Response magnitudes were calculated as the respective signal-to-noise ratios for each eccentricity. Visual acuities (logMAR, range: 0.0-1.2) and contextual cueing effects on visual search (reaction time gain, range: -0.14 to 0.15) were correlated with the signal-to-noise ratios. A step-wise regression analysis was applied separately to the mfERG and mfVEP datasets to determine the eccentricity range and the processing stage that is critical for these visual functions.
Results: Central mfERGs (1.0-3.2°) were the sole predictor of contextual cueing of visual search (p = 0.006), but they were not significant predictors of visual acuity. In contrast, central mfVEPs (1.3-3.2°) were the sole predictor of visual acuity (p < 0.001), but they were not significant predictors of contextual cueing.
Conclusions: Contextual cueing is more dependent on parafoveal mfERG magnitude while visual acuity is more dependent on parafoveal mfVEP magnitude. The relation of contextual cueing to parafoveal mfERG magnitudes indicates the predictive value of retinal bipolar cell activity for this advanced level of visual function.
Affiliation(s)
- Anne Herbik
- Department of Ophthalmology, Otto-von-Guericke University, Magdeburg, Germany
64
Olejarczyk JH, Luke SG, Henderson JM. Incidental memory for parts of scenes from eye movements. Visual Cognition 2014. [DOI: 10.1080/13506285.2014.941433]
65
Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex. Proc Natl Acad Sci U S A 2014; 111:11217-22. [PMID: 25024190 DOI: 10.1073/pnas.1400559111]
Abstract
In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception.
66
Lanzoni L, Melcher D, Miceli G, Corbett JE. Global statistical regularities modulate the speed of visual search in patients with focal attentional deficits. Front Psychol 2014; 5:514. [PMID: 24971066 PMCID: PMC4053765 DOI: 10.3389/fpsyg.2014.00514]
Abstract
There is growing evidence that the statistical properties of ensembles of similar objects are processed in a qualitatively different manner than the characteristics of individual items. It has recently been proposed that these types of perceptual statistical representations are part of a strategy to complement focused attention in order to circumvent the visual system’s limited capacity to represent more than a few individual objects in detail. Previous studies have demonstrated that patients with attentional deficits are nonetheless sensitive to these sorts of statistical representations. Here, we examined how such global representations may function to aid patients in overcoming focal attentional limitations by manipulating the statistical regularity of a visual scene while patients performed a search task. Three patients previously diagnosed with visual neglect searched for a target Gabor tilted to the left or right of vertical in displays of horizontal distractor Gabors. Although the local sizes of the distractors changed on every trial, the mean size remained stable for several trials. Patients made faster correct responses to targets in neglected regions of the visual field when global statistics remained constant over several trials, similar to age-matched controls. Given neglect patients’ attentional deficits, these results suggest that stable perceptual representations of global statistics can establish a context to speed search without the need to represent individual elements in detail.
Affiliation(s)
- Lucilla Lanzoni
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Center for Neurocognitive Rehabilitation, University of Trento, Rovereto, Italy; Cognitive Neuropsychology Lab, Harvard University, Cambridge, MA, USA
- David Melcher
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Gabriele Miceli
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Center for Neurocognitive Rehabilitation, University of Trento, Rovereto, Italy
67
Wasserman EA, Teng Y, Brooks DI. Scene-based contextual cueing in pigeons. J Exp Psychol Anim Learn Cogn 2014; 40:401-18. [PMID: 25546098 DOI: 10.1037/xan0000028] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target, which could appear in 1 of 4 locations on color photographs of real-world scenes. On half of the trials, each of 4 scenes was consistently paired with 1 of 4 possible target locations; on the other half of the trials, each of 4 different scenes was randomly paired with the same 4 possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons.
68
Mudrik L, Shalgi S, Lamy D, Deouell LY. Synchronous contextual irregularities affect early scene processing: replication and extension. Neuropsychologia 2014; 56:447-58. [PMID: 24593900 DOI: 10.1016/j.neuropsychologia.2014.02.020] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2013] [Revised: 02/19/2014] [Accepted: 02/20/2014] [Indexed: 11/16/2022]
Abstract
Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on.
Affiliation(s)
- Liad Mudrik
- Department of Psychology, Tel Aviv University, PO Box 39040, Tel Aviv 69978, Israel; Division of Biology, California Institute of Technology, 1200 E California Blvd, Pasadena, CA 91125, USA.
- Shani Shalgi
- Department of Cognitive Science, The Hebrew University of Jerusalem, Jerusalem 91905, Israel
- Dominique Lamy
- Department of Psychology, Tel Aviv University, PO Box 39040, Tel Aviv 69978, Israel
- Leon Y Deouell
- Department of Psychology and the Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91905, Israel
69
Jiang YV, Won BY, Swallow KM. First saccadic eye movement reveals persistent attentional guidance by implicit learning. J Exp Psychol Hum Percept Perform 2014; 40:1161-73. [PMID: 24512610 DOI: 10.1037/a0035961] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Implicit learning about where a visual search target is likely to appear often speeds up search. However, whether implicit learning guides spatial attention or affects postsearch decisional processes remains controversial. Using eye tracking, this study provides compelling evidence that implicit learning guides attention. In a training phase, participants often found the target in a high-frequency, "rich" quadrant of the display. When subsequently tested in a phase during which the target was randomly located, participants were twice as likely to direct the first saccadic eye movement to the previously rich quadrant than to any of the sparse quadrants. The attentional bias persisted for nearly 200 trials after training and was unabated by explicit instructions to distribute attention evenly. We propose that implicit learning guides spatial attention but in a qualitatively different manner than goal-driven attention.
Collapse
Affiliation(s)
- Bo-Yeong Won
- Department of Psychology, University of Minnesota
70
Wu CC, Wick FA, Pomplun M. Guidance of visual attention by semantic information in real-world scenes. Front Psychol 2014; 5:54. [PMID: 24567724 PMCID: PMC3915098 DOI: 10.3389/fpsyg.2014.00054] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2013] [Accepted: 01/16/2014] [Indexed: 11/17/2022] Open
Abstract
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some of the factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces.
Affiliation(s)
- Chia-Chien Wu
- Department of Computer Science, University of Massachusetts Boston, MA, USA
- Marc Pomplun
- Department of Computer Science, University of Massachusetts Boston, MA, USA
71
Darby K, Burling J, Yoshida H. The Role of Search Speed in the Contextual Cueing of Children's Attention. Cogn Dev 2014; 29:17-29. [PMID: 24505167 DOI: 10.1016/j.cogdev.2013.10.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using an object search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicates that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document the effect in children have yielded mixed results. We examined the effect of search speed on contextual cueing with school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates this effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the role of attention toward contextually cued information.
Affiliation(s)
- Kevin Darby
- Department of Psychology, The Ohio State University, 267 Psychology Building, 1835 Neil Avenue, Columbus, OH 43210, United States
- Joseph Burling
- Department of Psychology, University of Houston, 126 Heyne Building, Houston, TX 77204, United States
- Hanako Yoshida
- Department of Psychology, University of Houston, 126 Heyne Building, Houston, TX 77204, United States
72
Kunar MA, John R, Sweetman H. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues. Q J Exp Psychol (Hove) 2013; 67:1366-82. [PMID: 24199842 DOI: 10.1080/17470218.2013.863373] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.
Collapse
Affiliation(s)
- Melina A Kunar
- Department of Psychology, The University of Warwick, Coventry, UK
74
Armed and attentive: Holding a weapon can bias attentional priorities in scene viewing. Atten Percept Psychophys 2013; 75:1715-24. [PMID: 24027031 DOI: 10.3758/s13414-013-0538-6] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
77
Dziemianko M, Keller F. Memory modulated saliency: A computational model of the incremental learning of target locations in visual search. Vis Cogn 2013. [DOI: 10.1080/13506285.2013.784717] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
78
Spatial reference frame of incidentally learned attention. Cognition 2013; 126:378-90. [DOI: 10.1016/j.cognition.2012.10.011] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 09/26/2012] [Accepted: 10/08/2012] [Indexed: 11/20/2022]
79
Kourkoulou A, Kuhn G, Findlay JM, Leekam SR. Eye Movement Difficulties in Autism Spectrum Disorder: Implications for Implicit Contextual Learning. Autism Res 2013; 6:177-89. [DOI: 10.1002/aur.1274] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2012] [Accepted: 12/13/2012] [Indexed: 11/09/2022]
Affiliation(s)
- Anastasia Kourkoulou
- Wales Autism Research Centre, School of Psychology, Cardiff University, Tower Building, Cardiff, UK
- Gustav Kuhn
- Department of Psychology, Goldsmiths, University of London, New Cross, NW
- John M. Findlay
- Department of Psychology, Durham University, South Road, Durham, UK
- Susan R. Leekam
- Wales Autism Research Centre, School of Psychology, Cardiff University, Tower Building, Cardiff, UK
80
Rewards teach visual selective attention. Vision Res 2012; 85:58-72. [PMID: 23262054 DOI: 10.1016/j.visres.2012.12.005] [Citation(s) in RCA: 257] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2012] [Revised: 10/31/2012] [Accepted: 12/10/2012] [Indexed: 12/31/2022]
Abstract
Visual selective attention is the brain function that modulates ongoing processing of retinal input in order for selected representations to gain privileged access to perceptual awareness and guide behavior. Enhanced analysis of currently relevant or otherwise salient information is often accompanied by suppressed processing of the less relevant or salient input. Recent findings indicate that rewards exert a powerful influence on the deployment of visual selective attention. Such influence takes different forms depending on the specific protocol adopted in the given study. In some cases, the prospect of earning a larger reward in relation to a specific stimulus or location biases attention accordingly in order to maximize overall gain. This is mediated by an effect of reward acting as a type of incentive motivation for the strategic control of attention. In contrast, reward delivery can directly alter the processing of specific stimuli by increasing their attentional priority, and this can be measured even when rewards are no longer involved, reflecting a form of reward-mediated attentional learning. As a further development, recent work demonstrates that rewards can affect attentional learning in dissociable ways depending on whether rewards are perceived as feedback on performance or instead are registered as random-like events occurring during task performance. Specifically, it appears that visual selective attention is shaped by two distinct reward-related learning mechanisms: one requiring active monitoring of performance and outcome, and a second one detecting the sheer association between objects in the environment (whether attended or ignored) and the more-or-less rewarding events that accompany them. 
Overall, this emerging literature demonstrates unequivocally that rewards "teach" visual selective attention so that processing resources will be allocated to objects, features, and locations that are likely to optimize the organism's interaction with the surrounding environment and maximize positive outcome.
81
Abstract
It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance (e.g., knowing that a faucet is usually located near a sink) that guidance by incidental episodic memory (having seen that faucet previously) was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes.
Affiliation(s)
- Melissa L-H Võ
- Visual Attention Lab, Harvard Medical School, Brigham and Women's Hospital, USA.
83
Giesbrecht B, Sy JL, Guerin SA. Both memory and attention systems contribute to visual search for targets cued by implicitly learned context. Vision Res 2012; 85:80-9. [PMID: 23099047 DOI: 10.1016/j.visres.2012.10.006] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2012] [Revised: 10/10/2012] [Accepted: 10/12/2012] [Indexed: 11/19/2022]
Abstract
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants' subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment.
Affiliation(s)
- Barry Giesbrecht
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, USA.
84
Huestegge L, Radach R. Visual and memory search in complex environments: determinants of eye movements and search performance. Ergonomics 2012; 55:1009-1027. [PMID: 22725621 DOI: 10.1080/00140139.2012.689372] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Previous research on visual and memory search revealed various top-down and bottom-up factors influencing performance. However, utilising abstract stimuli (e.g. geometrical shapes or letters) and focussing on individual factors has often limited the applicability of research findings. Two experiments were designed to analyse which attributes of a product facilitate search in an applied environment. Participants scanned displays containing juice packages while their eye movements were recorded. The familiarity, saliency, and position of search targets were systematically varied. Experiment 1 involved a visual search task, whereas Experiment 2 focussed on memory search. The results showed that bottom-up (target saliency) and top-down (target familiarity) factors strongly interacted. Overt visual attention was influenced by cultural habits, purposes, and current task demands. The results provide a solid database for assessing the impact and interplay of fundamental top-down and bottom-up determinants of search processes in applied fields of psychology. Practitioner Summary: Our study demonstrates how a product (or a visual item in general) needs to be designed and placed to ensure that it can be found effectively and efficiently within complex environments. Corresponding product design should result in faster and more accurate visual and memory-based search processes.
Affiliation(s)
- Lynn Huestegge
- Institute for Psychology, RWTH Aachen University, Aachen, Germany.
85
Geringswald F, Baumgartner F, Pollmann S. Simulated loss of foveal vision eliminates visual search advantage in repeated displays. Front Hum Neurosci 2012; 6:134. [PMID: 22593741 PMCID: PMC3350129 DOI: 10.3389/fnhum.2012.00134] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2012] [Accepted: 04/26/2012] [Indexed: 11/20/2022] Open
Abstract
In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that causes the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants revealed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as is commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.
86
Schneps MH, Brockmole JR, Sonnert G, Pomplun M. History of reading struggles linked to enhanced learning in low spatial frequency scenes. PLoS One 2012; 7:e35724. [PMID: 22558210 PMCID: PMC3338804 DOI: 10.1371/journal.pone.0035724] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2011] [Accepted: 03/24/2012] [Indexed: 11/18/2022] Open
Abstract
People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.
Affiliation(s)
- Matthew H Schneps
- Science Education Department, Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, United States of America.
87
Brockmole JR, Davoli CC, Cronin DA. The Visual World in Sight and Mind. Psychol Learn Motiv 2012. [DOI: 10.1016/b978-0-12-394293-7.00003-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
88
Hollingworth A. Guidance of visual search by memory and knowledge. Nebraska Symposium on Motivation 2012; 59:63-89. [PMID: 23437630 DOI: 10.1007/978-1-4614-4794-8_4] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
89
Abstract
Human perception is highly flexible and adaptive. Selective processing is tuned dynamically according to current task goals and expectations to optimize behavior. Arguably, the major source of our expectations about events yet to unfold is our past experience; however, the ability of long-term memories to bias early perceptual analysis has remained untested. We used a noninvasive method with high temporal resolution to record neural activity while human participants detected visual targets that appeared at remembered versus novel locations within naturalistic visual scenes. Upon viewing a familiar scene, spatial memories changed oscillatory brain activity in anticipation of the target location. Memory also enhanced neural activity during early stages of visual analysis of the target and improved behavioral performance. Both measures correlated with subsequent target-detection performance. We therefore demonstrated that memory can directly enhance perceptual functions in the human brain.
90
Neider MB, Kramer AF. Older Adults Capitalize on Contextual Information to Guide Search. Exp Aging Res 2011; 37:539-71. [DOI: 10.1080/0361073x.2011.619864] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
91
Luhmann CC. Integrating spatial context learning over contradictory signals: Recency effects in contextual cueing. Vis Cogn 2011. [DOI: 10.1080/13506285.2011.586653] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
92
Kaspar K, König P. Overt attention and context factors: the impact of repeated presentations, image type, and individual motivation. PLoS One 2011; 6:e21719. [PMID: 21750726 PMCID: PMC3130043 DOI: 10.1371/journal.pone.0021719] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2010] [Accepted: 06/09/2011] [Indexed: 11/18/2022] Open
Abstract
The present study investigated the dynamics of the attention focus during observation of different categories of complex scenes and simultaneous consideration of individuals' memory and motivational state. We repeatedly presented four types of complex visual scenes in a pseudo-randomized order and recorded eye movements. Subjects were divided into groups according to their motivational disposition in terms of action orientation and individual rating of scene interest. Statistical analysis of eye-tracking data revealed that the attention focus successively became locally expressed by increasing fixation duration; decreasing saccade length, saccade frequency, and single subject's fixation distribution over images; and increasing inter-subject variance of fixation distributions. The validity of these results was supported by verbal reports. This general tendency was weaker for the group of subjects who rated the image set as interesting as compared to the other group. Additionally, effects were partly mediated by subjects' motivational disposition. Finally, we found a generally strong impact of image type on eye movement parameters. We conclude that motivational tendencies linked to personality as well as individual preferences significantly affected viewing behaviour. Hence, it is important and fruitful to consider inter-individual differences on the level of motivation and personality traits within investigations of attention processes. We demonstrate that future studies on memory's impact on overt attention have to deal appropriately with several aspects that had been out of the research focus until now.
Affiliation(s)
- Kai Kaspar
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany.
93
Võ MLH, Wolfe JM. When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. J Exp Psychol Hum Percept Perform 2011; 38:23-41. [PMID: 21688939 DOI: 10.1037/a0024147] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: when participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.
Affiliation(s)
- Melissa L-H Võ
- Visual Attention Lab, Harvard Medical School, 64 Sidney Street, Suite 170, Cambridge, MA 02139, USA.
|
94
|
Goujon A. Categorical implicit learning in real-world scenes: Evidence from contextual cueing. Q J Exp Psychol (Hove) 2011; 64:920-41. [PMID: 21161855 DOI: 10.1080/17470218.2010.526231] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The present study examined the extent to which learning mechanisms exploit semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category either did or did not predict the target position on the screen. No facilitation effect was observed in the predictive condition relative to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene, on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to be categorized differed from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the search (Experiment 4). These results suggest that, although the present material required enhanced processing of the scene, such implicit semantic learning can nevertheless take place when the category is task irrelevant.
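The contextual cueing effect described above is conventionally quantified as the mean reaction-time advantage for predictive over nonpredictive displays. A minimal sketch, with illustrative variable names and made-up RT values rather than the study's data:

```python
# Contextual cueing benefit: mean RT difference between nonpredictive and
# predictive conditions; positive values indicate faster search when the
# scene category predicts the target location. Sample RTs are illustrative.

def cueing_effect_ms(rts_nonpredictive, rts_predictive):
    """Mean RT difference in ms; positive = cueing benefit."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_nonpredictive) - mean(rts_predictive)

rt_nonpred = [812.0, 790.0, 805.0]  # ms, category nonpredictive
rt_pred = [751.0, 760.0, 748.0]     # ms, category predictive
print(round(cueing_effect_ms(rt_nonpred, rt_pred), 1))  # -> 49.3
```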
Affiliation(s)
- Annabelle Goujon
- Laboratoire de Psychologie Cognitive-CNRS, and Université de Provence, Marseille, France
|
95
|
Brooks DI, Rasmussen IP, Hollingworth A. The nesting of search contexts within natural scenes: evidence from contextual cuing. J Exp Psychol Hum Percept Perform 2011; 36:1406-18. [PMID: 20731525 DOI: 10.1037/a0019257] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant contextual cuing. However, expression of that learning was dependent on instantiating the original scene in which the learning occurred: Contextual cuing was disrupted when the repeated array was transferred to a different scene. Such scene-specific learning was not absolute, however. Under conditions of high scene variability, repeated search arrays were learned independently of the scene background. These data suggest that when a consistent environmental structure is available, spatial representations supporting visual search are organized hierarchically, with memory for functional subregions of an environment nested within a representation of the larger scene.
Affiliation(s)
- Daniel I Brooks
- University of Iowa, Department of Psychology, Iowa City, IA 52242-1407, USA.
|
96
|
Saccadic context indicates information processing within visual fixations: evidence from event-related potentials and eye-movements analysis of the distractor effect. Int J Psychophysiol 2011; 80:54-62. [PMID: 21291920 DOI: 10.1016/j.ijpsycho.2011.01.013] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2010] [Revised: 01/21/2011] [Accepted: 01/25/2011] [Indexed: 11/21/2022]
Abstract
Attention, visual information processing, and oculomotor control are integrated functions of closely related brain mechanisms. Recently, it was shown that the processing of visual distractors appearing during a fixation is modulated by the amplitude of its preceding saccade (Pannasch & Velichkovsky, 2009). So far, this was demonstrated only at the behavioral level in terms of saccadic inhibition. The present study investigated distractor-related brain activity with cortical eye fixation-related potentials (EFRPs). Moreover, the following saccade was included as an additional classification criterion. Eye movements and EFRPs were recorded during free visual exploration of paintings. During some of the fixations, a visual distractor was shown as an annulus around the fixation position, 100 ms after the fixation onset. The saccadic context of a fixation was classified by its preceding and following saccade amplitudes with the cut-off criterion set to 4° of visual angle. The prolongation of fixation duration induced by distractors was largest for fixations preceded and followed by short saccades. EFRP data revealed a difference in distractor-related P2 amplitude between the saccadic context conditions, following the same trend as in eye movements. Furthermore, influences of the following saccade amplitude on the latency of the saccadic inhibition and on the N1 amplitude were found. The EFRP results cannot be explained by the influence of saccades per se since this bias was removed by subtracting the baseline from the distractor EFRP. Rather, the data suggest that saccadic context indicates differences in how information is processed within single visual fixations.
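The classification criterion in this study is concrete enough to sketch: each fixation is labeled by whether its preceding and following saccades fall below or above a 4° amplitude cutoff, yielding four saccadic-context conditions. Function and label names are illustrative assumptions, not the authors' code.

```python
# Classify a fixation's "saccadic context" from the amplitudes (in degrees of
# visual angle) of the saccades immediately before and after it, using the
# 4-degree cutoff described in the abstract.

CUTOFF_DEG = 4.0  # visual angle separating "short" from "long" saccades

def saccadic_context(preceding_amp_deg, following_amp_deg, cutoff=CUTOFF_DEG):
    """Return one of four labels: short-short, short-long, long-short, long-long."""
    pre = "short" if preceding_amp_deg < cutoff else "long"
    post = "short" if following_amp_deg < cutoff else "long"
    return f"{pre}-{post}"

# A fixation preceded by a 2.1 deg saccade and followed by a 6.3 deg saccade:
print(saccadic_context(2.1, 6.3))  # -> short-long
```

Under this scheme, the reported distractor-related prolongation of fixation duration was largest for the `short-short` condition.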
|
97
|
|
98
|
(Uke) Karacan H, Cagiltay K, Tekman HG. Change detection in desktop virtual environments: An eye-tracking study. COMPUTERS IN HUMAN BEHAVIOR 2010. [DOI: 10.1016/j.chb.2010.04.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
99
|
Conci M, Sun L, Müller HJ. Contextual remapping in visual search after predictable target-location changes. PSYCHOLOGICAL RESEARCH 2010; 75:279-89. [DOI: 10.1007/s00426-010-0306-3] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2010] [Accepted: 08/12/2010] [Indexed: 11/29/2022]
|
100
|
Searching in the dark: cognitive relevance drives attention in real-world scenes. Psychon Bull Rev 2010; 16:850-6. [PMID: 19815788 DOI: 10.3758/pbr.16.5.850] [Citation(s) in RCA: 119] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.
|