1
Onwuegbusi T, Hermens F, Hogue T. Data-driven group comparisons of eye fixations to dynamic stimuli. Q J Exp Psychol (Hove) 2021; 75:989-1003. PMID: 34507503; PMCID: PMC9016662; DOI: 10.1177/17470218211048060
Abstract
Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left-right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
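The group-prediction step described in this abstract can be illustrated with a small leave-one-out sketch: each participant's gaze on a clip is summarised as a flattened fixation-density vector, and a held-out participant is assigned to whichever group's average density map their own map correlates with more strongly. The function and toy data below are hypothetical illustrations, not the authors' actual pipeline.

```python
import numpy as np

def classify_loo(density_maps, labels):
    """Leave-one-out group classification from fixation-density vectors.

    density_maps: (n_participants, n_features) flattened gaze maps.
    labels: length-n sequence of 0/1 group labels.
    Returns the proportion of participants assigned to their own group.
    """
    maps = np.asarray(density_maps, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i          # hold participant i out
        mean0 = maps[keep & (labels == 0)].mean(axis=0)
        mean1 = maps[keep & (labels == 1)].mean(axis=0)
        # Correlate the held-out map with each group's average map
        r0 = np.corrcoef(maps[i], mean0)[0, 1]
        r1 = np.corrcoef(maps[i], mean1)[0, 1]
        correct += int((r1 > r0) == bool(labels[i]))
    return correct / len(labels)
```

With clearly separated group maps, leave-one-out accuracy approaches 1; with indistinguishable maps it hovers near chance, which is what makes the measure usable for isolating the clips that best discriminate the groups.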
Affiliation(s)
- Frouke Hermens
- School of Psychology, University of Lincoln, Lincoln, UK
- Todd Hogue
- School of Psychology, University of Lincoln, Lincoln, UK
2
Vasilev MR, Yates M, Prueitt E, Slattery TJ. Parafoveal degradation during reading reduces preview costs only when it is not perceptually distinct. Q J Exp Psychol (Hove) 2020; 74:254-276. PMID: 32988313; PMCID: PMC8044602; DOI: 10.1177/1747021820959661
Abstract
There is a growing understanding that the parafoveal preview effect during reading may represent a combination of preview benefits and preview costs due to interference from parafoveal masks. It has been suggested that visually degrading the parafoveal masks may reduce their costs, but adult readers were later shown to be highly sensitive to degraded display changes. Four experiments examined how preview benefits and preview costs are influenced by the perception of distinct parafoveal degradation at the target word location. Participants read sentences with four preview types (identity, orthographic, phonological, and letter-mask preview) and two levels of visual degradation (0% vs. 20%). The distinctiveness of the target word degradation was either eliminated by degrading all words in the sentence (Experiments 1a–2a) or remained present, as in previous research (Experiments 1b–2b). Degrading the letter masks resulted in a reduction in preview costs, but only when all words in the sentence were degraded. When degradation at the target word location was perceptually distinct, it induced costs of its own, even for orthographically and phonologically related previews. These results confirm previous reports that traditional parafoveal masks introduce preview costs that overestimate the size of the true benefit. However, they also show that parafoveal degradation has the unintended consequence of introducing additional costs when participants are aware of distinct degradation on the target word. Parafoveal degradation appears to be easily perceived and may temporarily orient attention away from the reading task, thus delaying word processing.
Affiliation(s)
- Mark Yates
- Department of Psychology, University of South Alabama, Mobile, AL, USA
- Ethan Prueitt
- Department of Psychology, University of South Alabama, Mobile, AL, USA
3
Cronin DA, Hall EH, Goold JE, Hayes TR, Henderson JM. Eye Movements in Real-World Scene Photographs: General Characteristics and Effects of Viewing Task. Front Psychol 2020; 10:2915. PMID: 32010016; PMCID: PMC6971407; DOI: 10.3389/fpsyg.2019.02915
Abstract
The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12 s trial. These baseline measures will be of use to future work studying eye movement behavior in scenes across a variety of literatures. We also examine effects of viewing task on when and where the eyes move in real-world scenes: participants engaged in a memorization and an aesthetic judgment task while viewing 100 scenes. While we find no difference at the mean level between the two tasks, temporal- and distribution-level analyses reveal significant task-driven differences in eye movement behavior.
Affiliation(s)
- Deborah A. Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Elizabeth H. Hall
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Jessica E. Goold
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Taylor R. Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- John M. Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Department of Psychology, University of California, Davis, Davis, CA, United States
4
Guillory SB, Kaldy Z. Persistence and Accumulation of Visual Memories for Objects in Scenes in 12-Month-Old Infants. Front Psychol 2019; 10:2454. PMID: 31780984; PMCID: PMC6851165; DOI: 10.3389/fpsyg.2019.02454
Abstract
Visual memory for objects has been studied extensively in infants over the past 20 years; however, little is known about how such memories are formed when objects are embedded in naturalistic scenes. In adults, memory for objects in a scene shows information accumulation over time as well as persistence despite interruptions (Melcher, 2001, 2006). In the present study, eye-tracking was used to investigate these two processes in 12-month-old infants (N = 19), measuring: (1) whether longer encoding time can improve memory performance (accumulation), and (2) whether multiple shorter exposures to a scene are equivalent to a single exposure of the same total duration (persistence). A control group of adults (N = 23) was also tested in a closely matched paradigm. We found that increasing exposure time led to gains in memory performance in both groups. Infants successfully remembered objects given continuous exposure to a scene but, unlike adults, did not perform better than chance when interrupted. However, infants' scan patterns showed evidence of memory, as they continued exploring the scene in a strategic way following the interruption. Our findings provide insight into how infants build representations of their visual environment by accumulating information about objects embedded in scenes.
Affiliation(s)
- Sylvia B. Guillory
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Zsuzsa Kaldy
- Psychology Department, University of Massachusetts Boston, Boston, MA, United States
5
Krasich K, Biggs AT, Brockmole JR. Attention capture during visual search: The consequences of distractor appeal, familiarity, and frequency. Visual Cognition 2018. DOI: 10.1080/13506285.2018.1508102
Affiliation(s)
- Kristina Krasich
- Department of Psychology, The University of Notre Dame, Notre Dame, IN, USA
- James R. Brockmole
- Department of Psychology, The University of Notre Dame, Notre Dame, IN, USA
6
Mitsven SG, Cantrell LM, Luck SJ, Oakes LM. Visual short-term memory guides infants' visual attention. Cognition 2018; 177:189-197. PMID: 29704857; PMCID: PMC5975244; DOI: 10.1016/j.cognition.2018.04.016
Abstract
Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior.
Affiliation(s)
- Samantha G Mitsven
- Center for Mind and Brain, University of California, Davis, United States
- Lisa M Cantrell
- Center for Mind and Brain, University of California, Davis, United States
- Steven J Luck
- Center for Mind and Brain, University of California, Davis, United States; Department of Psychology, University of California, Davis, United States
- Lisa M Oakes
- Center for Mind and Brain, University of California, Davis, United States; Department of Psychology, University of California, Davis, United States
7
Abstract
Salient peripheral events trigger fast, “exogenous” covert orienting. The influential premotor theory of attention argues that covert orienting of attention depends upon planned but unexecuted eye movements. One problem with this theory is that salient peripheral events, such as offsets, appear to summon attention in tasks that measure covert attention (e.g., the Posner cueing task) but appear not to elicit oculomotor preparation in tasks that require overt orienting (e.g., the remote distractor paradigm). Here, we examined the effects of peripheral offsets on covert attention and saccade preparation. Experiment 1 suggested that transient offsets summoned attention in a manual detection task without triggering saccade preparation in a saccadic localisation task, although there was a high proportion of saccadic capture errors on “no-target” trials, where a cue was presented but no target appeared. In Experiment 2, “no-target” trials were removed. Here, transient offsets produced both attentional facilitation and faster saccadic responses on valid cue trials. A third experiment showed that the permanent disappearance of an object also elicited attentional facilitation and faster saccadic reaction times. These experiments demonstrate that offsets trigger both saccade programming and covert attentional orienting, consistent with the idea that exogenous covert orienting is tightly coupled with oculomotor activation. The finding that no-go trials attenuate oculomotor priming effects offers a way to reconcile the current findings with previous claims of a dissociation between covert attention and oculomotor control in paradigms that use a high proportion of catch trials.
Affiliation(s)
- Daniel T Smith
- Department of Psychology, Durham University, E011 Wolfson Building, Stockton-on-Tees TS17 6BH, UK
8
Mirza MB, Adams RA, Mathys C, Friston KJ. Human visual exploration reduces uncertainty about the sensed world. PLoS One 2018; 13:e0190429. PMID: 29304087; PMCID: PMC5755757; DOI: 10.1371/journal.pone.0190429
Abstract
In previous papers, we introduced a normative scheme for scene construction and epistemic (visual) searches based upon active inference. This scheme provides a principled account of how people decide where to look, when categorising a visual scene based on its contents. In this paper, we use active inference to explain the visual searches of normal human subjects; enabling us to answer some key questions about visual foraging and salience attribution. First, we asked whether there is any evidence for 'epistemic foraging'; i.e. exploration that resolves uncertainty about a scene. In brief, we used Bayesian model comparison to compare Markov decision process (MDP) models of scan-paths that did-and did not-contain the epistemic, uncertainty-resolving imperatives for action selection. In the course of this model comparison, we discovered that it was necessary to include non-epistemic (heuristic) policies to explain observed behaviour (e.g., a reading-like strategy that involved scanning from left to right). Despite this use of heuristic policies, model comparison showed that there is substantial evidence for epistemic foraging in the visual exploration of even simple scenes. Second, we compared MDP models that did-and did not-allow for changes in prior expectations over successive blocks of the visual search paradigm. We found that implicit prior beliefs about the speed and accuracy of visual searches changed systematically with experience. Finally, we characterised intersubject variability in terms of subject-specific prior beliefs. Specifically, we used canonical correlation analysis to see if there were any mixtures of prior expectations that could predict between-subject differences in performance; thereby establishing a quantitative link between different behavioural phenotypes and Bayesian belief updating. We demonstrated that better scene categorisation performance is consistently associated with lower reliance on heuristics; i.e., a greater use of a generative model of the scene to direct its exploration.
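The scan-path model comparison can be illustrated with a minimal sketch: two toy next-fixation policies (an "epistemic" one that prefers uninspected locations, and a heuristic left-to-right one) are scored by the log-likelihood they assign to an observed fixation sequence, and the higher score wins. The policies and weights below are invented for illustration; they stand in for the paper's full MDP models and Bayesian evidence computation.

```python
import math

def loglik(scan_path, n_locations, policy):
    """Log-likelihood of a scan path under a simple next-fixation policy.

    policy(visited, current, candidate) returns an unnormalised preference
    weight for fixating `candidate` next; at each step the weights are
    normalised into a categorical distribution over locations.
    """
    ll = 0.0
    visited = set()
    current = None
    for fix in scan_path:
        weights = [policy(visited, current, c) for c in range(n_locations)]
        ll += math.log(weights[fix] / sum(weights))
        visited.add(fix)
        current = fix
    return ll

# Epistemic policy: strongly prefer locations not yet inspected.
def epistemic(visited, current, candidate):
    return 9.0 if candidate not in visited else 1.0

# Heuristic policy: prefer the location just to the right of the current one.
def left_to_right(visited, current, candidate):
    return 9.0 if current is not None and candidate == current + 1 else 1.0
```

A non-sequential path that avoids revisits is better explained by the epistemic policy, while a strict left-to-right sweep favours the heuristic, mirroring the paper's finding that both kinds of policy are needed to explain observed behaviour.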
Affiliation(s)
- M. Berk Mirza
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
- Rick A. Adams
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Division of Psychiatry, University College London, London, United Kingdom
- Christoph Mathys
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, United Kingdom
- Karl J. Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
9
Denison RN, Sheynin J, Silver MA. Perceptual suppression of predicted natural images. J Vis 2016; 16:6. PMID: 27802512; PMCID: PMC5098454; DOI: 10.1167/16.13.6
Abstract
Perception is shaped not only by current sensory inputs but also by expectations generated from past sensory experience. Humans viewing ambiguous stimuli in a stable visual environment are generally more likely to see the perceptual interpretation that matches their expectations, but it is less clear how expectations affect perception when the environment is changing predictably. We used statistical learning to teach observers arbitrary sequences of natural images and employed binocular rivalry to measure perceptual selection as a function of predictive context. In contrast to previous demonstrations of preferential selection of predicted images for conscious awareness, we found that recently acquired sequence predictions biased perceptual selection toward unexpected natural images and image categories. These perceptual biases were not associated with explicit recall of the learned image sequences. Our results show that exposure to arbitrary sequential structure in the environment impacts subsequent visual perceptual selection and awareness. Specifically, for natural image sequences, the visual system prioritizes what is surprising, or statistically informative, over what is expected, or statistically likely.
Affiliation(s)
- Rachel N Denison
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Jacob Sheynin
- College of Letters and Science, University of California, Berkeley, Berkeley, CA, USA
- Michael A Silver
- Helen Wills Neuroscience Institute, Vision Science Graduate Group, and School of Optometry, University of California, Berkeley, Berkeley, CA, USA
10
Visual working memory simultaneously guides facilitation and inhibition during visual search. Atten Percept Psychophys 2016; 78:1232-44. DOI: 10.3758/s13414-016-1105-8
11
Arexis M, Maquestiaux F, Gaspelin N, Ruthruff E, Didierjean A. Attentional capture in driving displays. Br J Psychol 2016; 108:259-275. PMID: 28369841; DOI: 10.1111/bjop.12197
Abstract
Drivers face frequent distraction on the roadways, but little is known about situations placing them at risk of misallocating visual attention. To investigate this issue, we asked participants to search for a red target embedded within simulated driving scenes (photographs taken from inside a car) in three experiments. Distraction was induced by presenting, via a GPS unit, red or green distractors positioned in an irrelevant location at which the target never appeared. If the salient distractor captures attention, visual search should be slower on distractor-present trials than distractor-absent trials. In Experiment 1, salient distractors yielded no such capture effect. In Experiment 2, we decreased the frequency of the salient distractor from 50% of trials to only 10% or 20% of trials. Capture effects were almost five times larger for the 10% occurrence group than for the 20% occurrence group. In Experiment 3, the amount of available central resources was manipulated by asking participants to either simultaneously monitor or ignore a stream of spoken digits. Capture effects were much larger for the dual-task group than for the single-task group. In summary, these findings identify risk factors for attentional capture in real-world driving scenes: distractor rarity and diversion of attention.
Affiliation(s)
- Mahé Arexis
- Université de Franche-Comté, Besançon, France
- Eric Ruthruff
- University of New Mexico, Albuquerque, New Mexico, USA
- André Didierjean
- Université de Franche-Comté, Besançon, France; Institut Universitaire de France, Paris, France
12
Abstract
Novelty modulates sensory and reward processes, but it remains unknown how these effects interact, i.e., how the visual effects of novelty are related to its motivational effects. A widespread hypothesis, based on findings that novelty activates reward-related structures, is that all the effects of novelty are explained in terms of reward. According to this idea, a novel stimulus is by default assigned high reward value and hence high salience, but this salience rapidly decreases if the stimulus signals a negative outcome. Here we show that, contrary to this idea, novelty affects visual salience in the monkey lateral intraparietal area (LIP) in ways that are independent of expected reward. Monkeys viewed peripheral visual cues that were novel or familiar (received few or many exposures) and predicted whether the trial would have a positive or a negative outcome--i.e., end in a reward or a lack of reward. We used a saccade-based assay to detect whether the cues automatically attracted or repelled attention from their visual field location. We show that salience--measured in saccades and LIP responses--was enhanced by both novelty and positive reward associations, but these factors were dissociable and habituated on different timescales. The monkeys rapidly recognized that a novel stimulus signaled a negative outcome (and withheld anticipatory licking within the first few presentations), but the salience of that stimulus remained high for multiple subsequent presentations. Therefore, novelty can provide an intrinsic bonus for attention that extends beyond the first presentation and is independent of physical rewards.
13
Reinstating salience effects over time: the influence of stimulus changes on visual selection behavior over a sequence of eye movements. Atten Percept Psychophys 2014; 76:1655-70. PMID: 24927943; DOI: 10.3758/s13414-013-0493-2
Abstract
Recently, we showed that salience affects initial saccades only in a static stimulus environment; subsequent saccades were unaffected by salience but, instead, were directed in line with task requirements (Siebold, van Zoest, & Donk, PLoS ONE 6(9): e23552, 2011). Yet multiple studies have shown that people tend to fixate salient regions more often than nonsalient ones when they are looking at images--in particular, when salience is defined by dynamic changes. The goal of the present study was to investigate how oculomotor selection beyond an initial saccade is affected by salience as derived from changing, as opposed to static, stimuli. Observers were presented with displays containing two fixation dots, one target, one distractor, and multiple background elements. They were instructed to fixate on one of the fixation dots and make a speeded eye movement to the target, either directly or preceded by an initial eye movement to the other fixation dot. In Experiment 1, target and distractor differed in orientation contrast relative to the background, such that one was more salient than the other, whereas in Experiments 2 and 3, the orientation contrast between the two elements was identical. Here, salience was implemented by a continuous luminance flicker or by a difference in luminance contrast, respectively, which was presented either simultaneously with display onset or contingent upon the first saccade. The results showed that in all experiments, initial saccades were strongly guided by salience, whereas second saccades were consistently goal directed if the salience manipulation was present from display onset. However, if the flicker or luminance contrast was presented contingent upon the initial saccade, salience effects were reinstated. We argue that salience effects are short-lived but can be reinstated if new information is presented, even when this occurs during an eye movement.
14
Rummukainen O, Radun J, Virtanen T, Pulkki V. Categorization of natural dynamic audiovisual scenes. PLoS One 2014; 9:e95848. PMID: 24788808; PMCID: PMC4006781; DOI: 10.1371/journal.pone.0095848
Abstract
This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move toward a better understanding of the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.
Affiliation(s)
- Olli Rummukainen
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
- Jenni Radun
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Toni Virtanen
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
15
Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 2014; 9:e94362. PMID: 24759905; PMCID: PMC3997357; DOI: 10.1371/journal.pone.0094362
Abstract
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
16
Du F, Qi Y, Li X, Zhang K. Dual processes of oculomotor capture by abrupt onset: rapid involuntary capture and sluggish voluntary prioritization. PLoS One 2013; 8:e80678. PMID: 24260451; PMCID: PMC3833982; DOI: 10.1371/journal.pone.0080678
Abstract
The present study showed that there are two distinctive processes underlying oculomotor capture by abrupt onset. When a visual mask between the cue and the target eliminates the unique luminance transient of an onset, the onset still attracts attention in a top-down fashion. This memory-based prioritization of onset is voluntarily controlled by the knowledge of target location. But when there is no visual mask between the cue and the target, the onset captures attention mainly in a bottom-up manner. This transient-driven capture of onset is involuntary because it occurs even when the onset is completely irrelevant to the target location. In addition, the present study demonstrated distinctive temporal characteristics for these two processes. The involuntary capture driven by luminance transients is rapid and brief, whereas the memory-based voluntary prioritization of onset is more sluggish and long-lived.
Affiliation(s)
- Feng Du
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Yue Qi
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xingshan Li
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Kan Zhang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
17
Ren Z, Gao S, Chia LT, Rajan D. Regularized feature reconstruction for spatio-temporal saliency detection. IEEE Trans Image Process 2013; 22:3120-3132. PMID: 23743773; DOI: 10.1109/tip.2013.2259837
Abstract
Multimedia applications such as image or video retrieval, copy detection, and so forth can benefit from saliency detection, which is essentially a method to identify areas in images and videos that capture the attention of the human visual system. In this paper, we propose a new spatio-temporal saliency detection framework based on regularized feature reconstruction. Specifically, for video saliency detection, both temporal and spatial saliency are considered. For temporal saliency, we model the movement of the target patch as a reconstruction process using the patches in neighboring frames. A Laplacian smoothing term is introduced to model coherent motion trajectories. Following psychological findings that abrupt stimuli can cause a rapid and involuntary deployment of attention, our temporal model combines the reconstruction error, the regularizer, and local trajectory contrast to measure temporal saliency. For spatial saliency, a similar sparse reconstruction process is adopted to capture regions with high center-surround contrast. Finally, temporal and spatial saliency are combined to favor salient regions with high confidence for video saliency detection. We also apply the spatial saliency part of the spatio-temporal model to image saliency detection. Experimental results on a human fixation video dataset and an image saliency detection dataset show that our method achieves the best performance over several state-of-the-art approaches.
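The core idea of the abstract above (saliency scored as the error of reconstructing a patch from neighboring-frame patches) can be illustrated with a minimal sketch. This is an illustrative ridge-regression variant, not the authors' exact formulation: the function name, the regularization form, and the parameter `lam` are assumptions for the example, and the paper's Laplacian smoothing and trajectory-contrast terms are omitted.

```python
# Minimal sketch: temporal saliency as regularized reconstruction error.
# A target patch is reconstructed from a dictionary of patches drawn from
# neighboring frames; a large residual means the patch is poorly explained
# by its temporal neighborhood, i.e. it is likely to be salient.
import numpy as np

def reconstruction_saliency(patch, neighbors, lam=0.1):
    """patch: (d,) target patch; neighbors: (d, n) patches from nearby frames.
    Solves min_w ||patch - neighbors @ w||^2 + lam * ||w||^2 (ridge form)
    and returns the residual norm as the saliency score."""
    d, n = neighbors.shape
    # Closed-form ridge solution: w = (N^T N + lam I)^{-1} N^T p
    w = np.linalg.solve(neighbors.T @ neighbors + lam * np.eye(n),
                        neighbors.T @ patch)
    residual = patch - neighbors @ w
    return float(np.linalg.norm(residual))
```

Under this scheme, a patch that lies near the span of its temporal neighbors (smooth motion) scores low, while a patch that cannot be reconstructed (an abrupt change) scores high.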
Collapse
Affiliation(s)
- Zhixiang Ren
- Centre for Multimedia and Network Technology, School of Computer Engineering, Nanyang Technological University, 639798, Singapore.
| | | | | | | |
Collapse
|
18
|
Abstract
Despite many studies on selective attention, fundamental questions remain about its nature and neural mechanisms. Here I draw from the animal and machine learning fields that describe attention as a mechanism for active learning and uncertainty reduction and explore the implications of this view for understanding visual attention and eye movement control. I propose that a closer integration of these different views has the potential greatly to expand our understanding of oculomotor control and our ability to use this system as a window into high level but poorly understood cognitive functions, including the capacity for curiosity and exploration and for inferring internal models of the external world.
Collapse
|
19
|
Brockmole JR, Davoli CC, Cronin DA. The Visual World in Sight and Mind. Psychology of Learning and Motivation 2012. [DOI: 10.1016/b978-0-12-394293-7.00003-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
20
|
Tatler BW, Hayhoe MM, Land MF, Ballard DH. Eye guidance in natural vision: reinterpreting salience. J Vis 2011; 11:5. [PMID: 21622729 DOI: 10.1167/11.5.5] [Citation(s) in RCA: 353] [Impact Index Per Article: 27.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Models of gaze allocation in complex scenes are derived mainly from studies of static picture viewing. The dominant framework to emerge has been image salience, where properties of the stimulus play a crucial role in guiding the eyes. However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.
Collapse
|
21
|
Seigneuric A, Durand K, Jiang T, Baudouin JY, Schaal B. The nose tells it to the eyes: crossmodal associations between olfaction and vision. Perception 2011; 39:1541-54. [PMID: 21313950 DOI: 10.1068/p6740] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Crossmodal linkage between the olfactory and visual senses is still largely underexplored. In this study, we investigated crossmodal olfactory-visual associations by testing whether and how visual processing of objects is affected by the presence of olfactory cues. To this end, we explored the influence of prior learned associations between an odour (e.g., the odour of orange) and a visual stimulus naturally associated with that odour (a picture of an orange) on the movements of the eyes over a complex scene. Participants were asked to freely explore a photograph containing an odour-related visual cue embedded among other objects while being exposed to the corresponding odour (participants were unaware of the presence of the odour). Eye movements were recorded to analyse the order and distribution of fixations on each object of the scene. Our data show that the odour-related visual cue was explored faster and for a shorter time in the presence of the congruent odour. These findings suggest that odours can affect visual processing by attracting attention to the possible odour source and by facilitating its identification.
Collapse
Affiliation(s)
- Alix Seigneuric
- Centre des Sciences du Goût et de l'Alimentation, UMR6265 CNRS, Université de Bourgogne, INRA, Agrosup Dijon, 21000 Dijon, France.
| | | | | | | | | |
Collapse
|
22
|
|
23
|
Do High-Functioning People with Autism Spectrum Disorder Spontaneously Use Event Knowledge to Selectively Attend to and Remember Context-Relevant Aspects in Scenes? J Autism Dev Disord 2010; 41:945-61. [DOI: 10.1007/s10803-010-1124-6] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
24
|
(Uke) Karacan H, Cagiltay K, Tekman HG. Change detection in desktop virtual environments: An eye-tracking study. Computers in Human Behavior 2010. [DOI: 10.1016/j.chb.2010.04.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
25
|
|
26
|
Abstract
Attention capture occurs when a stimulus event involuntarily recruits attention. The abrupt appearance of a new object is perhaps the most well-studied attention-capturing event, yet there is debate over the root cause of this capture. Does a new object capture attention because it involves the creation of a new object representation or because its appearance creates a characteristic luminance transient? The present study sought to resolve this question by introducing a new object into a search display, either with or without a unique luminance transient. Contrary to the results of a recent study (Davoli, Suszko, & Abrams, 2007), when the new object's transient was masked by a brief interstimulus interval introduced between the placeholder and search arrays, a new object did not capture attention. Moreover, when a new object's transient was masked, participants could not locate a new object efficiently even when that was their explicit goal. Together, these data suggest that luminance transient signals are necessary for attention capture by new objects.
Collapse
|
27
|
Abrupt onsets capture attention independent of top-down control settings II: Additivity is no evidence for filtering. Atten Percept Psychophys 2010; 72:672-82. [DOI: 10.3758/app.72.3.672] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
28
|
Searching in the dark: cognitive relevance drives attention in real-world scenes. Psychon Bull Rev 2010; 16:850-6. [PMID: 19815788 DOI: 10.3758/pbr.16.5.850] [Citation(s) in RCA: 116] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.
Collapse
|
29
|
Matsukura M, Brockmole JR, Henderson JM. Overt attentional prioritization of new objects and feature changes during real-world scene viewing. VISUAL COGNITION 2009. [DOI: 10.1080/13506280902868660] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
30
|
Cole GG, Kuhn G. Appearance matters: Attentional orienting by new objects in the precueing paradigm. VISUAL COGNITION 2009. [DOI: 10.1080/13506280802611582] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
31
|
Abstract
Despite the substantial interest in memory for complex pictorial stimuli, there has been virtually no research comparing memory for static scenes with that for their moving counterparts. We report that both monochrome and color moving images are better remembered than static versions of the same stimuli at retention intervals up to one month. When participants studied a sequence of still images, recognition performance was the same as that for single static images. These results are discussed within a theoretical framework which draws upon previous studies of scene memory, face recognition, and representational momentum.
Collapse
|
32
|
Brockmole JR, Henderson JM. Prioritizing new objects for eye fixation in real-world scenes: Effects of object–scene consistency. VISUAL COGNITION 2008. [DOI: 10.1080/13506280701453623] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
33
|
|
34
|
Brockmole JR, Henderson JM. Object appearance, disappearance, and attention prioritization in real-world scenes. Psychon Bull Rev 2005; 12:1061-7. [PMID: 16615329 DOI: 10.3758/bf03206444] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We examined the prioritization of abruptly appearing and disappearing objects in real-world scenes. These scene changes occurred either during a fixation (transient appearance/disappearance) or during a saccade (nontransient appearance/disappearance). Prioritization was measured by the eyes' propensity to be directed to the region of the scene change. Object additions and deletions were fixated at rates greater than chance, suggesting that both types of scene change are cues used by the visual system to guide attention during scene exploration, although appearances were fixated twice as often as disappearances, indicating that new objects are more salient than deleted objects. New and deleted objects were prioritized sooner and more frequently if they occurred during a fixation than during a saccade, indicating an important role for the transient signal that often accompanies sudden changes in scenes. New objects were prioritized regardless of whether they appeared during a fixation or a saccade, whereas prioritization of a deleted object occurred only if (1) a transient signal was present or (2) the removal of the object revealed previously occluded objects.
Collapse
|