1
Spotorno S, Tatler BW. What's left of the leftward bias in scene viewing? Lateral asymmetries in information processing during early search guidance. Cognition 2024;254:106009. PMID: 39577391. DOI: 10.1016/j.cognition.2024.106009.
Abstract
Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade. We aimed to disentangle hemispheric contribution from the effects of reading habits and distinguish between an overall dominance of the right hemisphere for visuospatial processing and finer hemispheric specialisation for the type of target template representation (from pictorial versus verbal cues), spatial scale (global versus local), and timescale (short versus longer). We replicated the tendency to initiate search leftward in both experiments. However, we found no evidence supporting a significant impact of left-to-right reading habits, either as a purely motor or attentional bias to the left. A general visuospatial dominance of the right hemisphere could not account for the results either. In Experiment 1, we found a greater probability of directing the first saccade toward targets in the left visual field, but only after a verbal target cue, with no lateral differences after a pictorial cue. This suggested a contribution of the right hemisphere's specialisation in perceptually simulating words' referents. Lengthening the inter-stimulus interval between the cue and the scene (from 100 to 900 ms) resulted in reduced first saccade gain in the left visual field, suggesting a decreased ability of the right hemisphere to use the target template to guide gaze close to the target object, which primarily depends on local information processing. Experiment 2, using visual versus auditory verbal cues, replicated and extended the findings for both first saccade direction and gain.
Overall, our study shows that the multidetermined functional specialisation of the cerebral hemispheres is a key driver of early scene search and must be incorporated into theories and models to advance understanding of the mechanisms that guide viewing behaviour.
Affiliation(s)
- Sara Spotorno
- Psychology Department, Durham University, Durham, UK.
2
Burlingham CS, Sendhilnathan N, Komogortsev O, Murdison TS, Proulx MJ. Motor "laziness" constrains fixation selection in real-world tasks. Proc Natl Acad Sci U S A 2024;121:e2302239121. PMID: 38470927. PMCID: PMC10962974. DOI: 10.1073/pnas.2302239121.
Abstract
Humans coordinate their eye, head, and body movements to gather information from a dynamic environment while maximizing reward and minimizing biomechanical and energetic costs. However, such natural behavior is not possible in traditional experiments employing head/body restraints and artificial, static stimuli. Therefore, it is unclear to what extent mechanisms of fixation selection discovered in lab studies, such as inhibition-of-return (IOR), influence everyday behavior. To address this gap, participants performed nine real-world tasks, including driving, visually searching for an item, and building a Lego set, while wearing a mobile eye tracker (169 recordings; 26.6 h). Surprisingly, in all tasks, participants most often returned to what they just viewed and saccade latencies were shorter preceding return than forward saccades, i.e., consistent with facilitation, rather than inhibition, of return. We hypothesize that conservation of eye and head motor effort ("laziness") contributes. Correspondingly, we observed center biases in fixation position and duration relative to the head's orientation. A model that generates scanpaths by randomly sampling these distributions reproduced all return phenomena we observed, including distinct 3-fixation sequences for forward versus return saccades. After controlling for orbital eccentricity, one task (building a Lego set) showed evidence for IOR. This, along with small discrepancies between model and data, indicates that the brain balances minimization of motor costs with maximization of rewards (e.g., accomplished by IOR and other mechanisms) and that the optimal balance varies according to task demands. Supporting this account, the orbital range of motion used in each task traded off lawfully with fixation duration.
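The generative account described in this abstract can be illustrated with a toy sketch. Assuming, purely for illustration, that fixation positions are drawn independently from a head-centered Gaussian (a crude stand-in for the center biases the authors report), even this memoryless sampler produces frequent "returns" to just-visited locations, with no inhibition or facilitation built in:

```python
import random

random.seed(1)

def simulate_scanpath(n_fixations=1000, sigma=5.0, return_radius=2.0):
    """Toy center-biased sampling model: each fixation is drawn independently
    from a head-centered 2D Gaussian (degrees of visual angle). A 'return' is
    any fixation landing back near the fixation before last (A -> B -> A)."""
    fixations = [(random.gauss(0, sigma), random.gauss(0, sigma))
                 for _ in range(n_fixations)]
    returns = 0
    for i in range(2, n_fixations):
        x0, y0 = fixations[i - 2]   # the fixation before last
        x, y = fixations[i]
        if (x - x0) ** 2 + (y - y0) ** 2 < return_radius ** 2:
            returns += 1            # came back to a just-viewed spot
    return fixations, returns

fixations, returns = simulate_scanpath()
print(f"{returns} return fixations out of {len(fixations) - 2} candidates")
```

The parameter values here are arbitrary, not the paper's fitted distributions; the point is only that independent sampling from a center-biased distribution already yields nonzero return rates, which is the baseline the authors' model formalizes.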
Affiliation(s)
- Charlie S. Burlingham
- Reality Labs Research, Meta Platforms Inc., Redmond, WA 98052
- Department of Psychology, New York University, New York, NY 10003
- Oleg Komogortsev
- Reality Labs Research, Meta Platforms Inc., Redmond, WA 98052
- Department of Computer Science, Texas State University, San Marcos, TX 78666
3
Nikolaev AR, Meghanathan RN, van Leeuwen C. Refixation behavior in naturalistic viewing: Methods, mechanisms, and neural correlates. Atten Percept Psychophys 2024. PMID: 38169029. DOI: 10.3758/s13414-023-02836-9.
Abstract
When freely viewing a scene, the eyes often return to previously visited locations. By tracking eye movements and coregistering eye movements and EEG, such refixations are shown to have multiple roles: repairing insufficient encoding from precursor fixations, supporting ongoing viewing by resampling relevant locations prioritized by precursor fixations, and aiding the construction of memory representations. All these functions of refixation behavior are understood to be underpinned by three oculomotor and cognitive systems and their associated brain structures. First, immediate saccade planning prior to refixations involves attentional selection of candidate locations to revisit. This process is likely supported by the dorsal attentional network. Second, visual working memory, involved in maintaining task-related information, is likely supported by the visual cortex. Third, higher-order relevance of scene locations, which depends on general knowledge and understanding of scene meaning, is likely supported by the hippocampal memory system. Working together, these structures bring about viewing behavior that balances exploring previously unvisited areas of a scene with exploiting visited areas through refixations.
Affiliation(s)
- Andrey R Nikolaev
- Department of Psychology, Lund University, Box 213, 22100, Lund, Sweden.
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium.
- Cees van Leeuwen
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium
- Center for Cognitive Science, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, Kaiserslautern, Germany
4
Roth N, Rolfs M, Hellwich O, Obermayer K. Objects guide human gaze behavior in dynamic real-world scenes. PLoS Comput Biol 2023;19:e1011512. PMID: 37883331. PMCID: PMC10602265. DOI: 10.1371/journal.pcbi.1011512.
Abstract
The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to simulate realistic scanpaths for dynamic real-world scenes, including saccade timing and smooth pursuit behavior. Individual model components are based on psychophysically uncovered mechanisms of visual attention and saccadic decision-making. All mechanisms are implemented in a modular fashion with a small number of well-interpretable parameters. To systematically analyze the importance of objects in guiding gaze behavior, we implemented five different models within this framework: two purely spatial models (one based on low-level and one on high-level saliency), two object-based models (one incorporating low-level saliency for each object, the other using no saliency information), and a mixed model with object-based attention and selection but space-based inhibition of return. We optimized each model's parameters to reproduce the saccade amplitude and fixation duration distributions of human scanpaths using evolutionary algorithms. We compared model performance with respect to spatial and temporal fixation behavior, including the proportion of fixations exploring the background, as well as detecting, inspecting, and returning to objects.
A model with object-based attention and inhibition, which uses saliency information to prioritize between objects for saccadic selection, leads to scanpath statistics with the highest similarity to the human data. This demonstrates that scanpath models benefit from object-based attention and selection, suggesting that object-level attentional units play an important role in guiding attentional processing.
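The abstract above mentions fitting model parameters with evolutionary algorithms so that simulated fixation statistics match the human distributions. A minimal sketch of that fitting loop, under strong simplifying assumptions (a one-parameter toy generator of exponential fixation durations, a hypothetical target mean of 280 ms, and a basic (1+λ) evolutionary strategy rather than the authors' actual optimizer), looks like this:

```python
import random
import statistics

random.seed(0)

def simulate_durations(rate, n=500):
    # Toy generator: exponential fixation durations with mean `rate` (ms).
    return [random.expovariate(1.0 / rate) for _ in range(n)]

def loss(rate, target_mean=280.0):
    # Distance between simulated and (assumed) human mean fixation duration.
    return abs(statistics.mean(simulate_durations(rate)) - target_mean)

def evolve(generations=40, offspring=8):
    """Minimal (1+lambda) evolutionary strategy: mutate the current best
    parameter and keep any mutant that lowers the (noisy) loss."""
    best = 100.0                      # initial guess for the mean duration
    best_loss = loss(best)
    for _ in range(generations):
        for _ in range(offspring):
            cand = best + random.gauss(0, 20.0)   # Gaussian mutation
            if cand <= 0:
                continue
            cand_loss = loss(cand)
            if cand_loss < best_loss:
                best, best_loss = cand, cand_loss
    return best

fitted = evolve()
print(f"fitted mean duration: {fitted:.0f} ms")
```

Real applications fit several parameters against full amplitude and duration distributions (not just a mean), but the select-mutate-evaluate structure is the same.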
Affiliation(s)
- Nicolas Roth
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Martin Rolfs
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Olaf Hellwich
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Computer Engineering and Microelectronics, Technische Universität Berlin, Germany
- Klaus Obermayer
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
5
Nuthmann A, Clark CNL. Pseudoneglect during object search in naturalistic scenes. Exp Brain Res 2023;241:2345-2360. PMID: 37610677. PMCID: PMC10471692. DOI: 10.1007/s00221-023-06679-6.
Abstract
Pseudoneglect, that is, the tendency to pay more attention to the left side of space, is typically assessed with paper-and-pencil tasks, particularly line bisection. In the present study, we used an everyday task with more complex stimuli. Subjects' task was to look for pre-specified objects in images of real-world scenes. In half of the scenes, the search object was located on the left side of the image (L-target); in the other half of the scenes, the target was on the right side (R-target). To control for left-right differences in the composition of the scenes, half of the scenes were mirrored horizontally. Eye-movement recordings were used to track the course of pseudoneglect on a millisecond timescale. Subjects' initial eye movements were biased to the left of the scene, but less so for R-targets than for L-targets, indicating that pseudoneglect was modulated by task demands and scene guidance. We further analyzed how horizontal gaze positions changed over time. When the data for L- and R-targets were pooled, the leftward bias lasted, on average, until the first second of the search process came to an end. Even for right-side targets, the gaze data showed an early left-bias, which was compensated by adjustments in the direction and amplitude of later saccades. Importantly, we found that pseudoneglect affected search efficiency by leading to less efficient scan paths and consequently longer search times for R-targets compared with L-targets. It may therefore be prudent to take spatial asymmetries into account when studying visual search in scenes.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Olshausenstr. 62, 24118, Kiel, Germany.
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK.
- Christopher N L Clark
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
6
Loh Z, Hall EH, Cronin D, Henderson JM. Working memory control predicts fixation duration in scene-viewing. Psychological Research 2023;87:1143-1154. PMID: 35879564. PMCID: PMC11129724. DOI: 10.1007/s00426-022-01694-8.
Abstract
When viewing scenes, observers differ in how long they linger at each fixation location and how far they move their eyes between fixations. What factors drive these differences in eye-movement behaviors? Previous work suggests individual differences in working memory capacity may influence fixation durations and saccade amplitudes. In the present study, participants (N = 98) performed two scene-viewing tasks, aesthetic judgment and memorization, while viewing 100 photographs of real-world scenes. Working memory capacity, working memory processing ability, and fluid intelligence were assessed with an operation span task, a memory updating task, and Raven's Advanced Progressive Matrices, respectively. Across participants, we found significant effects of task on both fixation durations and saccade amplitudes. At the level of each individual participant, we also found a significant relationship between memory updating task performance and participants' fixation duration distributions. However, we found no effect of fluid intelligence and no effect of working memory capacity on fixation duration or saccade amplitude distributions, inconsistent with previous findings. These results suggest that the ability to flexibly maintain and update working memory is strongly related to fixation duration behavior.
Affiliation(s)
- Zoe Loh
- Management of Complex Systems Department, University of California Merced, Merced, CA, 95343, USA
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA.
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA.
- Deborah Cronin
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, Drake University, Des Moines, IA, 50311, USA
- John M Henderson
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA
7
Wedel M, Pieters R, van der Lans R. Modeling Eye Movements During Decision Making: A Review. Psychometrika 2023;88:697-729. PMID: 35852670. PMCID: PMC10188393. DOI: 10.1007/s11336-022-09876-4.
Abstract
This article reviews recent advances in the psychometric and econometric modeling of eye movements during decision making. Eye movements offer a unique window on unobserved perceptual, cognitive, and evaluative processes of people who are engaged in decision making tasks. They provide new insights into these processes, which are not easily available otherwise, allow for explanations of fundamental search and choice phenomena, and enable predictions of future decisions. We propose a theoretical framework of the search and choice tasks that people commonly engage in and of the underlying cognitive processes involved in those tasks. We discuss how these processes drive specific eye-movement patterns. Our framework emphasizes the central role of task and strategy switching for complex goal attainment. We place the extant literature within that framework, highlight recent advances in modeling eye-movement behaviors during search and choice, and discuss limitations, challenges, and open problems. An agenda for further psychometric modeling of eye movements during decision making concludes the review.
Affiliation(s)
- Michel Wedel
- Robert H. Smith School of Business, University of Maryland, College Park, MD 20742-1815 USA
- Rik Pieters
- Tilburg University, Tilburg, The Netherlands
- Católica Lisbon School of Business and Economics, Universidade Católica Portuguesa, Lisbon, Portugal
- Ralf van der Lans
- Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
8
Soballa P, Schöpper LM, Frings C, Merz S. Spatial biases in inhibition of return. Visual Cognition 2022. DOI: 10.1080/13506285.2023.2188336.
9
Nadezhda M, Dovbnyuk K, Merzon L, MacInnes WJ. Between the Scenes. Exp Psychol 2022;69:185-195. PMID: 36305454. PMCID: PMC9730397. DOI: 10.1027/1618-3169/a000556.
Abstract
We constantly move our eyes to new information while inspecting a scene, but these patterns of eye movements change based on the task and goals of the observer. Inhibition of return (IOR) may facilitate visual search by reducing the likelihood of revisiting previously attended locations. However, IOR may be present in any visual task, or it may be search-specific. We investigated the presence of IOR in foraging, memorization, change detection, and two versions of visual search. One version of search used a static search array that remained stable throughout the trial, but the second used a scene flickering paradigm similar to the change detection task. IOR was observed in both versions of visual search, memorization, and foraging, but not in change detection. Visual search and change detection both had temporal nonscene components, and we observed that IOR could be maintained despite the scene removal, but only for search. Although IOR is maintained in scene coordinates, short disruptions to this scene are insufficient to completely remove the inhibitory tags. Finally, we compare return saccades in trials without a probe and observe fewer return saccades in tasks for which IOR was observed, providing further evidence that IOR might serve as a novelty drive.
Affiliation(s)
- Kseniya Dovbnyuk
- Department of Psychology and Cognitive Science, University of Trento, Italy
- Department of Neuroscience and Biomedical Engineering, Aalto University, Aalto, Finland
- Department of Psychology, Vision Modelling Laboratory, HSE University, Moscow, Russian Federation
- Department of Computer Science, Swansea University, Swansea, UK
- Liya Merzon
- Department of Neuroscience and Biomedical Engineering, Aalto University, Aalto, Finland
- W. Joseph MacInnes
- Department of Psychology, Vision Modelling Laboratory, HSE University, Moscow, Russian Federation
- Department of Computer Science, Swansea University, Swansea, UK
10
Kümmerer M, Bethge M, Wallis TSA. DeepGaze III: Modeling free-viewing human scanpaths with deep learning. J Vis 2022;22:7. PMID: 35472130. PMCID: PMC9055565. DOI: 10.1167/jov.22.5.7.
Abstract
Humans typically move their eyes in “scanpaths” of fixations linked by saccades. Here we present DeepGaze III, a new model that predicts the spatial location of consecutive fixations in a free-viewing scanpath over static images. DeepGaze III is a deep learning–based model that combines image information with information about the previous fixation history to predict where a participant might fixate next. As a high-capacity and flexible model, DeepGaze III captures many relevant patterns in the human scanpath data, setting a new state of the art in the MIT300 dataset and thereby providing insight into how much information in scanpaths across observers exists in the first place. We use this insight to assess the importance of mechanisms implemented in simpler, interpretable models for fixation selection. Due to its architecture, DeepGaze III allows us to disentangle several factors that play an important role in fixation selection, such as the interplay of scene content and scanpath history. The modular nature of DeepGaze III allows us to conduct ablation studies, which show that scene content has a stronger effect on fixation selection than previous scanpath history in our main dataset. In addition, we can use the model to identify scenes for which the relative importance of these sources of information differs most. These data-driven insights would be difficult to accomplish with simpler models that do not have the computational capacity to capture such patterns, demonstrating an example of how deep learning advances can be used to contribute to scientific understanding.
Affiliation(s)
- Thomas S A Wallis
- Technical University of Darmstadt, Institute of Psychology and Centre for Cognitive Science, Darmstadt, Germany
11
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022;22:12. PMID: 35323868. PMCID: PMC8963670. DOI: 10.1167/jov.22.4.12.
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality in which participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contributions to visual attention. The time-course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free-viewing; the presence of masks did not significantly impact head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
- Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France
- http://pagesperso.ls2n.fr/~lecallet-p/index.html
12
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual fields (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
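The priority-map combination the abstract describes can be sketched in a few lines. This is not GS6 itself: the five guidance maps below are random placeholders and the weights are assumed values for illustration; only the structure (weighted sum of preattentive sources, attention deployed to the peak) follows the description above:

```python
import numpy as np

rng = np.random.default_rng(42)
H, W = 8, 8  # coarse spatial grid standing in for the visual field

# Five hypothetical guidance maps, each with values in [0, 1)
sources = {
    "top_down":  rng.random((H, W)),   # match to the target template
    "bottom_up": rng.random((H, W)),   # local feature contrast
    "history":   rng.random((H, W)),   # priming from prior trials
    "reward":    rng.random((H, W)),   # learned value of locations
    "scene":     rng.random((H, W)),   # syntactic/semantic scene priors
}
weights = {"top_down": 2.0, "bottom_up": 1.0, "history": 0.5,
           "reward": 0.5, "scene": 1.5}   # assumed, not GS6's fitted values

# Priority map = weighted sum of the guidance sources
priority = sum(weights[k] * sources[k] for k in sources)

# Selective attention is deployed to the currently most active location
y, x = np.unravel_index(np.argmax(priority), priority.shape)
print(f"next attended location: ({y}, {x})")
```

In the full model this selection repeats roughly 20 times per second while the map evolves, and the chosen item enters an asynchronous diffusion stage for recognition; the static snapshot above shows only one selection step.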
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
13
Okada KI, Miura K, Fujimoto M, Morita K, Yoshida M, Yamamori H, Yasuda Y, Iwase M, Inagaki M, Shinozaki T, Fujita I, Hashimoto R. Impaired inhibition of return during free-viewing behaviour in patients with schizophrenia. Sci Rep 2021;11:3237. PMID: 33547381. PMCID: PMC7865073. DOI: 10.1038/s41598-021-82253-w.
Abstract
Schizophrenia affects various aspects of cognitive and behavioural functioning. Eye movement abnormalities are commonly observed in patients with schizophrenia (SZs). Here we examined whether such abnormalities reflect an anomaly in inhibition of return (IOR), the mechanism that inhibits orienting to previously fixated or attended locations. We analyzed spatiotemporal patterns of eye movement during free-viewing of visual images including natural scenes, geometrical patterns, and pseudorandom noise in SZs and healthy control participants (HCs). SZs made saccades to previously fixated locations more frequently than HCs. The time lapse from the preceding saccade was longer for return saccades than for forward saccades in both SZs and HCs, but the difference was smaller in SZs. SZs explored a smaller area than HCs. Generalized linear mixed-effect model analysis indicated that the frequent return saccades served to confine SZs' visual exploration to localized regions. The higher probability of return saccades in SZs was related to cognitive decline after disease onset but not to the dose of prescribed antipsychotics. We conclude that SZs exhibited attenuated IOR under free-viewing conditions, which led to restricted scene scanning. IOR attenuation will be a useful clue for detecting impairment in attention/orienting control and accompanying cognitive decline in schizophrenia.
Affiliation(s)
- Ken-ichi Okada
- Graduate School of Frontier Biosciences, Osaka University, Osaka, 565-0871, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Osaka, 565-0871, Japan
- Present address: Department of Physiology, Hokkaido University School of Medicine, Hokkaido, 060-8638, Japan
- Kenichiro Miura
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Ogawa-Higashi 4-1-1, Kodaira, Tokyo, 187-8553, Japan
- Michiko Fujimoto
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Ogawa-Higashi 4-1-1, Kodaira, Tokyo, 187-8553, Japan
- Department of Psychiatry, Osaka University Graduate School of Medicine, Osaka, 565-0871, Japan
- Kentaro Morita
- Department of Rehabilitation, University of Tokyo Hospital, Tokyo, 113-8655, Japan
- Masatoshi Yoshida
- Department of Developmental Physiology, National Institute for Physiological Sciences, Aichi, 444-8585, Japan
- School of Life Science, The Graduate University for Advanced Studies, Kanagawa, 240-0193, Japan
- Center for Human Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 060-0812, Japan
- Hidenaga Yamamori
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Ogawa-Higashi 4-1-1, Kodaira, Tokyo, 187-8553, Japan
- Department of Psychiatry, Osaka University Graduate School of Medicine, Osaka, 565-0871, Japan
- Japan Community Health Care Organization Osaka Hospital, Osaka, 553-0003, Japan
- Yuka Yasuda
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Ogawa-Higashi 4-1-1, Kodaira, Tokyo, 187-8553, Japan
- Life Grow Brilliant Mental Clinic, Medical Corporation Foster, Osaka, 530-0012, Japan
- Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, Osaka, 565-0871, Japan
- Masao Iwase
- Department of Psychiatry, Osaka University Graduate School of Medicine, Osaka, 565-0871, Japan
- Mikio Inagaki
- Graduate School of Frontier Biosciences, Osaka University, Osaka, 565-0871, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Osaka, 565-0871, Japan
- Takashi Shinozaki
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Osaka, 565-0871, Japan
- Graduate School of Information Science and Technology, Osaka University, Osaka, 565-0871, Japan
- Ichiro Fujita
- Graduate School of Frontier Biosciences, Osaka University, Osaka, 565-0871, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Osaka, 565-0871, Japan
- Ryota Hashimoto
- Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Ogawa-Higashi 4-1-1, Kodaira, Tokyo, 187-8553, Japan
- Department of Psychiatry, Osaka University Graduate School of Medicine, Osaka, 565-0871, Japan
- Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, Osaka, 565-0871, Japan
| |
14
Schwetlick L, Rothkegel LOM, Trukenbrod HA, Engbert R. Modeling the effects of perisaccadic attention on gaze statistics during scene viewing. Commun Biol 2020; 3:727. [PMID: 33262536 PMCID: PMC7708631 DOI: 10.1038/s42003-020-01429-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 10/21/2020] [Indexed: 11/09/2022] Open
Abstract
How we perceive a visual scene depends critically on the selection of gaze positions. For this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of a saccade drive our gaze from one position to the next. These two related research areas on attention are typically treated as separate, both theoretically and experimentally. Here we link the two research areas by demonstrating that perisaccadic attentional dynamics improve predictions of scan-path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan-path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations, as well as inter-individual differences, using Bayesian inference. Our results therefore lend support to the relevance of perisaccadic attention to gaze statistics.
Affiliation(s)
- Lisa Schwetlick
- Department of Psychology, University of Potsdam, 14469, Potsdam, Germany.
- DFG Collaborative Research Center 1294, University of Potsdam, 14469, Potsdam, Germany.
- Ralf Engbert
- Department of Psychology, University of Potsdam, 14469, Potsdam, Germany
- DFG Collaborative Research Center 1294, University of Potsdam, 14469, Potsdam, Germany
- Research Focus Cognitive Science, University of Potsdam, 14469, Potsdam, Germany
15
David E, Beitner J, Võ MLH. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci 2020; 10:E841. [PMID: 33198116 PMCID: PMC7696943 DOI: 10.3390/brainsci10110841] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 11/09/2020] [Accepted: 11/10/2020] [Indexed: 11/19/2022] Open
Abstract
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6 deg. of radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we separated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and clarify the roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information concerning how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
Affiliation(s)
- Erwan David
- Scene Grammar Lab, Department of Psychology, Theodor-W.-Adorno-Platz 6, Johann Wolfgang Goethe-Universität, 60323 Frankfurt, Germany
16
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. Predicting artificial visual field losses: A gaze-based inference study. J Vis 2020; 19:22. [PMID: 31868896 DOI: 10.1167/19.14.22] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehabilitation therapies, and monitoring. In this experiment, 54 participants took part in a free-viewing task of visual scenes while experiencing artificial scotomas (central and peripheral) of varying radii in a gaze-contingent paradigm. We studied the importance of a set of gaze features as predictors to best differentiate between artificial scotoma conditions. Linear mixed models were used to measure differences between scotoma conditions. Correlation and factorial analyses revealed redundancies in our data. Finally, hidden Markov models and recurrent neural networks were implemented as classifiers in order to measure the predictive usefulness of gaze features. The results show separate saccade direction biases depending on scotoma type. We demonstrate that saccade relative angle, amplitude, and peak velocity are the best features on the basis of which to distinguish between artificial scotomas in a free-viewing task. Finally, we discuss the usefulness of our protocol and analyses as a gaze-feature identifier tool that discriminates between artificial scotomas of different types and sizes.
Affiliation(s)
- Pierre Lebranchu
- University of Nantes and Nantes University Hospital, Nantes, France
17
Backhaus D, Engbert R, Rothkegel LOM, Trukenbrod HA. Task-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking. J Vis 2020; 20:3. [PMID: 32392286 PMCID: PMC7409614 DOI: 10.1167/jov.20.5.3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 12/15/2019] [Indexed: 11/24/2022] Open
Abstract
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control of the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device, and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
Affiliation(s)
- Daniel Backhaus
- Experimental and Biological Psychology, University of Potsdam, Potsdam, Germany
- Ralf Engbert
- Experimental and Biological Psychology, University of Potsdam, Potsdam, Germany
- Hans A. Trukenbrod
- Experimental and Biological Psychology, University of Potsdam, Potsdam, Germany
18
Prior target locations attract overt attention during search. Cognition 2020; 201:104282. [PMID: 32387723 DOI: 10.1016/j.cognition.2020.104282] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 03/25/2020] [Accepted: 03/27/2020] [Indexed: 01/22/2023]
Abstract
A key question about visual search is how we guide attention to objects that are relevant to our goals. Traditionally, theories of visual attention have emphasized guidance by explicit knowledge of the target feature. But there is growing evidence that attention is also implicitly guided by prior experience. One such example is the phenomenon of location priming, whereby attention is automatically allocated to the location where the search target was previously found. Problematically, much of the previous evidence for location priming has been disputed because it relies exclusively on manual response time, leaving unclear the relative contributions of attentional allocation and later cognitive processes to the priming effect. The current study addressed this issue by measuring shifts of gaze, which provide a more direct measure of attentional orienting. In five experiments, first saccades were strongly attracted to the target location from the previous trial, even though this location was not predictive of the target location on the current trial. This oculomotor priming effect was so strong that it effectively disrupted attentional guidance to the search target. The results suggest that memories of recent experience can powerfully influence attentional allocation.
19
Trukenbrod HA, Barthelmé S, Wichmann FA, Engbert R. Spatial statistics for gaze patterns in scene viewing: Effects of repeated viewing. J Vis 2019; 19:5. [PMID: 31173630 DOI: 10.1167/19.6.5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated by observers even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4° is stronger than expected from chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.
Affiliation(s)
- Simon Barthelmé
- Centre National de la Recherche Scientifique, Gipsa-lab, Grenoble Institut National Polytechnique, France
- Felix A Wichmann
- Eberhard Karls University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
20
Batten JP, Smith TJ. Saccades predict and synchronize to visual rhythms irrespective of musical beats. VISUAL COGNITION 2019; 26:695-718. [PMID: 30828706 PMCID: PMC6382286 DOI: 10.1080/13506285.2018.1544181] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Accepted: 10/30/2018] [Indexed: 11/15/2022]
Abstract
Music has been shown to entrain movement. Saccades, among the body's most frequent movements, are arguably subject to a timer that may also be susceptible to musical entrainment. We developed a continuous and highly controlled visual search task and varied the timing of the search target presentation: it was either gaze-contingent, tap-contingent, or visually timed. We found: (1) explicit control of saccadic timing is limited to gross duration variations and is imprecisely synchronized; (2) saccadic timing does not implicitly entrain to musical beats, even when closely aligned in phase; (3) eye movements predict visual onsets produced by motor movements (finger taps) and externally timed sequences, beginning fixation prior to visual onset; (4) eye-movement timing can be rhythmic, synchronizing to both motor-produced and externally timed visual sequences, each unaffected by musical beats. These results provide evidence that saccadic timing is sensitive to the temporal demands of visual tasks and impervious to influence from musical beats.
Affiliation(s)
- Jonathan P Batten
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
21
Rothkegel LOM, Schütt HH, Trukenbrod HA, Wichmann FA, Engbert R. Searchers adjust their eye-movement dynamics to target characteristics in natural scenes. Sci Rep 2019; 9:1635. [PMID: 30733470 PMCID: PMC6367441 DOI: 10.1038/s41598-018-37548-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2018] [Accepted: 12/07/2018] [Indexed: 11/30/2022] Open
Abstract
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades that maintained direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye-movement dynamics to the search target efficiently, since previous research has shown that low spatial frequencies are visible farther into the periphery than high spatial frequencies. We interpret the saccade-direction specificity of our effects as reflecting an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.
Affiliation(s)
- Lars O M Rothkegel
- Department of Psychology, University of Potsdam, Karl-Liebknechtstraße 24/25, 14476, Potsdam, Germany.
- Heiko H Schütt
- Department of Psychology, University of Potsdam, Karl-Liebknechtstraße 24/25, 14476, Potsdam, Germany
- Neural Information Processing Group, University of Tübingen, Sand 6, 72076, Tübingen, Germany
- Hans A Trukenbrod
- Department of Psychology, University of Potsdam, Karl-Liebknechtstraße 24/25, 14476, Potsdam, Germany
- Felix A Wichmann
- Neural Information Processing Group, University of Tübingen, Sand 6, 72076, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076, Tübingen, Germany
- Ralf Engbert
- Department of Psychology, University of Potsdam, Karl-Liebknechtstraße 24/25, 14476, Potsdam, Germany
22
Lüthold P, Lao J, He L, Zhou X, Caldara R. Waldo reveals cultural differences in return fixations. VISUAL COGNITION 2019. [DOI: 10.1080/13506285.2018.1561567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Patrick Lüthold
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Lingnan He
- School of Communication and Design, Sun Yat-Sen University, Guangzhou, People’s Republic of China
- Xinyue Zhou
- School of Management, Zhejiang University, Zhejiang, People’s Republic of China
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
23
Abstract
Saccadic momentum refers to the increased probability of making a saccade in a forward direction relative to the previous saccade. During visual search and free-viewing conditions, saccadic probability falls in a gradient from forward to backward directions. It has been considered to reflect an oculomotor bias for a continuing motor plan. Here we report that a saccadic momentum gradient is observed in nonhuman primate behavior and in the visual responses of cortical area V4 neurons during a conjunction-style visual search task. This result suggests that saccadic momentum arises in part from a biased spatial distribution of visual responses to stimuli. The effect is independent of feature-based selective attention and is overridden by directed spatial attention. The implications of saccadic momentum for search guidance are much broader and more robust than inhibition of return's presumed role in preventing refixation of recent locations.
Affiliation(s)
- Brad C Motter
- Veterans Affairs Medical Center, Syracuse, NY, USA
- Department of Neuroscience and Physiology, State University of New York Upstate Medical University, Syracuse, NY, USA
24
Megardon G, Ludwig C, Sumner P. Trajectory curvature in saccade sequences: spatiotopic influences vs. residual motor activity. J Neurophysiol 2017; 118:1310-1320. [PMID: 28592684 DOI: 10.1152/jn.00110.2017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2017] [Revised: 05/02/2017] [Accepted: 06/01/2017] [Indexed: 11/22/2022] Open
Abstract
When decisions drive saccadic eye movements, traces of the decision process can be inferred from the movement trajectories. For example, saccades can curve away from distractor stimuli, which was thought to reflect cortical inhibition biasing activity in the superior colliculus. Recent neurophysiological work does not support this theory, and two recent models have replaced top-down inhibition with lateral interactions in the superior colliculus or neural fatigue in the brainstem saccadic burst generator. All current models operate in retinotopic coordinates and are based on single-saccade paradigms. To extend these models to sequences of saccades, we assessed whether and how saccade curvature depends on previously fixated locations and the direction of previous saccades. With a two-saccade paradigm, we first demonstrated that second saccades curved away from the initial fixation stimulus. Furthermore, by varying the time from fixation offset and the intersaccadic duration, we distinguished the extent of curvature originating from the spatiotopic representation of the previous fixation location or from residual motor activity of the previous saccade. Results suggest that both factors drive curvature, and we discuss how these effects could be implemented in current models. In particular, we propose that the collicular retinotopic maps receive an excitatory spatiotopic update from the lateral intraparietal region.
NEW & NOTEWORTHY Saccades curve away from locations of previous fixation. Varying stimulus timing demonstrates the effects of both (1) spatiotopic representation and (2) residual motor activity from previous saccades. The spatiotopic effect can be explained if current models are augmented with an excitatory top-down spatiotopic signal.
Affiliation(s)
- Geoffrey Megardon
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- School of Psychology, Cardiff University, Tower Building, Cardiff, United Kingdom
- Casimir Ludwig
- School of Experimental Psychology, University of Bristol, Bristol, United Kingdom
- Petroc Sumner
- School of Psychology, Cardiff University, Tower Building, Cardiff, United Kingdom
25
Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task. Psychon Bull Rev 2017; 24:370-392. [PMID: 27480268 PMCID: PMC5390002 DOI: 10.3758/s13423-016-1124-4] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation’s duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general.
26
Niimi R, Shimada H, Yokosawa K. Inhibition of Return Decays Rapidly when Familiar Objects are Used. JAPANESE PSYCHOLOGICAL RESEARCH 2017. [DOI: 10.1111/jpr.12149] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
27
Influence of initial fixation position in scene viewing. Vision Res 2016; 129:33-49. [PMID: 27771330 DOI: 10.1016/j.visres.2016.09.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2016] [Revised: 07/30/2016] [Accepted: 09/16/2016] [Indexed: 11/21/2022]
Abstract
During scene perception our eyes generate complex sequences of fixations. Predictors of fixation locations are bottom-up factors such as luminance contrast, top-down factors like viewing instruction, and systematic biases, e.g., the tendency to place fixations near the center of an image. However, comparatively little is known about the dynamics of scanpaths after experimental manipulation of specific fixation locations. Here we investigate the influence of initial fixation position on subsequent eye-movement behavior on an image. We presented 64 colored photographs to participants who started their scanpaths from one of two experimentally controlled positions in the right or left part of an image. Additionally, we used computational models to predict the images' fixation locations and classified them as balanced images or images with high conspicuity on either the left or right side of a picture. The manipulation of the starting position influenced viewing behavior for several seconds and produced a tendency to overshoot to the image side opposite to the starting position. Possible mechanisms for the generation of this overshoot were investigated using numerical simulations of statistical and dynamical models. Our model comparisons show that inhibitory tagging is a viable mechanism for dynamical planning of scanpaths.
28
Abstract
In oculomotor selection, each saccade is thought to be automatically biased toward uninspected locations, inhibiting the inefficient behavior of repeatedly refixating the same objects. This automatic bias is related to inhibition of return (IOR). Although IOR seems an appealing property that increases efficiency in visual search, such a mechanism would not be efficient in other tasks. Indeed, evidence for additional, more flexible control over refixations has been provided. Here, we investigated whether task demands implicitly affect the rate of refixations. We measured the probability of refixations after series of six binary saccadic decisions under two conditions: visual search and free viewing. The rate of refixations seems influenced by two effects. One effect is related to the rate of intervening fixations, specifically, more refixations were observed with more intervening fixations. In addition, we observed an effect of task set, with fewer refixations in visual search than in free viewing. Importantly, the history-related effect was more pronounced when sufficient spatial references were provided, suggesting that this effect is dependent on spatiotopic encoding of previously fixated locations. This known history-related bias in gaze direction is not the primary influence on the refixation rate. Instead, multiple factors, such as task set and spatial references, assert strong influences as well.
29
He T, Ding Y, Wang Z. Environment- and eye-centered inhibitory cueing effects are both observed after a methodological confound is eliminated. Sci Rep 2015; 5:16586. [PMID: 26565380 PMCID: PMC4643241 DOI: 10.1038/srep16586] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Accepted: 10/16/2015] [Indexed: 12/02/2022] Open
Abstract
Inhibition of return (IOR), typically explored in cueing paradigms, is a performance cost associated with previously attended locations and has been suggested as a crucial attentional mechanism that biases orienting towards novelty. In their seminal IOR paper, Posner and Cohen (1984) showed that IOR is coded in spatiotopic, or environment-centered, coordinates. Recent studies, however, have consistently reported IOR effects in both spatiotopic and retinotopic (eye-centered) coordinates. One overlooked methodological confound of all previous studies is that the spatial gradient of IOR is not considered when selecting the baseline for estimating IOR effects. This methodological issue makes it difficult to tell whether the IOR effects reported in previous studies were coded in retinotopic or spatiotopic coordinates, or in both. The present study addresses this issue by incorporating no-cue trials into a modified cueing paradigm in which a gaze shift always intervenes between the cue and the target. The results revealed that (a) IOR is indeed coded in both spatiotopic and retinotopic coordinates, and (b) the methodology of previous work may have underestimated spatiotopic and retinotopic IOR effects.
Affiliation(s)
- Tao He
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Yun Ding
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
- Zhiguo Wang
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, 311121, China
Collapse
|
30
|
Thompson C, Howting L, Hills P. The transference of visual search between two unrelated tasks: Measuring the temporal characteristics of carry-over. Q J Exp Psychol (Hove) 2015; 68:2255-73. [DOI: 10.1080/17470218.2015.1013042]
Abstract
Investigations into the persistence of top-down control settings do not accurately reflect the nature of dynamic tasks. They typically involve extended practice with an initial task, and this initial task usually shares similar stimuli with a second task. Recent work shows that visual attention and search can be affected by limited exposure to a preceding, unrelated task, and the current study explored the temporal characteristics of this “carry-over” effect. Thirty-four participants completed one, four, or eight simple letter searches and then searched a natural scene. The spatial layout of letters influenced spread of search in the pictures, and this was further impacted by the time spent in the initial task, yet the carry-over effect diminished quickly. The results have implications for theories of top-down control and models that attempt to predict search in natural scenes. They are also relevant to real-world tasks in which performance is closely related to visual attention and search.
Affiliation(s)
- Peter Hills
- Psychology Research Group, Bournemouth University, Poole, UK

31
Kietzmann TC, König P. Effects of contextual information and stimulus ambiguity on overt visual sampling behavior. Vision Res 2015; 110:76-86. [PMID: 25805148] [DOI: 10.1016/j.visres.2015.02.023]
Abstract
The sampling of our visual environment through saccadic eye movements is an essential function of the brain, allowing us to overcome the limits of peripheral vision. Understanding which parts of a scene attract overt visual attention is subject to intense research, and considerable progress has been made in unraveling the underlying cortical mechanisms. In contrast to spatial aspects, however, relatively little is understood about temporal aspects of overt visual sampling. At every fixation, the oculomotor system faces the decision whether to keep exploring different aspects of an object or scene or whether to remain fixated to allow for in-depth cortical processing - a situation that can be understood in terms of an exploration-exploitation dilemma. To improve our understanding of the factors involved in these decisions, we here investigate how the level of visual information, experimentally manipulated by scene context and stimulus ambiguity, changes the sampling behavior preceding the recognition of centrally presented ambiguous and disambiguated objects. Behaviorally, we find that context, although only presented until the first voluntary saccade, biases the perceptual outcome and significantly reduces reaction times. Importantly, we find that increased information about an object significantly alters its visual exploration, as evident through increased fixation durations and reduced saccade amplitudes. These results demonstrate that the initial sampling of an object, preceding its recognition, is subject to change based on the amount of information available in the system: increased evidence for its identity biases the exploration-exploitation strategy towards in-depth analyses.
Affiliation(s)
- T C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- P König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

32
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany

33
MacInnes WJ, Krüger HM, Hunt AR. Just passing through? Inhibition of return in saccadic sequences. Q J Exp Psychol (Hove) 2015; 68:402-16. [DOI: 10.1080/17470218.2014.945097]
Abstract
Responses tend to be slower to previously fixated spatial locations, an effect known as “inhibition of return” (IOR). Saccades cannot be assumed to be independent, however, and saccade sequences programmed in parallel differ from independent eye movements. We measured the speed of both saccadic and manual responses to probes appearing in previously fixated locations when those locations were fixated as part of either parallel or independent saccade sequences. Saccadic IOR was observed in independent but not parallel saccade sequences, while manual IOR was present in both parallel and independent sequence types. Saccadic IOR was also short-lived, and dissipated with delays of more than ∼1500 ms between the intermediate fixation and the probe onset. The results confirm that the characteristics of IOR depend critically on the response modality used for measuring it, with saccadic and manual responses giving rise to motor and attentional forms of IOR, respectively. Saccadic IOR is relatively short-lived and is not observed at intermediate locations of parallel saccade sequences, while attentional IOR is long-lasting and consistent for all sequence types.
Affiliation(s)
- W. Joseph MacInnes
- School of Psychology, University of Aberdeen, Old Aberdeen, UK
- Faculty of Psychology, Higher School of Economics (HSE), Moscow, Russian Federation
- Hannah M. Krüger
- School of Psychology, University of Aberdeen, Old Aberdeen, UK
- Centre Attention and Vision, Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Amelia R. Hunt
- School of Psychology, University of Aberdeen, Old Aberdeen, UK

34
Haji-Abolhassani A, Clark JJ. An inverse Yarbus process: predicting observers' task from eye movement patterns. Vision Res 2014; 103:127-42. [PMID: 25175112] [DOI: 10.1016/j.visres.2014.08.014]
Abstract
In this paper we develop a probabilistic method to infer the visual task of a viewer given measured eye movement trajectories. The method is based on the theory of hidden Markov models (HMMs), employing a first-order Markov process to predict the coordinates of fixations given the task. The prediction confidence of each task-dependent model is used in a Bayesian inference formulation, whereby the task with the maximum a posteriori (MAP) probability is selected. We applied this technique to a challenging dataset consisting of eye movement trajectories obtained from subjects viewing monochrome images of real scenes while tasked with answering questions about the scenes. The results show that the HMM approach, combined with a clustering technique, can be a reliable way to infer the visual task from eye movement data.
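The inference scheme described here, scoring a scanpath under each task-specific HMM and picking the MAP task, can be sketched as below. The two-task, two-cluster models and all the numbers are toy assumptions for illustration, not the models fitted in the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discretized fixation sequence under one
    task-specific HMM, via the scaled forward algorithm.
    pi: initial state probs, A: transition matrix, B[state, symbol]:
    emission probs (symbols = fixation clusters)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

def map_task(obs, models, priors):
    """Bayesian model selection: return the task whose HMM gives the
    maximum a posteriori probability for the observed scanpath."""
    scores = {task: forward_loglik(obs, *m) + np.log(priors[task])
              for task, m in models.items()}
    return max(scores, key=scores.get)

# Hypothetical example: "search" favors hopping between two fixation
# clusters, "memorize" favors dwelling within one cluster.
B = np.array([[0.9, 0.1], [0.1, 0.9]])   # observed cluster ~ hidden state
pi = np.array([0.5, 0.5])
models = {
    "search":   (pi, np.array([[0.1, 0.9], [0.9, 0.1]]), B),
    "memorize": (pi, np.array([[0.9, 0.1], [0.1, 0.9]]), B),
}
priors = {"search": 0.5, "memorize": 0.5}

alternating = [0, 1, 0, 1, 0, 1]   # hopping scanpath
dwelling = [0, 0, 0, 0, 0, 0]      # dwelling scanpath
```

Under these toy models, `map_task(alternating, models, priors)` selects "search" and `map_task(dwelling, models, priors)` selects "memorize", mirroring the task-classification logic of the paper.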
Affiliation(s)
- Amin Haji-Abolhassani
- Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec H3A 0E9, Canada
- James J Clark
- Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec H3A 0E9, Canada

35
Wass SV, Smith TJ. Individual Differences in Infant Oculomotor Behavior During the Viewing of Complex Naturalistic Scenes. Infancy 2014; 19:352-384. [PMID: 25635173] [PMCID: PMC4286103] [DOI: 10.1111/infa.12049]
Abstract
Little research hitherto has examined how individual differences in attention, as assessed using standard experimental paradigms, relate to individual differences in how attention is spontaneously allocated in more naturalistic contexts. Here, we analyzed the time intervals between refoveating eye movements (fixation durations) while typically developing 11-month-old infants viewed a 90-min battery ranging from complex dynamic to noncomplex static materials. The same infants also completed experimental assessments of cognitive control, psychomotor reaction times (RT), processing speed (indexed via peak look during habituation), and arousal (indexed via tonic pupil size). High test-retest reliability was found for fixation duration, across testing sessions and across types of viewing material. Increased cognitive control and increased arousal were associated with reduced variability in fixation duration. For fixations to dynamic stimuli, in which a large proportion of saccades may be exogenously cued, we found that psychomotor RT measures were most predictive of mean fixation duration; for fixations to static stimuli, in contrast, in which there is less exogenous attentional capture, we found that psychomotor RT did not predict performance, but that measures of cognitive control and arousal did. The implications of these findings for understanding the development of attentional control in naturalistic settings are discussed.
Affiliation(s)
- Tim J Smith
- School of Psychological Sciences, Birkbeck College, University of London

36
McDonnell GP, Mills M, McCuller L, Dodd MD. How does implicit learning of search regularities alter the manner in which you search? Psychol Res 2014; 79:183-93. [PMID: 24558017] [DOI: 10.1007/s00426-014-0546-8]
37

38

39
McBride S, Huelse M, Lee M. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system. PLoS One 2013; 8:e54585. [PMID: 23437044] [PMCID: PMC3577816] [DOI: 10.1371/journal.pone.0054585]
Abstract
Computational visual attention systems have been constructed so that robots and other devices can detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. Presenting these requirements clearly was the primary objective of this study, attained through two phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, and 2) implementation and validation of the model in robotic hardware (as a representative active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information at a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, it lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
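Several of the seven requirements (convergence onto a priority map, IOR spatial memory, a saccade threshold) can be sketched in a few lines. The weighting scheme, array sizes, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def priority_map(bottom_up, relevance, ior_memory, w_td=0.6):
    """Requirement 4: converge bottom-up salience and top-down task
    relevance onto one centralized priority map, then suppress recently
    fixated locations held in spatial memory (requirement 2: medium-term
    inhibition of return). w_td is an assumed top-down weight."""
    combined = (1.0 - w_td) * bottom_up + w_td * relevance
    return combined * (1.0 - ior_memory)

def next_saccade(priority, threshold=0.5):
    """Requirement 5: a threshold function to elicit saccade action.
    Returns the peak location if it exceeds threshold, else None."""
    idx = np.unravel_index(np.argmax(priority), priority.shape)
    return idx if priority[idx] > threshold else None

# Toy 3x3 visual field: a salient-but-irrelevant corner competes with a
# task-relevant corner; all values are illustrative.
bottom_up = np.zeros((3, 3)); bottom_up[0, 0] = 0.9
relevance = np.zeros((3, 3)); relevance[2, 2] = 0.9
no_ior = np.zeros((3, 3))

target = next_saccade(priority_map(bottom_up, relevance, no_ior))
# With these weights the task-relevant location (2, 2) wins; if IOR
# memory suppresses it, nothing crosses threshold and no saccade fires.
```

The multiplicative IOR term is one simple way to realize "medium-term inhibition" over a spatial memory; an implementation could equally subtract a decaying inhibition map.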
Affiliation(s)
- Sebastian McBride
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom

40
Saccadic momentum and facilitation of return saccades contribute to an optimal foraging strategy. PLoS Comput Biol 2013; 9:e1002871. [PMID: 23341766] [PMCID: PMC3547797] [DOI: 10.1371/journal.pcbi.1002871]
Abstract
The interest in saccadic IOR is fueled by the hypothesis that it serves a clear functional purpose in the selection of fixation points: the facilitation of foraging. In this study, we arrive at a different interpretation of saccadic IOR. First, we find that return saccades are performed much more often than expected from the statistical properties of saccades and saccade pairs. Second, we find that fixation durations before a saccade are modulated by the relative angle of the saccade, but return saccades show no sign of an additional temporal inhibition. Thus, we do not find temporal saccadic inhibition of return. Interestingly, we find that return locations are more salient, according to empirically measured saliency (locations that are fixated by many observers) as well as stimulus-dependent saliency (defined by image features), than regular fixation locations. These results, together with the finding that return saccades increase the match of individual trajectories with a grand total priority map, provide evidence that return saccades are part of a fixation-selection strategy that trades off exploration and exploitation. Sometimes humans look at the same location twice. To appreciate the importance of this inconspicuous statement, consider that we move our eyes several billion (10⁹) times during our lives and that looking at something is a necessary condition for conscious visual awareness. Thus, understanding why and how we move our eyes provides a window into our mental life. Here we investigate one heavily discussed aspect of humans' fixation-selection strategy: whether it inhibits returning to previously fixated locations. We analyze a large dataset (more than 550,000 fixations from 235 subjects) and find that returning to previously fixated locations happens much more often than expected from the statistical properties of eye-movement trajectories. Furthermore, the locations we return to are not ordinary: they are more salient than locations that we do not return to. Thus, the inconspicuous statement that we look at the same locations twice reveals an important aspect of our strategy for selecting fixation points: we trade off exploring our environment against making sure that we have fully comprehended its relevant parts.
41

42
Oculomotor inhibition of return: How soon is it "recoded" into spatiotopic coordinates? Atten Percept Psychophys 2012; 74:1145-53. [DOI: 10.3758/s13414-012-0312-1]
43
Foulsham T, Kingstone A. Modelling the influence of central and peripheral information on saccade biases in gaze-contingent scene viewing. Visual Cognition 2012. [DOI: 10.1080/13506285.2012.680934]
44
Wang Z, Satel J, Klein RM. Sensory and motor mechanisms of oculomotor inhibition of return. Exp Brain Res 2012; 218:441-53. [DOI: 10.1007/s00221-012-3033-8]
45
Hollingworth A. Guidance of visual search by memory and knowledge. Nebraska Symposium on Motivation 2012; 59:63-89. [PMID: 23437630] [DOI: 10.1007/978-1-4614-4794-8_4]
Abstract
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
46
Does oculomotor inhibition of return influence fixation probability during scene search? Atten Percept Psychophys 2011; 73:2384-98. [DOI: 10.3758/s13414-011-0191-x]
47

48
Saccadic context indicates information processing within visual fixations: evidence from event-related potentials and eye-movements analysis of the distractor effect. Int J Psychophysiol 2011; 80:54-62. [PMID: 21291920] [DOI: 10.1016/j.ijpsycho.2011.01.013]
Abstract
Attention, visual information processing, and oculomotor control are integrated functions of closely related brain mechanisms. Recently, it was shown that the processing of visual distractors appearing during a fixation is modulated by the amplitude of the preceding saccade (Pannasch & Velichkovsky, 2009). So far, this was demonstrated only at the behavioral level in terms of saccadic inhibition. The present study investigated distractor-related brain activity with cortical eye fixation-related potentials (EFRPs). Moreover, the following saccade was included as an additional classification criterion. Eye movements and EFRPs were recorded during free visual exploration of paintings. During some of the fixations, a visual distractor was shown as an annulus around the fixation position, 100 ms after the fixation onset. The saccadic context of a fixation was classified by its preceding and following saccade amplitudes, with the cut-off criterion set to 4° of visual angle. The prolongation of fixation duration induced by distractors was largest for fixations preceded and followed by short saccades. EFRP data revealed a difference in distractor-related P2 amplitude between the saccadic context conditions, following the same trend as in eye movements. Furthermore, influences of the following saccade amplitude on the latency of the saccadic inhibition and on the N1 amplitude were found. The EFRP results cannot be explained by the influence of saccades per se, since this bias was removed by subtracting the baseline from the distractor EFRP. Rather, the data suggest that saccadic context indicates differences in how information is processed within single visual fixations.
49
Kollmorgen S, Nortmann N, Schröder S, König P. Influence of low-level stimulus features, task dependent factors, and spatial biases on overt visual attention. PLoS Comput Biol 2010; 6:e1000791. [PMID: 20502672] [PMCID: PMC2873902] [DOI: 10.1371/journal.pcbi.1000791]
Abstract
Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. 
These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
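The redundancy analysis described above rests on semi-partial correlations: correlating the outcome with the part of one predictor that is orthogonal to the others. A minimal sketch, using synthetic data rather than the paper's measurements (the predictor names are only labels for the three salience measures):

```python
import numpy as np

def semipartial_corr(y, X, j):
    """Semi-partial correlation of predictor j with y: correlate y with
    the residual of X[:, j] after regressing it on the remaining
    predictors (plus an intercept). Values close to the full correlation
    indicate that predictor j is only slightly redundant with the rest."""
    others = np.column_stack([np.delete(X, j, axis=1), np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    residual = X[:, j] - others @ beta
    return float(np.corrcoef(y, residual)[0, 1])

# Illustrative synthetic data: "empirical salience" y built from two of
# three hypothetical predictors (low-level, task, spatial-bias).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.5, 0.0])   # third predictor contributes nothing

sp_used = semipartial_corr(y, X, 0)     # substantial unique contribution
sp_unused = semipartial_corr(y, X, 2)   # ~0: no unique contribution
```

Comparing each predictor's semi-partial coefficient with its full correlation, as the authors do, separates unique from shared explanatory power.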
Affiliation(s)
- Sepp Kollmorgen
- Institute of Neurobiopsychology, University of Osnabrück, Osnabrück, Germany
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Nora Nortmann
- Institute of Neurobiopsychology, University of Osnabrück, Osnabrück, Germany
- Sylvia Schröder
- Institute of Neurobiopsychology, University of Osnabrück, Osnabrück, Germany
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Peter König
- Institute of Neurobiopsychology, University of Osnabrück, Osnabrück, Germany

50
Wang Z, Klein RM. Searching for inhibition of return in visual search: A review. Vision Res 2010; 50:220-8. [DOI: 10.1016/j.visres.2009.11.013]