1
Biebl B, Arcidiacono E, Kacianka S, Rieger JW, Bengler K. Opportunities and Limitations of a Gaze-Contingent Display to Simulate Visual Field Loss in Driving Simulator Studies. Frontiers in Neuroergonomics 2022; 3:916169. [PMID: 38235462] [PMCID: PMC10790882] [DOI: 10.3389/fnrgo.2022.916169]
Abstract
Background Research on task performance under visual field loss is often limited by small and heterogeneous samples. Simulations of visual impairments hold the potential to address many of these challenges. Digitally altered pictures, glasses, and contact lenses with partial occlusions have been used in the past. One of the most promising methods is a gaze-contingent display that occludes parts of the visual field according to the current gaze position. In this study, the gaze-contingent paradigm was implemented in a static driving simulator to simulate visual field loss and to evaluate parallels in the resulting driving and gaze behavior in comparison to patients. Methods The sample comprised 15 participants without visual impairment. All subjects performed three drives: with full vision, with simulated left-sided homonymous hemianopia, and with simulated right-sided homonymous hemianopia. During each drive, the participants drove through an urban environment in which they maneuvered through intersections by crossing straight ahead, turning left, and turning right. Results The subjects reported reduced safety and increased workload during simulated visual field loss, which was reflected in reduced lane position stability and fewer large gaze movements. Initial compensatory strategies were found in the form of a dislocated gaze position and a fixation ratio shifted toward the blind side, which was more pronounced for right-sided visual field loss. During left-sided visual field loss, the participants showed a smaller horizontal range of gaze positions, longer fixation durations, and smaller saccadic amplitudes compared to right-sided homonymous hemianopia and, more distinctly, compared to normal vision.
Conclusion The results largely mirror reports from driving and visual search tasks under simulated and pathological homonymous hemianopia concerning driving and scanning challenges, initially adopted compensatory strategies, and driving safety. This supports the notion that gaze-contingent displays can be a useful addendum to driving simulator research on visual impairments, provided the results are interpreted with the methodological limitations and inherent differences from the pathological impairment in mind.
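The occlusion logic of such a gaze-contingent hemianopia simulation reduces to a per-pixel test against the current horizontal gaze position. A minimal sketch in Python (coordinate conventions and function names are illustrative, not from the paper):

```python
def hemianopia_mask(pixel_x: float, gaze_x: float, side: str) -> bool:
    """Return True if this pixel should be occluded for simulated
    homonymous hemianopia: everything left (or right) of the current
    horizontal gaze position is blanked on every display update."""
    if side == "left":
        return pixel_x < gaze_x
    elif side == "right":
        return pixel_x > gaze_x
    raise ValueError("side must be 'left' or 'right'")

# Example: gaze at x = 960 on a 1920-px-wide display
print(hemianopia_mask(100, 960, "left"))   # True: in the blind left field
print(hemianopia_mask(1500, 960, "left"))  # False: in the sighted right field
```

In a real simulator this test would run per frame against the latest eye-tracker sample, which is exactly why the display-update latency issues discussed in later entries matter.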
Affiliation(s)
- Bianca Biebl
- Chair of Ergonomics, School of Engineering and Design, Technical University of Munich, Garching, Germany
- Elena Arcidiacono
- Chair of Ergonomics, School of Engineering and Design, Technical University of Munich, Garching, Germany
- Severin Kacianka
- Chair of Software and Systems Engineering, Department of Informatics, Technical University of Munich, Garching, Germany
- Jochem W. Rieger
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Klaus Bengler
- Chair of Ergonomics, School of Engineering and Design, Technical University of Munich, Garching, Germany
2
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. [PMID: 35323868] [PMCID: PMC8963670] [DOI: 10.1167/jov.22.4.12]
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality, in which participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contribution to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free-viewing; the presence of masks did not significantly impact head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
- Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France
- http://pagesperso.ls2n.fr/~lecallet-p/index.html
3
Pan Y, Ge X, Ge L, Xu J. Using eye-controlled highlighting techniques to support both serial and parallel processing in visual search. Applied Ergonomics 2021; 97:103522. [PMID: 34261002] [DOI: 10.1016/j.apergo.2021.103522]
Abstract
Recent research has developed two eye-controlled highlighting techniques, namely, block highlight display (BHD) and single highlight display (SHD), that enhance information presentation based on a user's current gaze position. The present research aimed to investigate how these techniques facilitate mental processing of users' visual search in high information-density visual environments. In Experiment 1, 60 participants performed 3-, 6-, 9-, and 12-icon visual search tasks. The search times significantly increased as the number of icons increased with the SHD but not with the BHD. In Experiment 2, 40 participants performed a 49-icon visual search task. The search time was faster, and the fixation spatial density was lower with the BHD than with the SHD. These results suggested that the BHD supported parallel processing in the highlighted area and serial processing in the broader display area; thus, the BHD improved search performance compared to the SHD, which primarily supported serial processing.
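The two techniques differ only in how the highlighted set is selected around the gaze point. A minimal sketch of that selection step (icon layout, gaze coordinates, and the block radius are hypothetical values for illustration, not parameters from the paper):

```python
import math

def block_highlight(icons, gaze, radius):
    """BHD: highlight every icon whose centre falls within `radius`
    of the current gaze position (enables parallel processing of the
    highlighted block)."""
    return [i for i, (x, y) in enumerate(icons)
            if math.hypot(x - gaze[0], y - gaze[1]) <= radius]

def single_highlight(icons, gaze):
    """SHD: highlight only the single icon nearest to the gaze
    position (one item at a time, serial processing)."""
    return [min(range(len(icons)),
                key=lambda i: math.hypot(icons[i][0] - gaze[0],
                                         icons[i][1] - gaze[1]))]

icons = [(0, 0), (50, 0), (200, 0)]
print(block_highlight(icons, gaze=(10, 0), radius=100))  # → [0, 1]
print(single_highlight(icons, gaze=(10, 0)))             # → [0]
```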
Affiliation(s)
- Yunxian Pan
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China
- Xianliang Ge
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China
- Liezhong Ge
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China
- Jie Xu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China
4
Prunty JE, Keemink JR, Kelly DJ. Infants scan static and dynamic facial expressions differently. Infancy 2021; 26:831-856. [PMID: 34288344] [DOI: 10.1111/infa.12426]
Abstract
Despite facial expressions being inherently dynamic phenomena, much of our understanding of how infants attend to and scan them is based on static face stimuli. Here we investigate how six-, nine-, and twelve-month-old infants allocate their visual attention toward dynamic, interactive videos of the six basic emotional expressions, and compare their responses with static images of the same stimuli. We find that infants show clear differences in how they attend to and scan dynamic and static expressions, looking longer toward the dynamic faces and toward lower face regions. Infants across all age groups show differential interest in expressions and precise scanning of regions "diagnostic" for emotion recognition. These data also indicate that infants' attention toward dynamic expressions develops over the first year of life, including relative increases in interest and scanning precision for some negative facial expressions (e.g., anger, fear, and disgust).
Affiliation(s)
- David J Kelly
- School of Psychology, University of Kent, Canterbury, UK
5
Morales A, Costela FM, Woods RL. Saccade Landing Point Prediction Based on Fine-Grained Learning Method. IEEE Access 2021; 9:52474-52484. [PMID: 33981520] [PMCID: PMC8112574] [DOI: 10.1109/access.2021.3070511]
Abstract
The landing point of a saccade defines the new fixation region, the new region of interest. We asked whether it was possible to predict the saccade landing point early in this very fast eye movement. This work proposes a new algorithm based on LSTM networks and a fine-grained loss function for saccade landing point prediction in real-world scenarios. Predicting the landing point is a critical milestone toward reducing the problems caused by display-update latency in gaze-contingent systems that make real-time changes in the display based on eye tracking. Saccadic eye movements are some of the fastest human neuro-motor activities with angular velocities of up to 1,000°/s. We present a comprehensive analysis of the performance of our method using a database with almost 220,000 saccades from 75 participants captured during natural viewing of videos. We include a comparison with state-of-the-art saccade landing point prediction algorithms. The results obtained using our proposed method outperformed existing approaches with improvements of up to 50% error reduction. Finally, we analyzed some factors that affected prediction errors including duration, length, age, and user intrinsic characteristics.
Affiliation(s)
- Aythami Morales
- BiDA-Lab, Department of Electrical Engineering, Universidad Autonoma de Madrid, 28049 Madrid, Spain
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
- Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
6
David E, Beitner J, Võ MLH. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci 2020; 10:E841. [PMID: 33198116] [PMCID: PMC7696943] [DOI: 10.3390/brainsci10110841]
Abstract
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and general gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the roles of eye and head movements. We show that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information on how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
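The central/peripheral masking described in the abstract amounts to an eccentricity test against the current gaze position. A minimal sketch, assuming a simple flat coordinate system in degrees of visual angle and using the 6° radius mentioned above (the VR implementation in the paper is necessarily more involved):

```python
import math

def masked(point_deg, gaze_deg, mode, radius_deg=6.0):
    """Return True if a point (in degrees of visual angle) is occluded.
    mode='central' blanks a disc around gaze (simulated central loss);
    mode='peripheral' blanks everything outside that disc."""
    ecc = math.hypot(point_deg[0] - gaze_deg[0],
                     point_deg[1] - gaze_deg[1])
    if mode == "central":
        return ecc <= radius_deg
    elif mode == "peripheral":
        return ecc > radius_deg
    raise ValueError("mode must be 'central' or 'peripheral'")

print(masked((2, 0), (0, 0), "central"))      # True: inside the 6° mask
print(masked((10, 0), (0, 0), "central"))     # False: visible periphery
print(masked((10, 0), (0, 0), "peripheral"))  # True: beyond 6°
```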
Affiliation(s)
- Erwan David
- Scene Grammar Lab, Department of Psychology, Theodor-W.-Adorno-Platz 6, Johann Wolfgang-Goethe-Universität, 60323 Frankfurt, Germany
7
Pollmann S, Geringswald F, Wei P, Porracin E. Intact Contextual Cueing for Search in Realistic Scenes with Simulated Central or Peripheral Vision Loss. Transl Vis Sci Technol 2020; 9:15. [PMID: 32855862] [PMCID: PMC7422911] [DOI: 10.1167/tvst.9.8.15]
Abstract
Purpose Search in repeatedly presented visual search displays can benefit from implicit learning of the display items' spatial configuration, an effect named contextual cueing. Previously, contextual cueing was found to be reduced in observers with foveal or peripheral vision loss. Whereas this previous work used symbolic (T among L-shapes) search displays with arbitrary configurations, here we investigated search in realistic scenes. Search in meaningful realistic scenes may benefit much more from explicit memory of the target location. We hypothesized that this explicit recall of the target location considerably reduces the visuospatial working memory demands of search, thereby enabling efficient search guidance by learnt contextual cues in observers with vision loss. Methods Two experiments with gaze-contingent scotoma simulation (Experiment 1: central scotoma; Experiment 2: peripheral scotoma) were carried out with normally sighted observers (n = 39 and 40, respectively). Observers had to find a cup in pseudorealistic indoor scenes and discriminate the direction of the cup's handle. Results With both central and peripheral scotoma simulation, contextual cueing was observed in repeatedly presented configurations. Conclusions The data show that patients suffering from central or peripheral vision loss may benefit more from memory-guided visual search than would be expected from scotoma simulations and patient studies using abstract symbolic search displays. Translational Relevance In the assessment of visual search in patients with vision loss, semantically meaningless abstract search displays may yield insights into deficient search functions, but more realistic, meaningful search scenes are needed to assess whether search deficits can be compensated.
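Contextual cueing is conventionally quantified as the search-time advantage for repeated over novel configurations. A sketch of that computation (the reaction times below are invented for illustration, not data from the study):

```python
from statistics import mean

def contextual_cueing(rt_repeated, rt_novel):
    """Contextual-cueing effect: mean search time for novel displays
    minus mean search time for repeated displays (positive = benefit
    of implicitly learnt configurations)."""
    return mean(rt_novel) - mean(rt_repeated)

# Hypothetical per-trial search times in ms
repeated = [820, 790, 760, 740]
novel = [900, 880, 870, 850]
print(contextual_cueing(repeated, novel))  # → 97.5
```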
Affiliation(s)
- Stefan Pollmann
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
- Ping Wei
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Eleonora Porracin
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
8
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. Predicting artificial visual field losses: A gaze-based inference study. J Vis 2020; 19:22. [PMID: 31868896] [DOI: 10.1167/19.14.22]
Abstract
Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehabilitation therapies, and monitoring. In this experiment, 54 participants took part in a free-viewing task of visual scenes while experiencing artificial scotomas (central and peripheral) of varying radii in a gaze-contingent paradigm. We studied the importance of a set of gaze features as predictors to best differentiate between artificial scotoma conditions. Linear mixed models were used to measure differences between scotoma conditions. Correlation and factorial analyses revealed redundancies in our data. Finally, hidden Markov models and recurrent neural networks were implemented as classifiers to measure the predictive usefulness of gaze features. The results show distinct saccade direction biases depending on scotoma type. We demonstrate that the relative angle, amplitude, and peak velocity of saccades are the best features for distinguishing between artificial scotomas in a free-viewing task. Finally, we discuss the usefulness of our protocol and analyses as a gaze-feature identifier tool that discriminates between artificial scotomas of different types and sizes.
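The gaze features named above (saccade amplitude, direction, and peak velocity) can be computed directly from raw gaze samples. A minimal sketch, assuming noise-free samples at a fixed sampling interval (real pipelines would filter the signal and detect saccade on/offsets first):

```python
import math

def saccade_features(samples, dt):
    """Compute amplitude (deg), direction (deg), and peak velocity
    (deg/s) of one saccade from a list of (x, y) gaze samples in
    degrees of visual angle, sampled every `dt` seconds."""
    amplitude = math.hypot(samples[-1][0] - samples[0][0],
                           samples[-1][1] - samples[0][1])
    direction = math.degrees(math.atan2(samples[-1][1] - samples[0][1],
                                        samples[-1][0] - samples[0][0]))
    peak_velocity = max(
        math.hypot(x1 - x0, y1 - y0) / dt
        for (x0, y0), (x1, y1) in zip(samples, samples[1:]))
    return amplitude, direction, peak_velocity

# The "relative angle" feature is then the difference between the
# directions of two consecutive saccades.
samples = [(0.0, 0.0), (2.0, 0.0), (5.0, 0.0)]
amp, ang, vmax = saccade_features(samples, dt=0.002)
print(amp, ang, vmax)  # → 5.0 0.0 1500.0
```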
Affiliation(s)
- Pierre Lebranchu
- University of Nantes and Nantes University Hospital, Nantes, France
9
Otero-Millan J, Langston RE, Costela F, Macknik SL, Martinez-Conde S. Microsaccade generation requires a foveal anchor. J Eye Mov Res 2020; 12. [PMID: 33828756] [PMCID: PMC7962683] [DOI: 10.16910/jemr.12.6.14]
Abstract
Visual scene characteristics can affect various aspects of saccade and microsaccade dynamics. For example, blank visual scenes are known to elicit diminished saccade and microsaccade production, compared to natural scenes. Similarly, microsaccades are less frequent in the dark. Yet, the extent to which foveal versus peripheral visual information contribute to microsaccade production remains unclear: because microsaccade directions are biased towards covert attention locations, it follows that peripheral visual stimulation could suffice to produce regular microsaccade dynamics, even without foveal stimulation being present. Here we determined the characteristics of microsaccades as a function of foveal and/or peripheral visual stimulation, while human subjects conducted four types of oculomotor tasks (fixation, free viewing, guided viewing and passive viewing). Foveal information was either available, or made unavailable, by the presentation of simulated scotomas. We found foveal stimulation to be critical for microsaccade production, and peripheral stimulation, by itself, to be insufficient to yield normal microsaccades. In each oculomotor task, microsaccade production decreased when scotomas blocked foveal stimulation. Across comparable foveal stimulation conditions, the type of peripheral stimulation (static versus dynamic) moreover affected microsaccade production, with dynamic backgrounds resulting in lower microsaccadic rates than static backgrounds. These results indicate that a foveal visual anchor is necessary for normal microsaccade generation. Whereas peripheral visual stimulation, on its own, does not suffice for normal microsaccade production, it can nevertheless modulate microsaccadic characteristics. 
These findings extend our current understanding of the links between visual input and ocular motor control, and may therefore help improve the diagnosis and treatment of ophthalmic conditions that degrade central vision, such as age-related macular degeneration.
10
Guidance in Cinematic Virtual Reality-Taxonomy, Research Status and Challenges. Multimodal Technologies and Interaction 2019. [DOI: 10.3390/mti3010019]
Abstract
In Cinematic Virtual Reality (CVR), the viewer of an omnidirectional movie can freely choose the viewing direction while watching. Therefore, traditional filmmaking techniques for guiding the viewer's attention cannot be adapted directly to CVR. Practices such as panning or changing the frame are no longer defined by the filmmaker; rather, it is the viewer who decides where to look. In some stories it is necessary to show the viewer certain details that should not be missed, while at the same time the viewer's freedom to look around the scene should not be destroyed. Therefore, techniques are needed that guide the spectator's attention to visual information in the scene. Attention guiding also has the potential to improve the general viewing experience, since viewers will be less afraid of missing something when watching an omnidirectional movie to which attention-guiding techniques have been applied. In recent years there has been considerable research on attention guiding in images, movies, virtual reality, augmented reality, and CVR. We classify these methods and offer a taxonomy of attention-guiding methods. Discussing their different characteristics, we elaborate on the advantages and disadvantages, give recommendations for use cases, and apply the taxonomy to several examples of guiding methods.
11
Abstract
The capability of directing gaze to relevant parts of the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion, such as the probability of the target's location, the reduction of uncertainty, or the expected reward, is maximal. Subsequent studies established, however, that in some tasks humans instead direct their gaze to locations such that the criterion is expected to become maximal after the single next look. In tasks going beyond a single action, however, the entire action sequence may determine future rewards, necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning has been missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead more than one eye movement. We found clear evidence that subjects' behavior was better explained by the model of a planning observer than by a myopic, greedy observer that selects only a single saccade at a time. In particular, the location of our subjects' first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system's gaze selection agrees with optimal planning under uncertainty.
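The contrast between the myopic and the planning observer can be made concrete with a toy two-fixation problem in which the gain of the second look depends on where the first one went. A sketch with invented gain values (this is not the paper's probabilistic model, only an illustration of why greedy selection can be suboptimal):

```python
# Hypothetical information gains: first_gain for the initial fixation,
# second_gain conditioned on the (first, second) fixation pair.
first_gain = {"A": 5, "B": 3}
second_gain = {("A", "A"): 0, ("A", "B"): 2, ("B", "A"): 5, ("B", "B"): 4}

def greedy():
    """Myopic observer: maximize the immediate gain at each step."""
    f1 = max(first_gain, key=first_gain.get)
    f2 = max(("A", "B"), key=lambda loc: second_gain[(f1, loc)])
    return (f1, f2), first_gain[f1] + second_gain[(f1, f2)]

def planned():
    """Planning observer: maximize the total gain of the whole sequence."""
    seqs = [(a, b) for a in "AB" for b in "AB"]
    best = max(seqs, key=lambda s: first_gain[s[0]] + second_gain[s])
    return best, first_gain[best[0]] + second_gain[best]

print(greedy())   # → (('A', 'B'), 7): best first look, worse total
print(planned())  # → (('B', 'A'), 8): lookahead finds the better sequence
```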
12
Sitzmann V, Serrano A, Pavel A, Agrawala M, Gutierrez D, Masia B, Wetzstein G. Saliency in VR: How Do People Explore Virtual Environments? IEEE Transactions on Visualization and Computer Graphics 2018; 24:1633-1642. [PMID: 29553930] [DOI: 10.1109/tvcg.2018.2793599]
Abstract
Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omnidirectional panoramas, for a total of 1980 head and gaze trajectories across three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
13
Wang Q, Sun M, Liu H, Pan Y, Wang L, Ge L. The applicability of eye-controlled highlighting to the field of visual searching. Australian Journal of Psychology 2018; 70:294-301. [PMID: 30197433] [PMCID: PMC6120491] [DOI: 10.1111/ajpy.12200]
Abstract
Objective With the increasing amount of information presented on current human-computer interfaces, eye-controlled highlighting has been proposed as a new display technique to optimise users' task performance. However, it is unknown to what extent an eye-controlled highlighting display facilitates visual search performance. The current study examined the facilitative effect of the eye-controlled highlighting display technique on visual search with respect to two major attributes of visual stimuli: stimulus type and the visual similarity between targets and distractors. Method In Experiment 1, we used digits and Chinese words as materials to explore the generalisation of the facilitative effect of eye-controlled highlighting. In Experiment 2, we used Chinese words to examine the effect of target-distractor similarity on the facilitation provided by the eye-controlled highlighting display. Results The eye-controlled highlighting display improved visual search performance when words were used as the search target and when target-distractor similarity was high. No facilitative effect was found when digits were used as the search target or when target-distractor similarity was low. Conclusions The effectiveness of eye-controlled highlighting on a visual task was influenced by both stimulus type and target-distractor similarity. These findings provide guidelines for modern interface designs that implement eye-based displays.
Affiliation(s)
- Qijun Wang
- Department of Psychology, Zhejiang Sci-Tech University, China
- Mengdan Sun
- Department of Psychology, Zhejiang Sci-Tech University, China
- Hongyan Liu
- Department of Psychology, Zhejiang Sci-Tech University, China
- Yunxian Pan
- Department of Psychology, Zhejiang Sci-Tech University, China
- Li Wang
- Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing, China
- Liezhong Ge
- Center for Psychological Sciences, Zhejiang University, Hangzhou, China
14
Wang S, Woods RL, Costela FM, Luo G. Dynamic gaze-position prediction of saccadic eye movements using a Taylor series. J Vis 2017; 17:3. [PMID: 29196761] [PMCID: PMC5710308] [DOI: 10.1167/17.14.3]
Abstract
Gaze-contingent displays have been widely used in vision research and virtual reality applications. Because of data transmission, image processing, and display preparation, the time delay between the eye tracker and the monitor update may lead to a misalignment between the eye position and the image manipulation during eye movements. We propose a method to reduce this misalignment by using a Taylor series to predict the saccadic eye movement. The proposed method was evaluated using two large datasets: 219,335 human saccades (collected with an EyeLink 1000 system, 95% range 1° to 32°) and 21,844 monkey saccades (collected with a scleral search coil, 95% range 1° to 9°). Assuming a 10-ms time delay, predicting saccade movements with the proposed method reduced the misalignment more than state-of-the-art methods. The average error was about 0.93° for human saccades and 0.26° for monkey saccades. Our results suggest that the proposed saccade prediction method will enable more accurate gaze-contingent displays.
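The core idea is to extrapolate gaze position over the system delay from derivative estimates of the recent gaze trajectory. A one-dimensional second-order sketch with finite-difference velocity and acceleration (the paper's exact Taylor order, filtering, and parameters may differ):

```python
def predict_gaze(x, dt, delay):
    """Second-order Taylor extrapolation of gaze position:
    x(t + delay) ≈ x(t) + v*delay + 0.5*a*delay**2, with velocity and
    acceleration estimated by finite differences from the last three
    samples x = [x(t-2*dt), x(t-dt), x(t)] (degrees)."""
    v = (x[2] - x[1]) / dt                  # backward-difference velocity
    a = (x[2] - 2 * x[1] + x[0]) / dt ** 2  # second-difference acceleration
    return x[2] + v * delay + 0.5 * a * delay ** 2

# Samples 1 ms apart during an accelerating saccade; predict 10 ms ahead
print(predict_gaze([10.0, 10.6, 11.4], dt=0.001, delay=0.010))  # ≈ 29.4
```

Predicting the position at monitor-update time, rather than using the last measured sample, is what compensates for the eye-tracker-to-display latency during fast saccades.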
Affiliation(s)
- Shuhang Wang
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Russell L Woods
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Francisco M Costela
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Gang Luo
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
15
Gaspar JG, Ward N, Neider MB, Crowell J, Carbonari R, Kaczmarski H, Ringer RV, Johnson AP, Kramer AF, Loschky LC. Measuring the Useful Field of View During Simulated Driving With Gaze-Contingent Displays. Human Factors 2016; 58:630-641. [PMID: 27091370] [DOI: 10.1177/0018720816642092]
Abstract
OBJECTIVE We aimed to develop and test a new dynamic measure of transient changes to the useful field of view (UFOV), utilizing a gaze-contingent paradigm for use in realistic simulated environments. BACKGROUND The UFOV, the area from which an observer can extract visual information during a single fixation, has been correlated with driving performance and crash risk. However, some existing measures of the UFOV cannot be used dynamically in realistic simulators, and other UFOV measures involve constant stimuli at fixed locations. We propose a gaze-contingent UFOV measure (the GC-UFOV) that solves the above problems. METHODS Twenty-five participants completed four simulated drives while they concurrently performed an occasional gaze-contingent Gabor orientation discrimination task. Gabors appeared randomly at one of three retinal eccentricities (5°, 10°, or 15°). Cognitive workload was manipulated both with a concurrent auditory working memory task and with driving task difficulty (via presence/absence of lateral wind). RESULTS Cognitive workload had a detrimental effect on Gabor discrimination accuracy at all three retinal eccentricities. Interestingly, this accuracy cost was equivalent across eccentricities, consistent with previous findings of "general interference" rather than "tunnel vision." CONCLUSION The results showed that the GC-UFOV method was able to measure transient changes in UFOV due to cognitive load in a realistic simulated environment. APPLICATION The GC-UFOV paradigm developed and tested in this study is a novel and effective tool for studying transient changes in the UFOV due to cognitive load in the context of complex real-world tasks such as simulated driving.
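The gaze-contingent placement of probes at a fixed retinal eccentricity can be illustrated with a small geometry helper. The function name, the flat-screen small-angle conversion, and the random polar angle are assumptions for illustration; the study's actual screen mapping and probe scheduling are not given in the abstract.

```python
import math
import random

def gabor_position(gaze_xy, eccentricity_deg, px_per_deg, rng=random):
    """Place a probe at a fixed retinal eccentricity relative to the
    current gaze sample, at a random polar angle (hypothetical helper;
    flat-screen small-angle approximation)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = eccentricity_deg * px_per_deg  # degrees -> pixels
    return (gaze_xy[0] + r * math.cos(theta),
            gaze_xy[1] + r * math.sin(theta))
```

Because the probe location is computed from the gaze sample at presentation time, the retinal eccentricity stays constant no matter where the driver is looking, which is the property that distinguishes this measure from fixed-location UFOV tasks.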
Affiliation(s)
- Nathan Ward
- University of Illinois Urbana-Champaign, Champaign
- James Crowell
- University of Iowa, Iowa City; University of Illinois Urbana-Champaign, Champaign; University of Central Florida, Orlando; University of Illinois Urbana-Champaign, Champaign; Kansas State University, Manhattan; Concordia University, Montreal, Canada; Northeastern University, Boston, MA; Kansas State University, Manhattan
- Ronald Carbonari
- University of Iowa, Iowa City; University of Illinois Urbana-Champaign, Champaign; University of Central Florida, Orlando; University of Illinois Urbana-Champaign, Champaign; Kansas State University, Manhattan; Concordia University, Montreal, Canada; Northeastern University, Boston, MA; Kansas State University, Manhattan
16
Kurzhals K, Burch M, Pfeiffer T, Weiskopf D. Eye Tracking in Computer-Based Visualization. Comput Sci Eng 2015. [DOI: 10.1109/mcse.2015.93] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
17
Abstract
Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LEDs)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye-position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved.
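The arithmetic behind this measurement is simple once the two events have been located in the high-speed video: latency is the frame-index difference divided by the frame rate, and repeated measurements characterize the distribution. The function names and the summary statistics chosen below are illustrative, not from the paper.

```python
def system_latency_ms(trigger_frame, display_frame, fps=1000):
    """Latency between the artificial eyetracker event (e.g., the IR LED
    flash, at video frame trigger_frame) and the resulting display change
    (display_frame), both found in the same high-speed video at `fps`."""
    if display_frame < trigger_frame:
        raise ValueError("display change cannot precede the trigger event")
    return (display_frame - trigger_frame) * 1000.0 / fps

def summarize(latencies_ms):
    """Characterize the latency distribution from repeated measurements:
    mean, sample standard deviation, minimum, and maximum (ms)."""
    n = len(latencies_ms)
    mean = sum(latencies_ms) / n
    var = sum((x - mean) ** 2 for x in latencies_ms) / (n - 1) if n > 1 else 0.0
    return mean, var ** 0.5, min(latencies_ms), max(latencies_ms)
```

At 1,000 fps each frame is 1 ms, so the camera's temporal resolution, not the computation, bounds the precision of a single measurement.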
18
Loschky LC, Ringer RV, Johnson AP, Larson AM, Neider M, Kramer AF. Blur Detection is Unaffected by Cognitive Load. VISUAL COGNITION 2014; 22:522-547. [PMID: 24771997 PMCID: PMC3996539 DOI: 10.1080/13506285.2014.884203] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2013] [Accepted: 01/14/2014] [Indexed: 10/26/2022]
Abstract
Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. 
Thus, blur detection in real-world scene images appears to be unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
Affiliation(s)
- Lester C. Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Ryan V. Ringer
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Aaron P. Johnson
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Adam M. Larson
- Department of Psychology, University of Findlay, Findlay, OH, USA
- Mark Neider
- Department of Psychology, University of Central Florida, Orlando, FL, USA
- Arthur F. Kramer
- Department of Psychology and the Beckman Institute, University of Illinois at Urbana-Champaign, Champaign, IL, USA
19
20
Pfeiffer UJ, Vogeley K, Schilbach L. From gaze cueing to dual eye-tracking: novel approaches to investigate the neural correlates of gaze in social interaction. Neurosci Biobehav Rev 2013; 37:2516-28. [PMID: 23928088 DOI: 10.1016/j.neubiorev.2013.07.017] [Citation(s) in RCA: 103] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2013] [Revised: 07/16/2013] [Accepted: 07/26/2013] [Indexed: 11/25/2022]
Abstract
Tracking eye movements provides easy access to cognitive processes involved in visual and sensorimotor processing. More recently, the underlying neural mechanisms have been examined by combining eye-tracking and functional neuroimaging methods. Apart from extracting visual information, gaze also serves important functions in social interactions. As a deictic cue, gaze can be used to direct the attention of another person to an object. Conversely, by following other persons' gaze we gain access to their attentional focus, which is essential for understanding their mental states. Social gaze has therefore been studied extensively to understand the social brain. In this endeavor, gaze has mostly been investigated from an observational perspective using static displays of faces and eyes. However, there is growing consensus that observational paradigms are insufficient for an understanding of the neural mechanisms of social gaze behavior, which typically involve active engagement in social interactions. Recent methodological advances have allowed increasing ecological validity by studying gaze in face-to-face encounters in real time. Such improvements include interactions with virtual agents in gaze-contingent eye-tracking paradigms, live interactions via video feeds, and dual eye-tracking in two-person setups. These novel approaches can be used to analyze brain activity related to social gaze behavior. This review introduces these methodologies and discusses recent findings on the behavioral functions and neural mechanisms of gaze processing in social interaction.
Affiliation(s)
- Ulrich J Pfeiffer
- Neuroimaging Group, Department of Psychiatry, University Hospital Cologne, Kerpener Strasse 62, 50937 Cologne, Germany.
21
Han P, Saunders DR, Woods RL, Luo G. Trajectory prediction of saccadic eye movements using a compressed exponential model. J Vis 2013; 13(8):27. [PMID: 23902753 DOI: 10.1167/13.8.27] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R² > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays.
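The compressed-exponential idea described in this abstract can be sketched with one common parameterization, x(t) = A(1 - exp(-(t/tau)^beta)): displacement rises from zero toward the saccade amplitude A, and fitting the early samples lets the display predict where the eye will land. The exact functional form, constants, and fitting procedure used by the authors are not given here; this is a generic sketch.

```python
import math

def saccade_displacement(t, amplitude, tau, beta):
    """Compressed-exponential saccade trajectory (one common
    parameterization; the paper's exact form is an assumption here).

    t         : time since saccade onset (s)
    amplitude : saccade amplitude (deg)
    tau       : time constant (s)
    beta      : compression exponent (beta > 1 delays the early rise)
    """
    if t <= 0.0:
        return 0.0
    return amplitude * (1.0 - math.exp(-((t / tau) ** beta)))

def predict_endpoint(partial_t, partial_x, tau, beta):
    """Invert the model at one early sample (time partial_t,
    displacement partial_x) to estimate the final amplitude."""
    frac = 1.0 - math.exp(-((partial_t / tau) ** beta))
    return partial_x / frac
```

As the abstract notes, a minimum number of samples is needed before such a fit is stable, which is why the method offers little benefit for small saccades and for the earliest stage of large ones.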
Affiliation(s)
- Peng Han
- School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, China.
22
Poscoliero T, Marzi CA, Girelli M. Unconscious priming by illusory figures: the role of the salient region. J Vis 2013; 13(5):27. [PMID: 23625644 DOI: 10.1167/13.5.27] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In this study we provide evidence that unconscious priming can be obtained from the processing of the salient region (SR) of illusory figures alone, without processing of the illusory contours (ICs). We used a metacontrast masking paradigm in which illusory figures were masked by real figures. In Experiment 1 we found a clear priming effect when participants were asked to discriminate between square and diamond masks preceded by congruent or incongruent illusory square or diamond primes. Metacontrast likely impairs the processing of ICs but not of the SR; this result therefore strongly suggests that the priming effect was specifically related to the processing of the SR. In Experiment 2 participants performed the same task as in Experiment 1 with additional primes in which the inducers appeared in the same locations but their shapes were changed so as to modify the global configuration. We termed these primes High, Low, and No Salient Region (HSR, LSR, and NSR, respectively). The HSR condition replicated Experiment 1, whereas the priming effect grew progressively smaller in the LSR and NSR conditions: it was significantly larger in the HSR condition than in all other conditions, larger in the HSR than in the LSR condition, and smallest, though still present, in the NSR condition. Taken together, these results indicate that unconscious processing of the SR alone yields a priming effect, and that reducing the saliency of the SR reduces the priming effect, while eliminating it does not abolish the effect.
Affiliation(s)
- Tommaso Poscoliero
- Department of Neurological, Neuropsychological, Morphological, and Motor Sciences, University of Verona, Verona, Italy
23
Aguilar C, Castet E. Gaze-contingent simulation of retinopathy: some potential pitfalls and remedies. Vision Res 2011; 51:997-1012. [PMID: 21335024 DOI: 10.1016/j.visres.2011.02.010] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2010] [Revised: 02/08/2011] [Accepted: 02/11/2011] [Indexed: 11/26/2022]
Abstract
Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays.
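The two pitfalls this abstract identifies, blink-induced scotoma motion and intrusion of post-saccadic slow eye movements, both amount to rules about when the scotoma position should be updated from the raw gaze stream. The sketch below is a hedged illustration of that gating idea; the function, the validity flag, and the 30°/s threshold are assumptions, not the authors' published rules.

```python
def update_scotoma(prev_pos, sample, valid, speed, saccade_thresh=30.0):
    """Gate gaze-contingent scotoma updates (illustrative sketch):
    - during blinks / tracking loss (valid is False), freeze the scotoma
      at its last stable position instead of following spurious samples;
    - during fixation or post-saccadic slow drift (speed below a saccade
      threshold, deg/s), hold the position so slow eye movements do not
      drag the mask;
    - otherwise (a genuine saccade), move the scotoma with the gaze.
    """
    if not valid:
        return prev_pos  # blink: keep the last stable position
    if speed < saccade_thresh:
        return prev_pos  # fixation or slow drift: hold position
    return sample        # saccadic movement: follow gaze
```

The threshold value and the decision to hold rather than low-pass-filter the position are design choices; the paper develops and tests its own rules against recorded oculomotor patterns.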
Affiliation(s)
- Carlos Aguilar
- Université Aix-Marseille II, CNRS, Institut de Neurosciences Cognitives de la Méditerranée, 31 chemin Joseph Aiguier, 13009 Marseille, France
24
25
Wilms M, Schilbach L, Pfeiffer U, Bente G, Fink GR, Vogeley K. It's in your eyes--using gaze-contingent stimuli to create truly interactive paradigms for social cognitive and affective neuroscience. Soc Cogn Affect Neurosci 2010; 5:98-107. [PMID: 20223797 DOI: 10.1093/scan/nsq024] [Citation(s) in RCA: 98] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The field of social neuroscience has made remarkable progress in elucidating the neural mechanisms of social cognition. More recently, the need for new experimental approaches has been highlighted that allow studying social encounters in a truly interactive manner by establishing 'online' reciprocity in social interaction. In this article, we present a newly developed adaptation of a method that uses eyetracking data obtained from participants in real time to control visual stimulation during functional magnetic resonance imaging, thus providing an innovative tool to generate gaze-contingent stimuli in spite of the constraints of this experimental setting. We review results of two paradigms employing this technique and demonstrate how gaze data can be used to animate a virtual character whose behavior becomes 'responsive' to being looked at, allowing the participant to engage in 'online' interaction with this virtual other in real time. Possible applications of this setup are discussed, highlighting the potential of this development as a new 'tool of the trade' in social cognitive and affective neuroscience.
Affiliation(s)
- Marcus Wilms
- Institute of Neurosciences and Medicine, Research Centre Juelich, Juelich, Germany
26
Toet A. Gaze directed displays as an enabling technology for attention aware systems. COMPUTERS IN HUMAN BEHAVIOR 2006. [DOI: 10.1016/j.chb.2005.12.010] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]