1. Kim J, Yoshida T. Sense of agency at a temporally-delayed gaze-contingent display. PLoS One 2024; 19:e0309998. PMID: 39241025. DOI: 10.1371/journal.pone.0309998.
Abstract
The subjective feeling of being the author of one's actions and their subsequent consequences is referred to as the sense of agency. This feeling is crucial for usability in human-computer interaction, where eye movement has been adopted as an input modality, yet it has scarcely been investigated in that context. We examined how a temporal action-feedback discrepancy affects the sense of agency over eye movement. Participants conducted a visual search through an array of nine Chinese characters within a temporally-delayed gaze-contingent display that blurred the peripheral view. The relative delay between each eye movement and the subsequent window movement varied from 0 to 4,000 ms. In the control condition, the window replayed recorded gaze behavior. The mean authorship rating and the proportion of "self" responses in the categorical authorship report ("self," "delayed self," and "other") gradually decreased as the temporal discrepancy increased, with "other" rarely reported except in the control condition. These results generally mirror those of prior studies on hand actions, suggesting that the sense of agency extends beyond the effector body parts to other modalities, and that two types of sense of agency with different temporal characteristics operate simultaneously. The mode of fixation duration shifted as the delay increased for delays under 200 ms and split into two modes at delays of 200-500 ms. The frequency of 0-1.5° saccades showed an increasing trend as the delay increased. These results demonstrate the influence of perceived action-effect discrepancy on action refinement and task strategy.
Affiliation(s)
- Junhui Kim
- School of Engineering, Tokyo Institute of Technology, Meguro City, Tokyo, Japan
- Takako Yoshida
- School of Engineering, Tokyo Institute of Technology, Meguro City, Tokyo, Japan
2. Kim J, Yoshida T. Sense of agency at a gaze-contingent display with jittery temporal delay. Front Psychol 2024; 15:1364076. PMID: 38827897. PMCID: PMC11141391. DOI: 10.3389/fpsyg.2024.1364076.
Abstract
Introduction: Inconsistent, jittery temporal delays between action and subsequent feedback, prevalent in network-based human-computer interaction (HCI), have been insufficiently explored, particularly regarding their impact on the sense of agency (SoA). This study investigates the SoA in the context of eye-gaze HCI under jittery delay conditions.
Methods: Participants performed a visual search for Chinese characters using a biresolutional gaze-contingent display, which presented a high-resolution image in the central vision and a low-resolution image in the periphery. We manipulated the delay between eye movements and display updates using a truncated normal distribution (μ to μ + 2σ) with μ ranging from 0 to 400 ms and σ fixed at 50 ms. Playback of recorded gaze data provided a non-controllable condition.
Results: Both the reported authorship and controllability scores and the fixation count per second decreased as μ increased, aligning with trends observed under constant delay conditions. Subjective authorship weakened significantly at a μ of 94 ms. Notably, the comparison between jittery and constant delays indicated the minimum value (μ) of the distribution as a critical parameter influencing both authorship perception and visual-search time efficiency.
Discussion: This finding underscores the importance of the shortest delay in modulating SoA. Further examination of the relative distributions of fixation duration and saccade amplitude suggests an adaptation in action planning and attention distribution in response to delay. By systematically examining which statistical attributes of jittery delays most significantly affect SoA, this research offers valuable implications for the design of efficient, delay-tolerant eye-gaze HCI, expanding our understanding of SoA in technologically mediated interactions. Moreover, our findings highlight the significance of considering both constant and variable delay impacts in HCI usability design, a novel contribution to the field.
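The delay manipulation described in this abstract — feedback delays drawn from a normal distribution truncated to [μ, μ + 2σ] — can be sketched in a few lines. This is a hypothetical re-implementation for illustration, not the authors' code; the function and parameter names are ours. It uses simple rejection sampling to enforce the truncation window:

```python
import random

def sample_jittery_delay(mu_ms: float, sigma_ms: float = 50.0) -> float:
    """Draw one feedback delay (ms) from a normal distribution
    truncated to [mu, mu + 2*sigma], as in the jittery-delay
    condition described in the abstract."""
    while True:
        d = random.gauss(mu_ms, sigma_ms)
        if mu_ms <= d <= mu_ms + 2.0 * sigma_ms:
            return d  # accepted: sample lies inside the truncation window

# Example: per-fixation delays for a mu = 200 ms condition
delays = [sample_jittery_delay(200.0) for _ in range(1000)]
```

Note that truncating to [μ, μ + 2σ] guarantees μ is the shortest possible delay, which is exactly the statistic the study identifies as the critical parameter for authorship perception.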
Affiliation(s)
- Junhui Kim
- Department of Mechanical Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo, Japan
3. Mikawa Y, Fukiage T. Low-Latency Ocular Parallax Rendering and Investigation of Its Effect on Depth Perception in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2228-2238. PMID: 38442067. DOI: 10.1109/tvcg.2024.3372078.
Abstract
With a demand for an immersive experience in virtual/augmented reality (VR/AR) displays, recent efforts have incorporated eye states, such as focus and fixation, into display graphics. Among these, ocular parallax, a small parallax generated by eye rotation, has received considerable attention for its impact on depth perception. However, the substantial latency of head-mounted displays (HMDs) has made it challenging to accurately assess its true effect during free eye movements. To address this issue, we propose a high-speed (360 Hz) and low-latency (4.8 ms) ocular parallax rendering system with a custom-built eye tracker. Using this proposed system, we conducted an investigation to determine the latency requirements necessary for achieving perceptually stable ocular parallax rendering. Our findings indicate that, in binocular viewing, ocular parallax rendering is perceived as significantly less stable than conventional rendering when the latency exceeds 43.72 ms at 1.3 D and 21.50 ms at 2.0 D. We also evaluated the effects of ocular parallax rendering on binocular fusion and monocular depth perception under free viewing conditions. The results demonstrated that ocular parallax rendering can enhance binocular fusion but has a limited impact on depth perception under monocular viewing conditions when latency is minimized.
4. Mok CS, Bazilinskyy P, de Winter J. Stopping by looking: A driver-pedestrian interaction study in a coupled simulator using head-mounted displays with eye-tracking. Applied Ergonomics 2022; 105:103825. PMID: 35777182. DOI: 10.1016/j.apergo.2022.103825.
Abstract
Automated vehicles (AVs) can perform low-level control tasks but are not always capable of proper decision-making. This paper presents a concept of eye-based maneuver control for AV-pedestrian interaction. Previously, it was unknown whether the AV should conduct a stopping maneuver when the driver looks at the pedestrian or looks away from the pedestrian. A two-agent experiment was conducted using two head-mounted displays with integrated eye-tracking. Seventeen pairs of participants (pedestrian and driver) each interacted in a road crossing scenario. The pedestrians' task was to hold a button when they felt safe to cross the road, and the drivers' task was to direct their gaze according to instructions. Participants completed three 16-trial blocks: (1) Baseline, in which the AV was pre-programmed to yield or not yield, (2) Look to Yield (LTY), in which the AV yielded when the driver looked at the pedestrian, and (3) Look Away to Yield (LATY), in which the AV yielded when the driver did not look at the pedestrian. The driver's eye movements in the LTY and LATY conditions were visualized using a virtual light beam. Crossing performance was assessed based on whether the pedestrian held the button when the AV yielded and released the button when the AV did not yield. Furthermore, the pedestrians' and drivers' acceptance of the mappings was measured through a questionnaire. The results showed that the LTY and LATY mappings yielded better crossing performance than Baseline. Furthermore, the LTY condition was best accepted by drivers and pedestrians. Eye-tracking analyses indicated that the LTY and LATY mappings attracted the pedestrian's attention, while pedestrians still distributed their attention between the AV and a second vehicle approaching from the other direction. In conclusion, LTY control may be a promising means of AV control at intersections before full automation is technologically feasible.
Affiliation(s)
- Chun Sang Mok
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
- Pavlo Bazilinskyy
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
- Joost de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
5. David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. PMID: 35323868. PMCID: PMC8963670. DOI: 10.1167/jov.22.4.12.
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze-position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality in which participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss over an extended field of view (>80°) and the study of the head's contribution to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on instruction-guided visual tasks. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the literature on two-dimensional screens. We hypothesize that head movements mainly serve to explore the scenes during free viewing; the presence of masks did not significantly impact head-scanning behaviour. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze-prediction models dedicated to virtual reality.
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
- Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France
- http://pagesperso.ls2n.fr/~lecallet-p/index.html
6. Nuthmann A, Canas-Bajo T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J Vis 2022; 22:10. PMID: 35044436. PMCID: PMC8802022. DOI: 10.1167/jov.22.1.10.
Abstract
How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, Kiel University, Kiel, Germany
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- http://orcid.org/0000-0003-3338-3434
- Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
7. Morales A, Costela FM, Woods RL. Saccade Landing Point Prediction Based on Fine-Grained Learning Method. IEEE Access 2021; 9:52474-52484. PMID: 33981520. PMCID: PMC8112574. DOI: 10.1109/access.2021.3070511.
Abstract
The landing point of a saccade defines the new fixation region, the new region of interest. We asked whether it is possible to predict the saccade landing point early in this very fast eye movement. This work proposes a new algorithm based on LSTM networks and a fine-grained loss function for saccade landing point prediction in real-world scenarios. Predicting the landing point is a critical milestone toward reducing the problems caused by display-update latency in gaze-contingent systems, which make real-time changes to the display based on eye tracking. Saccadic eye movements are among the fastest human neuro-motor activities, with angular velocities of up to 1,000°/s. We present a comprehensive analysis of the performance of our method using a database of almost 220,000 saccades from 75 participants captured during natural viewing of videos, including a comparison with state-of-the-art saccade landing point prediction algorithms. Our proposed method outperformed existing approaches, with error reductions of up to 50%. Finally, we analyzed factors that affect prediction error, including saccade duration and length, age, and user intrinsic characteristics.
Affiliation(s)
- Aythami Morales
- BiDA-Lab, Department of Electrical Engineering, Universidad Autonoma de Madrid, 28049 Madrid, Spain
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
- Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
8. Ryu D, Cooke A, Bellomo E, Woodman T. Watch out for the hazard! Blurring peripheral vision facilitates hazard perception in driving. Accident Analysis and Prevention 2020; 146:105755. PMID: 32927281. DOI: 10.1016/j.aap.2020.105755.
Abstract
The objectives of this paper were to directly examine the roles of central and peripheral vision in hazard perception and to test whether perceptual training can enhance hazard perception. We also examined putative cortical mechanisms underpinning any effect of perceptual training on performance. To address these objectives, we used the gaze-contingent display paradigm to selectively present information to central and peripheral parts of the visual field. In Experiment 1, we compared hazard perception abilities of experienced and inexperienced drivers while watching video clips in three different viewing conditions (full vision; clear central and blurred peripheral vision; blurred central and clear peripheral vision). Participants' visual search behaviour and cortical activity were simultaneously recorded. In Experiment 2, we determined whether training with clear central and blurred peripheral vision could improve hazard perception among non-licensed drivers. Results demonstrated that (i) information from central vision is more important than information from peripheral vision in identifying hazard situations, for screen-based hazard perception tests, (ii) clear central and blurred peripheral vision viewing helps the alignment of line-of-gaze and attention, (iii) training with clear central and blurred peripheral vision can improve screen-based hazard perception. The findings have important implications for road safety and provide a new training paradigm to improve hazard perception.
Affiliation(s)
- Donghyun Ryu
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, United Kingdom; School of Sport, Health & Exercise Sciences, Bangor University, Bangor, United Kingdom
- Andrew Cooke
- School of Sport, Health & Exercise Sciences, Bangor University, Bangor, United Kingdom
- Eduardo Bellomo
- School of Sport, Health & Exercise Sciences, Bangor University, Bangor, United Kingdom
- Tim Woodman
- School of Sport, Health & Exercise Sciences, Bangor University, Bangor, United Kingdom
9. Spjut J, Boudaoud B, Kim J, Greer T, Albert R, Stengel M, Aksit K, Luebke D. Toward Standardized Classification of Foveated Displays. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2126-2134. PMID: 32078547. DOI: 10.1109/tvcg.2020.2973053.
Abstract
Emergent in the field of head-mounted display design is a desire to leverage the limitations of the human visual system to reduce the computation, communication, and display workload in power- and form-factor-constrained systems. Fundamental to this reduced workload is the ability to match display resolution to the acuity of the human visual system, along with a resulting need to follow the gaze of the eye as it moves, a process referred to as foveation. A display that moves its content along with the eye may be called a Foveated Display, though this term is also commonly used to describe displays with non-uniform resolution that attempt to mimic human visual acuity. We therefore recommend a definition of the term Foveated Display that accepts both interpretations. Furthermore, we include a simplified model for human visual Acuity Distribution Functions (ADFs) at various levels of visual acuity across wide fields of view, and we propose comparing this ADF with the Resolution Distribution Function of a foveated display to evaluate its resolution at a particular gaze direction. We also provide a taxonomy that allows the field to meaningfully compare and contrast various aspects of foveated displays in a display- and optical-technology-agnostic manner.
10. Implicit processing during change blindness revealed with mouse-contingent and gaze-contingent displays. Atten Percept Psychophys 2019; 80:844-859. PMID: 29363028. PMCID: PMC5948240. DOI: 10.3758/s13414-017-1468-5.
Abstract
People often miss salient events that occur right in front of them. This phenomenon, known as change blindness, reveals the limits of visual awareness. Here, we investigate the role of implicit processing in change blindness using an approach that allows partial dissociation of covert and overt attention. Traditional gaze-contingent paradigms adapt the display in real time according to current gaze position. We compare such a paradigm with a newly designed mouse-contingent paradigm where the visual display changes according to the real-time location of a user-controlled mouse cursor, effectively allowing comparison of change detection with mainly overt attention (gaze-contingent display; Experiment 2) and untethered overt and covert attention (mouse-contingent display; Experiment 1). We investigate implicit indices of target detection during change blindness in eye movement and behavioral data, and test whether affective devaluation of unnoticed targets may contribute to change blindness. The results show that unnoticed targets are processed implicitly, but that the processing is shallower than if the target is consciously detected. Additionally, the partial untethering of covert attention with the mouse-contingent display changes the pattern of search and leads to faster detection of the changing target. Finally, although it remains possible that the deployment of covert attention is linked to implicit processing, the results fall short of establishing a direct connection.
11. Wei L, Sakamoto Y. Fast calculation method with foveated rendering for computer-generated holograms using an angle-changeable ray-tracing method. Applied Optics 2019; 58:A258-A266. PMID: 30873999. DOI: 10.1364/ao.58.00a258.
Abstract
The computer-generated hologram (CGH) technique simulates the recording of holography. Although the CGH technique has many advantages, it also has some disadvantages, one of which is the long calculation time. Much research on the human eye has established that humans see 135° vertically and 160° horizontally but can resolve fine detail only within a 5° central circle. Foveated rendering exploits this characteristic of the human eye to reduce image resolution in the peripheral area and achieve a high calculation speed. In this paper, a new method for fast CGH calculation with foveated rendering using an angle-changeable ray-tracing method is introduced. The experiments demonstrate the effectiveness and high calculation speed of this method.
12. Wang S, Woods RL, Costela FM, Luo G. Dynamic gaze-position prediction of saccadic eye movements using a Taylor series. J Vis 2017; 17:3. PMID: 29196761. PMCID: PMC5710308. DOI: 10.1167/17.14.3.
Abstract
Gaze-contingent displays have been widely used in vision research and virtual reality applications. Because of data transmission, image processing, and display preparation, the time delay between the eye tracker and the monitor update may lead to a misalignment between the eye position and the image manipulation during eye movements. We propose a method to reduce this misalignment by using a Taylor series to predict the saccadic eye movement. The proposed method was evaluated using two large datasets comprising 219,335 human saccades (collected with an EyeLink 1000 system, 95% range from 1° to 32°) and 21,844 monkey saccades (collected with a scleral search coil, 95% range from 1° to 9°). Assuming a 10-ms time delay, prediction of saccade movements using the proposed method reduced the misalignment more than state-of-the-art methods did. The average error was about 0.93° for human saccades and 0.26° for monkey saccades. These results suggest that the proposed saccade prediction method will enable more accurate gaze-contingent displays.
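The Taylor-series idea described here can be sketched as a second-order extrapolation of gaze position over the system delay. This is an illustrative sketch with finite-difference derivative estimates, not the authors' implementation; the paper's exact formulation (order, smoothing, sampling rate) may differ, and all names below are ours:

```python
def predict_gaze(samples, dt_ms, delay_ms):
    """Second-order Taylor-series extrapolation of gaze position.

    samples:  last three gaze positions [x(t-2dt), x(t-dt), x(t)], in degrees
    dt_ms:    sampling interval of the eye tracker, in ms
    delay_ms: system delay to compensate for, in ms
    Returns the predicted position x(t + delay_ms).
    """
    x0, x1, x2 = samples
    v = (x2 - x1) / dt_ms                # velocity estimate (backward difference)
    a = (x2 - 2.0 * x1 + x0) / dt_ms**2  # acceleration estimate (second difference)
    d = delay_ms
    return x2 + v * d + 0.5 * a * d * d  # x + x'·Δt + ½·x''·Δt²

# Constant-velocity motion (1°/ms) is extrapolated exactly:
print(predict_gaze([0.0, 1.0, 2.0], dt_ms=1.0, delay_ms=10.0))  # 12.0
```

In practice this would be applied independently to the horizontal and vertical gaze coordinates on every tracker sample, with the predicted position driving the display update.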
Affiliation(s)
- Shuhang Wang
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Russell L Woods
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Francisco M Costela
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Gang Luo
- Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
13.
Abstract
Understanding software engineers' behaviour plays a vital role in the software development industry and provides helpful guidelines for teaching and learning. In this article, we study extrafoveal vision and its role in information processing, a new perspective on source code comprehension. Despite its importance, extrafoveal vision has been largely ignored by previous studies; the available research has focused entirely on foveal information processing and gaze fixation position. In this work, we share the results of a gaze-contingent study of source code comprehension by expert (N = 12) and novice (N = 12) programmers under conditions of restricted extrafoveal vision. The window-moving paradigm was employed to restrict the extrafoveal area of vision as participants comprehended two source code examples. The results indicate that the semantic preview allowed by extrafoveal vision provides tangible benefits to expert programmers. When the experts could not use semantic information from the extrafoveal area, their fixation durations increased to values similar to those of novices. The experts' performance dropped in the restricted-view mode, and they required more time to solve the tasks.
Affiliation(s)
- Pavel A. Orlov
- School of Computing, University of Eastern Finland, Joensuu, Finland; Department of Engineering Graphics and Design, Peter the Great Saint-Petersburg Polytechnic University, St. Petersburg, Russia
- Roman Bednarik
- School of Computing, University of Eastern Finland, Joensuu, Finland
14. Gaspar JG, Ward N, Neider MB, Crowell J, Carbonari R, Kaczmarski H, Ringer RV, Johnson AP, Kramer AF, Loschky LC. Measuring the Useful Field of View During Simulated Driving With Gaze-Contingent Displays. Human Factors 2016; 58:630-641. PMID: 27091370. DOI: 10.1177/0018720816642092.
Abstract
Objective: We aimed to develop and test a new dynamic measure of transient changes to the useful field of view (UFOV), utilizing a gaze-contingent paradigm for use in realistic simulated environments.
Background: The UFOV, the area from which an observer can extract visual information during a single fixation, has been correlated with driving performance and crash risk. However, some existing measures of the UFOV cannot be used dynamically in realistic simulators, and other UFOV measures involve constant stimuli at fixed locations. We propose a gaze-contingent UFOV measure (the GC-UFOV) that solves the above problems.
Methods: Twenty-five participants completed four simulated drives while they concurrently performed an occasional gaze-contingent Gabor orientation discrimination task. Gabors appeared randomly at one of three retinal eccentricities (5°, 10°, or 15°). Cognitive workload was manipulated both with a concurrent auditory working-memory task and with driving-task difficulty (via the presence or absence of lateral wind).
Results: Cognitive workload had a detrimental effect on Gabor discrimination accuracy at all three retinal eccentricities. Interestingly, this accuracy cost was equivalent across eccentricities, consistent with previous findings of "general interference" rather than "tunnel vision."
Conclusion: The results showed that the GC-UFOV method was able to measure transient changes in UFOV due to cognitive load in a realistic simulated environment.
Application: The GC-UFOV paradigm developed and tested in this study is a novel and effective tool for studying transient changes in the UFOV due to cognitive load in the context of complex real-world tasks such as simulated driving.
Affiliation(s)
- Nathan Ward
- University of Illinois Urbana-Champaign, Champaign
- James Crowell
- University of Iowa, Iowa City; University of Illinois Urbana-Champaign, Champaign; University of Central Florida, Orlando; Kansas State University, Manhattan; Concordia University, Montreal, Canada; Northeastern University, Boston, MA
- Ronald Carbonari
- University of Iowa, Iowa City; University of Illinois Urbana-Champaign, Champaign; University of Central Florida, Orlando; Kansas State University, Manhattan; Concordia University, Montreal, Canada; Northeastern University, Boston, MA
15.
Abstract
The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) that allows both execution and control of gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus-presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments, and it provides a variety of settings and functions. Effective rendering times and performance are ensured by GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
16
Abstract
Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LED)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved.
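The arithmetic behind this measurement is a single frame-count conversion: with the blinding event and the display change both captured on 1,000-Hz video, latency is the frame difference divided by the frame rate. A sketch (the frame indices are hypothetical, for illustration only):

```python
def system_latency_ms(event_frame, change_frame, fps=1000):
    """Latency between the eyetracker-blinding event and the resulting
    display change, read off as a frame difference in high-speed video."""
    return (change_frame - event_frame) * 1000.0 / fps

# hypothetical frame indices: LED first visible at frame 1203,
# display change first visible at frame 1258 -> 55 ms
latency = system_latency_ms(1203, 1258)
```

Repeating this over many events yields the latency distribution the authors describe.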
17
Abstract
A new study has found that artificial occlusion of central vision leads to rapid emergence, and long-term maintenance of a new preferred retinal locus of fixation. These findings have important implications for the understanding of visual and oculomotor plasticity as well as for the development of rehabilitation techniques.
Affiliation(s)
- Martina Poletti
- Department of Psychology, Boston University, Boston, MA 02215, USA.
18
Rayner K, Loschky LC, Reingold EM. Eye movements in visual cognition: The contributions of George W. McConkie. VISUAL COGNITION 2014. [DOI: 10.1080/13506285.2014.895463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
19
Reingold EM. Eye Tracking Research and Technology: Towards Objective Measurement of Data Quality. VISUAL COGNITION 2014; 22:635-652. [PMID: 24771998 PMCID: PMC3996543 DOI: 10.1080/13506285.2013.876481] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2013] [Accepted: 12/12/2013] [Indexed: 10/25/2022]
Abstract
Two methods for objectively measuring eye tracking data quality are explored. The first method works by tricking the eye tracker to detect an abrupt change in the gaze position of an artificial eye that in actuality does not move. Such a device, referred to as an artificial saccade generator, is shown to be extremely useful for measuring the temporal accuracy and precision of eye tracking systems and for validating the latency to display change in gaze contingent display paradigms. The second method involves an artificial pupil that is mounted on a computer controlled moving platform. This device is designed to be able to provide the eye tracker with motion sequences that closely resemble biological eye movements. The main advantage of using artificial motion for testing eye tracking data quality is the fact that the spatiotemporal signal is fully specified in a manner independent of the eye tracker that is being evaluated and that nearly identical motion sequence can be reproduced multiple times with great precision. The results of the present study demonstrate that the equipment described has the potential to become an important tool in the comprehensive evaluation of data quality.
Affiliation(s)
- Eyal M Reingold
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
20
21
Loschky LC, Ringer RV, Johnson AP, Larson AM, Neider M, Kramer AF. Blur Detection is Unaffected by Cognitive Load. VISUAL COGNITION 2014; 22:522-547. [PMID: 24771997 PMCID: PMC3996539 DOI: 10.1080/13506285.2014.884203] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2013] [Accepted: 01/14/2014] [Indexed: 10/26/2022]
Abstract
Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. 
Thus, blur detection in real-world scene images appears to be unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
Affiliation(s)
- Lester C. Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Ryan V. Ringer
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Aaron P. Johnson
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Adam M. Larson
- Department of Psychology, University of Findlay, Findlay, OH, USA
- Mark Neider
- Department of Psychology, University of Central Florida, Orlando, FL, USA
- Arthur F. Kramer
- Department of Psychology and the Beckman Institute, University of Illinois at Urbana-Champaign, Champaign, IL, USA
22
23
Ryu D, Abernethy B, Mann DL, Poolton JM, Gorman AD. The Role of Central and Peripheral Vision in Expert Decision Making. Perception 2013; 42:591-607. [DOI: 10.1068/p7487] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
The purpose of this study was to investigate the role of central and peripheral vision in expert decision making. A gaze-contingent display was used to selectively present information to the central and peripheral areas of the visual field while participants performed a decision-making task. Eleven skilled and eleven less-skilled male basketball players watched video clips of basketball scenarios in three different viewing conditions: full-image control, moving window (central vision only), and moving mask (peripheral vision only). At the conclusion of each clip participants were required to decide whether it was more appropriate for the ball-carrier to pass the ball or to drive to the basket. The skilled players showed significantly higher response accuracy and faster response times compared with their lesser-skilled counterparts in all three viewing conditions, demonstrating superiority in information extraction that held irrespective of whether they were using central or peripheral vision. The gaze behaviour of the skilled players was less influenced by the gaze-contingent manipulations, suggesting they were better able to use the remaining information to sustain their normal gaze behaviour. The superior capacity of experts to interpret dynamic visual information is evident regardless of whether the visual information is presented across the whole visual field or selectively to either central or peripheral vision alone.
Affiliation(s)
- Donghyun Ryu
- Institute of Human Performance, The University of Hong Kong, 5 Sassoon Road, Pokfulam, Hong Kong
- Bruce Abernethy
- Institute of Human Performance, The University of Hong Kong, 5 Sassoon Road, Pokfulam, Hong Kong
- School of Human Movement Studies, The University of Queensland, Brisbane, Australia
- David L Mann
- Institute of Human Performance, The University of Hong Kong, 5 Sassoon Road, Pokfulam, Hong Kong
- Research Institute MOVE Amsterdam, Faculty of Human Movement Sciences, VU University, Amsterdam, The Netherlands
- Jamie M Poolton
- Institute of Human Performance, The University of Hong Kong, 5 Sassoon Road, Pokfulam, Hong Kong
- Adam D Gorman
- School of Human Movement Studies, The University of Queensland, Brisbane, Australia
- Movement Science—Skill Acquisition, Australian Institute of Sport, Canberra, Australia
24
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia. Neuropsychologia 2011; 49:3370-6. [PMID: 21871907 DOI: 10.1016/j.neuropsychologia.2011.08.011] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2011] [Revised: 07/28/2011] [Accepted: 08/09/2011] [Indexed: 11/21/2022]
Abstract
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed.
25
Saccade control in natural images is shaped by the information visible at fixation: evidence from asymmetric gaze-contingent windows. Atten Percept Psychophys 2011; 73:266-83. [PMID: 21258925 DOI: 10.3758/s13414-010-0014-5] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When people view images, their saccades are predominantly horizontal and show a positively skewed distribution of amplitudes. How are these patterns affected by the information close to fixation and the features in the periphery? We recorded saccades while observers encoded a set of scenes with a gaze-contingent window at fixation: Features inside a rectangular (Experiment 1) or elliptical (Experiment 2) window were intact; peripheral background was masked completely or blurred. When the window was asymmetric, with more information preserved either horizontally or vertically, saccades tended to follow the information within the window, rather than exploring unseen regions, which runs counter to the idea that saccades function to maximize information gain on each fixation. Window shape also affected fixation and amplitude distributions, but horizontal windows had less of an impact. The findings suggest that saccades follow the features currently being processed and that normal vision samples these features from a horizontally elongated region.
26
Aguilar C, Castet E. Gaze-contingent simulation of retinopathy: some potential pitfalls and remedies. Vision Res 2011; 51:997-1012. [PMID: 21335024 DOI: 10.1016/j.visres.2011.02.010] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2010] [Revised: 02/08/2011] [Accepted: 02/11/2011] [Indexed: 11/26/2022]
Abstract
Many important results in visual neuroscience rely on the use of gaze-contingent retinal stabilization techniques. Our work focuses on the important fraction of these studies that is concerned with the retinal stabilization of visual filters that degrade some specific portions of the visual field. For instance, macular scotomas, often induced by age related macular degeneration, can be simulated by continuously displaying a gaze-contingent mask in the center of the visual field. The gaze-contingent rules used in most of these studies imply only a very minimal processing of ocular data. By analyzing the relationship between gaze and scotoma locations for different oculo-motor patterns, we show that such a minimal processing might have adverse perceptual and oculomotor consequences due mainly to two potential problems: (a) a transient blink-induced motion of the scotoma while gaze is static, and (b) the intrusion of post-saccadic slow eye movements. We have developed new gaze-contingent rules to solve these two problems. We have also suggested simple ways of tackling two unrecognized problems that are a potential source of mismatch between gaze and scotoma locations. Overall, the present work should help design, describe and test the paradigms used to simulate retinopathy with gaze-contingent displays.
Affiliation(s)
- Carlos Aguilar
- Université Aix-Marseille II, CNRS, Institut de Neurosciences Cognitives de la Méditerranée, 31 chemin Joseph Aiguier, 13009 Marseille, France
27
Yiend J. The effects of emotion on attention: A review of attentional processing of emotional information. Cogn Emot 2010. [DOI: 10.1080/02699930903205698] [Citation(s) in RCA: 291] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
28
Rayner K. The 35th Sir Frederick Bartlett Lecture: Eye movements and attention in reading, scene perception, and visual search. Q J Exp Psychol (Hove) 2009; 62:1457-506. [PMID: 19449261 DOI: 10.1080/17470210902816461] [Citation(s) in RCA: 977] [Impact Index Per Article: 65.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.
29
61.1: Invited Paper: Gaze Contingent Displays: Analysis of Saccadic Plasticity in Visual Search. ACTA ACUST UNITED AC 2009. [DOI: 10.1889/1.3256945] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
30
Shinar D. Looks are (almost) everything: where drivers look to get information. HUMAN FACTORS 2008; 50:380-384. [PMID: 18689042 DOI: 10.1518/001872008x250647] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
OBJECTIVE To describe the impact of Rockwell's early eye movements research. BACKGROUND The advent of a new technology enabling measurements of eye movements in natural environments launched the seminal research of a Human Factors pioneer, Tom Rockwell, into how drivers process visual information. METHOD In two seminal Human Factors articles -"Mapping Eye-Movement Pattern to the Visual Scene in Driving: An Exploratory Study" (Mourant & Rockwell, 1970) and "Strategies of Visual Search by Novice and Experienced Drivers" (Mourant & Rockwell, 1972)- Rockwell and his student, Ron Mourant, examined drivers' eye movements in naturalistic driving environments. RESULTS The analyses of the visual fixations revealed systematic relationships between the sources of information the drivers needed to drive safely and the spatial distributions of their visual fixations. In addition, they showed that as drivers gain skill and experience, their pattern of fixations changes in a systematic manner. CONCLUSIONS The research demonstrated that fixations and saccadic eye movements provide important insights into drivers' visual search behavior, information needs, and information acquisition processes. APPLICATION This research has been a cornerstone for a myriad of driving-related studies, by Rockwell and other researchers. Building on Rockwell's pioneering work, these studies used eye-tracking systems to describe cognitive aspects of skill acquisition, and the effects of fatigue and other impairments on the process of attention and information gathering. A novel and potentially revolutionary application of this research is to use eye movement recordings for vehicle control and activation of in-vehicle safety systems.
Affiliation(s)
- David Shinar
- Department of Industrial Engineering and Management, Ben Gurion University of the Negev, Beer Sheva, Israel.
31
EyeRIS: a general-purpose system for eye-movement-contingent display control. Behav Res Methods 2007; 39:350-64. [PMID: 17958145 DOI: 10.3758/bf03193003] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In experimental studies of visual performance, the need often emerges to modify the stimulus according to the eye movements performed by the subject. The eye-movement-contingent display (EMCD) methodology enables accurate control of the position and motion of the stimulus on the retina. EMCD procedures have been used successfully in many areas of vision science, including studies of visual attention or eye movements and physiological characterization of neuronal response properties. Unfortunately, the difficulty of real-time programming and the unavailability of flexible and economical systems that can be easily adapted to the diversity of experimental needs and laboratory setups have prevented the widespread use of EMCD control. This article describes EyeRIS, a general-purpose system for performing EMCD experiments on a Windows computer. Based on a digital signal processor with analog and digital interfaces, this integrated hardware and software system is responsible for sampling and processing oculomotor signals and subject responses and for modifying the stimulus displayed on a CRT according to a gaze-contingent procedure specified by the experimenter. EyeRIS is designed to update the stimulus with a delay of only 10 msec. To thoroughly evaluate EyeRIS's performance, this study was designed to (1) examine the response of the system in a number of EMCD procedures and computational benchmarking tests; (2) compare the accuracy of implementation of one particular EMCD procedure, retinal stabilization, with that produced by a standard tool used for this task; and (3) examine EyeRIS's performance in one of the many EMCD procedures that cannot be executed by means of any other currently available device.
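At its core, a retinal-stabilization update of the kind EyeRIS performs reduces to redrawing the stimulus at a fixed offset from each new gaze sample, so the stimulus holds a constant retinal position across eye movements. The sketch below shows only that principle; the function name is ours, not EyeRIS's API:

```python
def stabilized_position(gaze_xy, retinal_offset_xy):
    """Place the stimulus at a fixed offset from the current gaze sample,
    keeping its retinal position constant across eye movements."""
    gx, gy = gaze_xy
    ox, oy = retinal_offset_xy
    return (gx + ox, gy + oy)

# wherever the eye lands, the stimulus stays 2 units right of gaze
pos = stabilized_position((100, 100), (2, 0))
```

The engineering challenge EyeRIS addresses is not this arithmetic but executing it within a bounded delay (about 10 msec) on every oculomotor sample.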
32
33
A technique for simulating visual field losses in virtual environments to study human navigation. Behav Res Methods 2007; 39:552-60. [PMID: 17958167 DOI: 10.3758/bf03193025] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The following paper describes a new technique for simulating peripheral field losses in virtual environments to study the roles of the central and peripheral visual fields during navigation. Based on Geisler and Perry's (2002) gaze-contingent multiresolution display concept, the technique extends their methodology to work with three-dimensional images that are both transformed and rendered in real time by a computer graphics system. In order to assess the usefulness of this method for studying visual field losses, an experiment was run in which seven participants were required to walk to a target tree in a virtual forest as quickly and efficiently as possible while artificial head and eye-based delays were systematically introduced. Bilinear fits were applied to the mean trial times in order to assess at what delay lengths breaks in performance could be observed. Results suggest that breaks occur beyond the current delays inherent in the system. Increases in trial times across all delays tested were also observed when simulated peripheral field losses were applied compared to full FOV conditions. Possible applications and limitations of the system are discussed. The source code needed to program visual field losses can be found at lions.med.jhu.edu/archive/turanolab/Simulated_Visual_Field_Loss_Code.html.
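Bilinear fits of the kind mentioned above can be reproduced with an ordinary least-squares design that adds an extra slope term beyond a candidate breakpoint. This is our own sketch of such a fit (with made-up data), not the authors' analysis code:

```python
import numpy as np

def bilinear_fit(delays, times, breakpoint):
    """Two-segment linear fit with a fixed breakpoint, returning the
    intercept, the slope before the break, and the additional slope
    that applies beyond it."""
    d = np.asarray(delays, dtype=float)
    t = np.asarray(times, dtype=float)
    # design matrix: intercept, slope before break, extra slope after break
    X = np.column_stack([np.ones_like(d), d, np.maximum(d - breakpoint, 0.0)])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coef

# flat trial times up to a ~100 ms delay, then rising: the performance
# break shows up as a nonzero extra slope beyond the breakpoint
delays = [0, 50, 100, 150, 200, 250]
times = [10, 10, 10, 12, 14, 16]
intercept, slope_before, extra_slope = bilinear_fit(delays, times, breakpoint=100)
```

Scanning candidate breakpoints and keeping the one with the lowest residual error locates the delay at which performance breaks.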
34
Toet A. Gaze directed displays as an enabling technology for attention aware systems. COMPUTERS IN HUMAN BEHAVIOR 2006. [DOI: 10.1016/j.chb.2005.12.010] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
35
Loschky L, McConkie G, Yang J, Miller M. The limits of visual resolution in natural scene viewing. VISUAL COGNITION 2005. [DOI: 10.1080/13506280444000652] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
36
Abstract
Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
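For screen-based GCDs that match display resolution to retinal resolvability, a common modeling choice is a falloff of the form R(e) = e2 / (e2 + e), where e is eccentricity and e2 the half-resolution constant. A sketch (the e2 value here is an illustrative assumption, not taken from this paper):

```python
def relative_resolution(ecc_deg, e2=2.3):
    """Resolution falloff R = e2 / (e2 + ecc): 1.0 at the fovea,
    halved at ecc == e2. The e2 value is an illustrative assumption."""
    return e2 / (e2 + ecc_deg)

foveal = relative_resolution(0.0)  # full resolution at the fovea
half = relative_resolution(2.3)    # half resolution at ecc == e2
```

A screen-based GCD would use such a map, centered on the tracked gaze point, to decide how aggressively to degrade each pixel region.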
Affiliation(s)
- Andrew T Duchowski
- Computer Science Department, Clemson University, Clemson, South Carolina 29634-0974, USA.