1
Faraji Y, van Rijn JW, van Nispen RMA, van Rens GHMB, Melis-Dankers BJM, Koopman J, van Rijn LJ. A toolkit for wide-screen dynamic area of interest measurements using the Pupil Labs Core Eye Tracker. Behav Res Methods 2023; 55:3820-3830. [PMID: 36253600] [PMCID: PMC10616213] [DOI: 10.3758/s13428-022-01991-5]
Abstract
Eye-tracking measurements taken while watching a wide-field screen are challenging to perform: commercially available remote eye trackers typically do not measure more than 35 degrees in eccentricity. Analysis software was developed using Pupil Core eye-tracking data to analyze viewing behavior under circumstances as natural as possible, on a 1.55-m-wide screen that allows free head movements. Additionally, dynamic area of interest (AOI) analyses were performed on data from participants viewing traffic scenes. A toolkit was created that includes software for simple allocation of dynamic AOIs (semi-automatically and manually), measurement of parameters such as dwell time and time to first entry, and overlaying gaze and AOIs on video. To validate the setup and software, participants (n = 11) were asked to look at 13 dynamic AOIs in traffic scenes from appearance to disappearance. Different AOI margins were explored for the included objects. The median ratio between total appearance time and dwell time was about 90% for most objects when appropriate margins were chosen. This validated open-source toolkit is readily available to researchers who want to perform dynamic AOI analyses with the Pupil Core eye tracker, especially when measurements are desired on a wide screen, in fields such as psychology, transportation, and low-vision research.
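The dwell-time and time-to-first-entry parameters described above reduce to a per-sample hit test against the moving AOI box, expanded by a margin. A minimal sketch of that computation (function and data names are my own, not the toolkit's API):

```python
def in_aoi(point, box, margin=0.0):
    """True if a gaze point lies inside an AOI box (left, top, right, bottom)
    expanded on all sides by `margin` (same units as the coordinates)."""
    x, y = point
    left, top, right, bottom = box
    return (left - margin) <= x <= (right + margin) and \
           (top - margin) <= y <= (bottom + margin)

def dwell_metrics(samples, aoi_track, margin=0.0, dt=1 / 60):
    """samples: list of (t, x, y) gaze samples; aoi_track: dict mapping sample
    time t to the AOI box on screen at that moment. Returns total dwell time
    and the time from AOI appearance to first entry (None if never entered)."""
    appear = min(aoi_track)           # first moment the AOI is on screen
    dwell, first_entry = 0.0, None
    for t, x, y in samples:
        box = aoi_track.get(t)
        if box is None:               # AOI not visible at this sample
            continue
        if in_aoi((x, y), box, margin):
            dwell += dt
            if first_entry is None:
                first_entry = t - appear
    return dwell, first_entry
```

Widening `margin` trades precision for robustness against calibration error, which is why the study explores several margins per object class.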
Affiliation(s)
- Yasmin Faraji
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Joris W van Rijn
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Ruth M A van Nispen
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Ger H M B van Rens
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Amsterdam Public Health, Quality of Care, Societal Participation & Health, Mental Health, Aging and Later Life, Amsterdam, The Netherlands
- Bart J M Melis-Dankers
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Jan Koopman
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Laurentius J van Rijn
- Amsterdam UMC location Vrije Universiteit Amsterdam, Ophthalmology, Amsterdam, The Netherlands
- Department of Ophthalmology, Onze Lieve Vrouwe Gasthuis, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Systems & Network Neurosciences, Amsterdam, The Netherlands
2
A hidden Markov model for analyzing eye-tracking of moving objects: Case study in a sustained attention paradigm. Behav Res Methods 2020; 52:1225-1243. [PMID: 31898297] [DOI: 10.3758/s13428-019-01313-2]
Abstract
Eye-tracking provides an opportunity to generate and analyze high-density data relevant to understanding cognition. However, while events in the real world are often dynamic, eye-tracking paradigms are typically limited to assessing gaze toward static objects. In this study, we propose a generative framework, based on a hidden Markov model (HMM), for using eye-tracking data to analyze behavior in the context of multiple moving objects of interest. We apply this framework to analyze data from a recent visual object tracking task paradigm, TrackIt, for studying selective sustained attention in children. Within this paradigm, we present two validation experiments to show that the HMM provides a viable approach to studying eye-tracking data with moving stimuli, and to illustrate the benefits of the HMM approach over some more naive possible approaches. The first experiment utilizes a novel 'supervised' variant of TrackIt, while the second compares directly with judgments made by human coders using data from the original TrackIt task. Our results suggest that the HMM-based method provides a robust analysis of eye-tracking data with moving stimuli, both for adults and for children as young as 3.5-6 years old.
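The core of such an HMM decoder can be sketched in a few lines: hidden states are the candidate objects, emissions penalize gaze-to-object distance under an isotropic Gaussian, and a "sticky" transition prior discourages spurious switches. This is a generic Viterbi sketch under those assumptions, not the authors' implementation:

```python
import math

def viterbi_attended(gaze, objects, sigma=30.0, p_stay=0.95):
    """gaze: list of (x, y) per frame; objects: list of trajectories, each a
    list of (x, y) per frame. Returns the most likely attended object index
    at each frame (Viterbi decoding in log space)."""
    n = len(objects)

    def log_emit(t, k):               # Gaussian log-likelihood up to a constant
        gx, gy = gaze[t]
        ox, oy = objects[k][t]
        return -((gx - ox) ** 2 + (gy - oy) ** 2) / (2 * sigma ** 2)

    log_stay = math.log(p_stay)
    log_switch = math.log((1 - p_stay) / max(n - 1, 1))
    score = [log_emit(0, k) for k in range(n)]
    back = []
    for t in range(1, len(gaze)):
        new_score, pointers = [], []
        for k in range(n):
            best = max(range(n),
                       key=lambda j: score[j] + (log_stay if j == k else log_switch))
            new_score.append(score[best]
                             + (log_stay if best == k else log_switch)
                             + log_emit(t, k))
            pointers.append(best)
        back.append(pointers)
        score = new_score
    # Backtrace from the best final state.
    path = [max(range(n), key=lambda k: score[k])]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return path[::-1]
```

The sticky prior is what distinguishes this from naive nearest-object assignment: brief gaze excursions (noise, blinks) do not flip the decoded attention state unless the evidence persists.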
3
Eye Behavior During Multiple Object Tracking and Multiple Identity Tracking. Vision (Basel) 2019; 3:vision3030037. [PMID: 31735838] [PMCID: PMC6802796] [DOI: 10.3390/vision3030037]
Abstract
We review all eye-tracking studies published to date that have used eye movements to examine multiple object tracking (MOT) or multiple identity tracking (MIT). In both tasks, observers dynamically track multiple moving objects; in MOT the objects are identical, whereas in MIT they have distinct identities. In MOT, observers prefer to fixate on blank space, often the center of gravity formed by the moving targets (the centroid). In contrast, in MIT observers show a strong preference for a target-switching strategy, presumably to refresh and maintain identity-location bindings for the targets. To account for the qualitative differences between MOT and MIT, two mechanisms have been posited: a position-tracking mechanism (MOT) and an identity-tracking mechanism (MOT & MIT). Eye-tracking studies of MOT have also demonstrated that observers execute rescue saccades toward targets that are in danger of becoming occluded or are about to change direction after a collision. Crowded regions attract the eyes, increasing visual acuity for the crowded objects and preventing target loss. We suggest that future studies should concentrate more on MIT, as it more closely resembles tracking in the real world.
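The centroid-looking versus target-looking distinction can be operationalized per gaze sample by comparing the distance to the targets' center of gravity against the distance to the nearest individual target. A rough sketch (the labels and threshold-free rule are my own illustration, not taken from any specific study above):

```python
import math

def centroid(points):
    """Center of gravity of the current target positions."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def classify_gaze(gaze, targets):
    """Label one gaze sample as 'centroid' looking (closer to the targets'
    center of gravity than to any single target) or 'target' looking."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    d_centroid = d(gaze, centroid(targets))
    d_nearest = min(d(gaze, t) for t in targets)
    return "centroid" if d_centroid < d_nearest else "target"
```

Aggregating these labels over a trial gives the proportion of centroid-looking time that differentiates MOT-style from MIT-style viewing.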
4
Mandal S, Kang Z. Using Eye Movement Data Visualization to Enhance Training of Air Traffic Controllers: A Dynamic Network Approach. J Eye Mov Res 2018; 11. [PMID: 33828703] [PMCID: PMC7899734] [DOI: 10.16910/jemr.11.4.1]
Abstract
The Federal Aviation Administration (FAA) has forecast a substantial increase in US air traffic volume, creating high demand for Air Traffic Control Specialists (ATCSs). Training times and passing rates for ATCSs might be improved if expert ATCSs' eye movement (EM) characteristics could be utilized to support effective training. However, effective EM visualization is difficult for a dynamic task (e.g., aircraft conflict detection and mitigation) that involves interrogating multi-element targets that are dynamically moving, appearing, disappearing, and overlapping within a display. To address these issues, a dynamic network-based approach is introduced that integrates adapted visualizations (i.e., time-frame networks and normalized dot/bar plots) with measures used in network science (i.e., indegree, closeness, and betweenness) to provide in-depth EM analysis. The proposed approach was applied to an aircraft conflict task using a high-fidelity simulator with veteran ATCSs and pseudo-pilots. Results show that ATCSs' visual attention to multi-element dynamic targets can be effectively interpreted and supported through multiple pieces of evidence obtained from the various visualizations and associated measures. In addition, we discovered that fewer eye fixations or shorter fixation durations on a target do not necessarily indicate the target is less important when analyzing the flow of visual attention within a network. The results show promise for cohesively analyzing and visualizing various eye movement characteristics to better support training.
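The network measures mentioned above build on a directed transition graph whose nodes are fixated targets and whose edge weights count gaze transitions within a time frame. A minimal sketch of the graph construction and weighted indegree (names are illustrative; the paper's tooling is far richer):

```python
from collections import defaultdict

def attention_network(fixated_targets):
    """Directed transition graph from a sequence of fixated target labels;
    edge (a, b) counts how often gaze moved from target a to target b."""
    edges = defaultdict(int)
    for a, b in zip(fixated_targets, fixated_targets[1:]):
        if a != b:                    # ignore refixations on the same target
            edges[(a, b)] += 1
    return dict(edges)

def indegree(edges, node):
    """Weighted indegree: total gaze transitions arriving at a target."""
    return sum(w for (_, dst), w in edges.items() if dst == node)
```

A target with low fixation count but high indegree or betweenness can still be pivotal to the scan flow, which is the paper's point about fixation counts alone being misleading.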
5
A guideline for integrating dynamic areas of interests in existing set-up for capturing eye movement: Looking at moving aircraft. Behav Res Methods 2018; 49:822-834. [PMID: 27287446] [DOI: 10.3758/s13428-016-0745-x]
Abstract
Today, capturing the behavior of the human eye is considered a standard method for measuring the information-gathering process and thereby gaining insight into cognitive processes. Due to the dynamic character of most task environments, however, there is still no structured and automated approach for analyzing eye movement in combination with moving objects. In this article, we present a guideline for advanced gaze analysis called IGDAI (Integration Guideline for Dynamic Areas of Interest). Applying IGDAI allows gathering dynamic areas of interest (AOIs) and simplifies their combination with eye movement data. The first step of IGDAI defines the basic requirements for the experimental setup, including the embedding of an eye tracker. The second step covers storing task-environment information for the dynamic AOI analysis; implementation examples in XML are presented that fulfill the requirements of most dynamic task environments. The last step provides algorithms to combine the captured eye movement with the dynamic AOIs. A verification study was conducted in which participants viewed an air traffic controller environment and had to distinguish between different types of dynamic objects. The results show that, in comparison to static AOIs, IGDAI allows a faster and more detailed view of the distribution of eye movement.
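Step two of the guideline (storing task-environment information in XML) might look like the following; the element and attribute names here are invented for illustration and are not IGDAI's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical per-frame AOI layout: one <aoi> per object, one <frame> per
# time step, with a top-left corner and a size.
AOI_XML = """
<aois>
  <aoi id="aircraft1">
    <frame t="0" x="10" y="20" w="40" h="30"/>
    <frame t="1" x="12" y="21" w="40" h="30"/>
  </aoi>
</aois>
"""

def load_dynamic_aois(xml_text):
    """Parse per-frame AOI boxes into {aoi_id: {frame: (l, t, r, b)}}."""
    aois = {}
    for aoi in ET.fromstring(xml_text).iter("aoi"):
        track = {}
        for f in aoi.iter("frame"):
            x, y = float(f.get("x")), float(f.get("y"))
            w, h = float(f.get("w")), float(f.get("h"))
            track[int(f.get("t"))] = (x, y, x + w, y + h)
        aois[aoi.get("id")] = track
    return aois
```

Once AOIs are in this frame-indexed form, step three reduces to looking up the boxes at each gaze sample's frame and hit-testing against them.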
6
Le Naour T, Bresciani JP. A skeleton-based approach to analyzing oculomotor behavior when viewing animated characters. J Eye Mov Res 2017; 10. [PMID: 33828668] [PMCID: PMC7140781] [DOI: 10.16910/jemr.10.5.7]
Abstract
Knowing what people look at and understanding how they analyze the dynamic gestures of their peers is an exciting challenge. In this context, we propose a new approach to quantifying and visualizing the oculomotor behavior of viewers watching the movements of animated characters in dynamic sequences. Using this approach, we were able to illustrate, on a 'heat mesh', the gaze distribution of one or several viewers, i.e., the time spent on each part of the body, and to visualize viewers' timelines, which are linked to the heat mesh. Our approach notably provides an 'intuitive' overview combining the spatial and temporal characteristics of the gaze pattern, thereby constituting an efficient tool for quickly comparing the oculomotor behaviors of different viewers. The functionalities of our system are illustrated through two use case experiments with 2D and 3D animated media sources, respectively.
7
Bodala IP, Abbasi NI, Bezerianos A, Al-Nashash H, Thakor NV. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:2478-2481. [PMID: 29060401] [DOI: 10.1109/embc.2017.8037359]
Abstract
Eye tracking offers a practical solution for monitoring cognitive performance in real-world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and needs further investigation. In this paper, we study the possibility of a novel computer-vision-assisted eye-tracking analysis based on fixations. Eye movement data were obtained from a long-duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm, implemented with the VLFeat toolbox, was used to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to understand the dynamics of fixation position between the target AOI and the non-target AOIs: the fixation score is maximal when subjects focus on the target AOI and diminishes when they gaze at non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time (r = -0.2253, p < 0.05). This implies that with vigilance decrement the fixation score decreases, as visual attention shifts away from the target objects, resulting in an increase in reaction time.
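A scoring rule with the properties described (maximal on the target AOI, diminishing on non-targets) can be sketched as below, together with the Pearson correlation used to relate it to reaction times. The +1/-1/0 weighting is my own simplification, not necessarily the paper's exact definition:

```python
import math

def fixation_score(fixations, target, nontargets):
    """Average over fixations: +1 on the target AOI, -1 on any non-target AOI,
    0 elsewhere. Boxes are (left, top, right, bottom)."""
    def inside(p, box):
        return box[0] <= p[0] <= box[2] and box[1] <= p[1] <= box[3]
    total = 0
    for p in fixations:
        if inside(p, target):
            total += 1
        elif any(inside(p, box) for box in nontargets):
            total -= 1
    return total / len(fixations)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Computing `pearson_r` between per-window fixation scores and reaction times reproduces the kind of negative-correlation analysis reported above.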
8
Visualizing the Intellectual Structure of Eye Movement Research in Cartography. ISPRS Int J Geo-Inf 2016. [DOI: 10.3390/ijgi5100168]
9
Gaze position lagging behind scene content in multiple object tracking: Evidence from forward and backward presentations. Atten Percept Psychophys 2016; 78:2456-2468. [DOI: 10.3758/s13414-016-1178-4]
10
Abstract
User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions change the extent and/or level of detail of the stimulus. In eye-tracking studies, therefore, while a user's gaze rests at a particular screen position over a period of time, the information contained at that position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants; existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that overcomes this problem. By combining eye tracking with user logging (mouse and keyboard actions) for cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach in two case studies and discuss the advantages and disadvantages of the methodology. Furthermore, we discuss its applicability to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye-tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields.
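The screen-to-geographic referencing hinges on replaying the logged pan/zoom events to know the viewport at each moment, then mapping pixels to coordinates. A deliberately simplified sketch (a flat degrees-per-pixel model with zoom about the top-left corner; real map projections require proper inverse projection, and the event format here is invented):

```python
def replay_viewport(events, viewport):
    """viewport: (lon0, lat0, deg_per_px) of the screen's top-left corner.
    Replays logged ('pan', dx_px, dy_px) and ('zoom', factor, 0) events."""
    lon0, lat0, dpp = viewport
    for kind, a, b in events:
        if kind == "pan":             # map content dragged by (a, b) pixels
            lon0 -= a * dpp
            lat0 += b * dpp
        elif kind == "zoom":          # zoom in by factor a about the top-left
            dpp /= a
    return lon0, lat0, dpp

def screen_to_geo(px, py, viewport):
    """Gaze pixel -> (lon, lat) under the current viewport; screen y grows
    downward while latitude grows upward."""
    lon0, lat0, dpp = viewport
    return lon0 + px * dpp, lat0 - py * dpp
```

With this mapping, a fixed gaze pixel resolves to different geographic objects before and after a pan or zoom, which is exactly the ambiguity the logging is meant to resolve.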
11
Abstract
BACKGROUND: Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index for assessing operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not yet been investigated. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees.
METHODS: We recorded the eye movements of 18 surgical residents, using a mobile eye-tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity: the Clip Applying, Cutting Big, and Translocation of Objects exercises. We also measured performance accuracy and subjective ratings of complexity.
RESULTS: Gaze entropy and velocity increased linearly with task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly.
CONCLUSIONS: Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices for assessing residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.
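Gaze entropy of the kind reported above can be computed from the sequence of fixated AOIs: higher entropy means a less stereotyped, more random scan pattern. One common formulation is transition entropy, sketched here as an assumption rather than the paper's exact metric:

```python
import math
from collections import Counter

def gaze_transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of the distribution of AOI-to-AOI transitions.
    Returns 0.0 when the scanpath never leaves one AOI."""
    transitions = [(a, b) for a, b in zip(aoi_sequence, aoi_sequence[1:]) if a != b]
    if not transitions:
        return 0.0
    counts = Counter(transitions)
    n = len(transitions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly repetitive A-B-A-B scan yields low entropy, while a scan that visits many AOI pairs about equally often approaches the maximum log2 of the number of distinct transitions.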
12
Abstract
A novel open-source R extension package for general-purpose analysis of eye-tracking results is proposed. Currently supported features are data loading from SMI eye trackers, several methods of fixation detection, and various imaging techniques for raw data and detected fixations (time sequence, scanpath, heatmap, and dynamic visualization). The modular structure of the package and the detailed description of each function provide a convenient way to further extend its functionality. Effective use of the package requires knowledge of the R programming language and environment.
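One of the fixation-detection methods a package like this typically offers is the dispersion-threshold algorithm (I-DT): grow a sample window while its bounding-box dispersion stays under a threshold. A compact sketch of the algorithm (not the package's actual code; the package itself is driven from R, whereas this illustration uses Python like the other examples here):

```python
def idt_fixations(samples, max_dispersion=1.0, min_samples=3):
    """Dispersion-threshold (I-DT) fixation detection on (x, y) samples.
    Returns (start_index, end_index) pairs, inclusive, one per fixation."""
    def dispersion(a, b):             # bbox width + height of samples[a:b]
        xs = [p[0] for p in samples[a:b]]
        ys = [p[1] for p in samples[a:b]]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(i, j) <= max_dispersion:
            while j < n and dispersion(i, j + 1) <= max_dispersion:
                j += 1                # greedily extend the window
            fixations.append((i, j - 1))
            i = j                     # restart after the fixation
        else:
            i += 1                    # slide past a saccade sample
    return fixations
```

The two thresholds (spatial dispersion and minimum duration in samples) are the knobs that such packages usually expose to the analyst.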
Affiliation(s)
- Pavel A Marmalyuk
- Moscow Municipal University of Psychology and Education, Moscow, Russia
13
van Gog T, Jarodzka H. Eye Tracking as a Tool to Study and Enhance Cognitive and Metacognitive Processes in Computer-Based Learning Environments. International Handbook of Metacognition and Learning Technologies 2013. [DOI: 10.1007/978-1-4419-5546-3_10]
14
Abstract
This article presents GazeAlyze, a software package written as a MATLAB (MathWorks Inc., Natick, MA) toolbox for the analysis of eye movement data. GazeAlyze was developed for batch processing of multiple data files and was designed as a framework with extendable modules. It encompasses the main functions of the entire processing queue for eye movement data recorded with static visual stimuli: detecting and filtering artifacts, detecting events, generating regions of interest, generating spreadsheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze also includes functions for correcting eye movement data for displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event-detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behav Res Methods Instrum Comput 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU General Public License and allows code modifications so that the program's performance can be adjusted to a user's scientific requirements.
15
Abstract
When observing someone else acting on an object, people implement goal-specific eye movement programs that are driven by their own motor representation of the observed action. Usually, however, we observe people acting in contexts where more objects, different in shape and size, are present. Is our brain able to select the intended target even when there are different objects in the visual scene? And if this is the case, what kind of information does our motor system capitalize on? We recorded eye movements while participants observed an actor reaching for and grasping one of two objects requiring two different kinds of grip to be picked up. In a control condition, the actor merely reached for and touched one of the two objects without preshaping her hand according to the target features. Results showed higher accuracy and earlier saccadic movements when participants observed an actually grasping hand than when they observed a mere reaching hand devoid of any kind of target-related preshaping. This clearly suggests that the hand preshaping provided the observer with enough motor cues to proactively and reliably saccade toward the object to be grasped, thus identifying it even when the action target was not previously known. Our findings strongly corroborate the direct matching hypothesis suggesting that in processing others' actions, we take advantage of the same motor knowledge that enables us to efficiently perform those actions.
Affiliation(s)
- Ettore Ambrosini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d'Annunzio,” and Institute for Advanced Biomedical Technologies, Foundation University “G. d'Annunzio,” Chieti
- Marcello Costantini
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d'Annunzio,” and Institute for Advanced Biomedical Technologies, Foundation University “G. d'Annunzio,” Chieti
16
Livingstone-Lee SA, Murchison S, Zeman PM, Gandhi M, van Gerven D, Stewart L, Livingston NJ, Skelton RW. Simple gaze analysis and special design of a virtual Morris water maze provides a new method for differentiating egocentric and allocentric navigational strategy choice. Behav Brain Res 2011; 225:117-25. [PMID: 21771614] [DOI: 10.1016/j.bbr.2011.07.005]
Abstract
We present a novel method that combines eye tracking with specially designed virtual environments to provide objective evidence of navigational strategy selection. A simple, inexpensive video camera with an easily built infrared LED array is used to capture eye movements at 60 Hz. Simple algorithms analyze gaze position at the start of each virtual maze trial to identify the stimuli used for navigational orientation. To validate the methodology, human participants were tested in two virtual environments that differed with respect to the features usable for navigation and that forced participants to use one or the other of two well-known navigational strategies. Because the environmental features for the two kinds of navigation were clustered in different regions of the environment (and the video display), a simple analysis of gaze position during the first (i.e., orienting) second of each trial revealed which features were being attended to, and therefore which navigational strategy was about to be employed on the upcoming trial.
17
Huff M, Papenmeier F, Jahn G, Hesse FW. Eye movements across viewpoint changes in multiple object tracking. Visual Cognition 2010. [DOI: 10.1080/13506285.2010.495878]