1
Nolte D, Vidal De Palol M, Keshava A, Madrid-Carvajal J, Gert AL, von Butler EM, Kömürlüoğlu P, König P. Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations. Atten Percept Psychophys 2024. [PMID: 38977612] [DOI: 10.3758/s13414-024-02917-3]
Abstract
Extensive research conducted in controlled laboratory settings has prompted an inquiry into how results can be generalized to real-world situations influenced by the subjects' actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combining it with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, and we cut the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNav algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and timing of the onset of events. Finally, investigating the correlation between single trials and the average ERP and ERSP identified that fixation-onset ERSPs are less time sensitive, require fewer repetitions of the same behavior, and are potentially better suited to study EEG signatures in naturalistic settings. We modified, designed, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
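The velocity-based classification step described above can be illustrated with a minimal I-VT-style sketch in Python; the threshold value and array layout are illustrative assumptions, and the adaptive segmentation and translational-movement correction of the actual REMoDNav-based pipeline are not reproduced here.

```python
import numpy as np

def classify_ivt(gaze_deg, timestamps, vel_threshold=50.0):
    """Label each sample as 'saccade' or 'gaze' by angular velocity.

    gaze_deg: (N, 2) array of gaze angles in degrees (azimuth, elevation).
    timestamps: (N,) array of sample times in seconds.
    vel_threshold: velocity cutoff in deg/s (task- and noise-dependent).
    """
    dt = np.diff(timestamps)
    step = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)  # deg between samples
    velocity = step / dt                                      # deg/s
    labels = np.where(velocity > vel_threshold, "saccade", "gaze")
    return np.append(labels, labels[-1])  # pad to input length

# Toy usage: 600 samples at 120 Hz with one fast 8-deg shift in the middle.
t = np.arange(600) / 120.0
g = np.zeros((600, 2)); g[300:, 0] = 8.0
print(np.unique(classify_ivt(g, t), return_counts=True))
```

In practice the threshold would be tuned, or adapted per data segment as the noise level varies, which is the point of the segment-wise processing the abstract mentions.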
Affiliation(s)
- Debora Nolte, Marc Vidal De Palol, Ashima Keshava, John Madrid-Carvajal, Anna L Gert, Eva-Marie von Butler, Pelin Kömürlüoğlu, Peter König: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Peter König: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Taore A, Tiang M, Dakin SC. (The limits of) eye-tracking with iPads. J Vis 2024; 24:1. [PMID: 38953861] [PMCID: PMC11223623] [DOI: 10.1167/jov.24.7.1]
Abstract
Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4C infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than those from the Tobii (mean absolute error of 3.2° ± 2.0° compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts were moderately correlated across devices (r = 0.4-0.73, all p < 0.05). For tasks eliciting saccades >8°, we observed moderate correlations in estimated saccade speed and amplitude (r = 0.4-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of estimated smooth-pursuit speed from the iPad, and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to examine eye-tracking data carefully to remove artifacts and outliers.
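The accuracy and precision figures quoted above follow standard eye-tracking data-quality definitions, which can be computed roughly as in this sketch; the exact pipeline used in the paper may differ.

```python
import numpy as np

def accuracy_deg(gaze, target):
    """Accuracy: mean absolute error (deg) between gaze samples (N, 2)
    and a known fixation target (2,), both in degrees of visual angle."""
    return float(np.mean(np.linalg.norm(gaze - target, axis=1)))

def precision_rms_deg(gaze):
    """Precision: RMS of sample-to-sample distances (deg) during a steady fixation."""
    d = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy usage: noisy fixation centered 1 deg away from a target at (0, 0).
rng = np.random.default_rng(0)
g = rng.normal(loc=(1.0, 0.0), scale=0.1, size=(500, 2))
print(accuracy_deg(g, np.array([0.0, 0.0])), precision_rms_deg(g))
```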
Affiliation(s)
- Aryaman Taore, Michelle Tiang, Steven C Dakin: School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Steven C Dakin: UCL Institute of Ophthalmology, University College London, London, United Kingdom
3
Smith SL, Maldonado-Codina C, Morgan PB, Read ML. Gaze and behavioural metrics in the refractive correction of presbyopia. Ophthalmic Physiol Opt 2024; 44:774-786. [PMID: 38578134] [DOI: 10.1111/opo.13310]
Abstract
PURPOSE: To investigate gaze and behavioural metrics at different viewing distances with multifocal contact lenses (MFCLs), single vision contact lenses (SVCLs) and progressive addition lenses (PALs).
METHODS: Fifteen presbyopic contact lens wearers participated over five separate study visits. At each visit, participants were randomly assigned to wear one of five refractive corrections: habitual PAL spectacles, delefilcon A (Alcon Inc.) MFCLs, or one of three separate pairs of delefilcon A single vision lenses worn as distance, intermediate and near corrections. Participants wore a Pupil Core headset to record eye and head movements while performing three visual tasks: reading, visual search and scene observation. Data were analysed using linear regression and post-hoc testing. Parameters of interest included gaze metrics (fixation duration, head movement) and behavioural metrics (reading speed, reading accuracy, visual search time).
RESULTS: Reading speed with SVCLs was significantly faster than with MFCLs and PAL spectacles (F = 16.3, p < 0.0001). The refractive correction worn did not influence visual search times (F = 0.16, p = 0.85). Fixation duration was significantly affected by the type of visual task (F = 60.2, p < 0.001), and an interaction was observed between viewing distance and refractive correction (F = 4.3, p = 0.002). There was significantly more horizontal and vertical head movement (F = 3.2, p = 0.01 and F = 3.3, p = 0.01, respectively) during visual search when wearing PAL spectacles compared with SVCLs or MFCLs.
CONCLUSION: The type of refractive correction affects behavioural metrics such as reading speed, as well as gaze behaviour as reflected in horizontal and vertical head movements. The findings suggest that, under certain conditions, wearers of MFCLs make fewer head movements than wearers of PAL spectacles. Gaze behaviour metrics offer a new approach to comparing and understanding contact lens and spectacle performance, with potential applications including peripheral optical designs for myopia management.
Affiliation(s)
- Sarah L Smith, Carole Maldonado-Codina, Philip B Morgan, Michael L Read: Eurolens Research, Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
4
Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024; 28:3597-3612. [PMID: 38421842] [PMCID: PMC11262011] [DOI: 10.1109/jbhi.2024.3371893]
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that could be significantly impacted by ML: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how that equipment aligns with the study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development, controller command, and voice recording; 3) the ML methodology utilized, depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens the applicability of ML in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physician error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
5
Rose JM, Ganbold O, Rose AM, Thibodeau JC, Rotaru K. Overcoming resistance to belief revision and correction of misinformation beliefs: psychophysiological and behavioral effects of a counterfactual mindset. Sci Rep 2024; 14:12493. [PMID: 38822014] [PMCID: PMC11143297] [DOI: 10.1038/s41598-024-63230-5]
Abstract
In a series of experiments involving beliefs and misinformation beliefs, we find that individuals who are prompted with a counterfactual mindset are significantly more likely to change their existing beliefs when presented with evidence that contradicts their beliefs. While research finds that beliefs that are considered part of one's identity are highly resistant to change in the face of evidence that challenges these beliefs, four experiments provide evidence that counterfactual generation causes individuals to adjust beliefs and correct misinformation beliefs in response to contradicting evidence. Indeed, we find that a counterfactual mindset was effective in promoting incorporation of accurate facts and causing individuals to revise misinformation beliefs about COVID vaccination safety for a large sample of individuals who have rejected COVID vaccinations. Finally, the results of the psychophysiological experiment reveal that counterfactual generation alters decision makers' search strategies, increases their cognitive arousal in response to evidence that challenges their beliefs, and increases their desire to seek out disconfirming evidence. Overall, the four experiments indicate that counterfactual generation can effectively activate mindsets that increase individuals' willingness to evaluate evidence that contradicts their beliefs and adjust their beliefs in response to evidence.
Affiliation(s)
- Jacob M Rose, Anna M Rose: Department of Accounting, University of Northern Colorado, Greeley, CO 80639, USA
- Odkhishig Ganbold: Department of Medicine at Royal Melbourne Hospital, Melbourne Medical School, The University of Melbourne, Melbourne, VIC 3050, Australia
- Jay C Thibodeau: Department of Accounting, Bentley University, Waltham, MA 02452, USA
- Kristian Rotaru: Department of Accounting, Monash Business School, Monash University, Caulfield East, VIC 3145, Australia; The Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomedical Imaging Facility, Monash University, BrainPark, Clayton, VIC 3800, Australia
6
Johari K, Bhardwaj R, Kim JJ, Yow WQ, Tan UX. Eye movement analysis for real-world settings using segmented linear regression. Comput Biol Med 2024; 174:108364. [PMID: 38599067] [DOI: 10.1016/j.compbiomed.2024.108364]
Abstract
Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, in uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy, making it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes Improved Naive Segmented Linear Regression (INSLR), an approach that approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves the existing NSLR approach by employing a hypotheses-clustering algorithm, which redefines the final hypothesis estimation in two steps: (1) at each timestamp, measure the likelihood of each hypothesis in the candidate list using its least-squares fit score and its distance from the k-means of the hypotheses in the list; (2) filter hypotheses based on a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experimental results show that INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and reduces the error in gaze prediction from a low-precision device for 71.1% of samples. This improvement in denoising quality is further validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and by enhanced sensitivity in detecting variations in attention induced by distractors during online lectures.
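The piecewise-linear approximation at the heart of NSLR/INSLR can be sketched in a toy form with a single breakpoint; the candidate-hypothesis clustering and thresholding that INSLR adds are not reproduced here, and all parameter choices below are illustrative.

```python
import numpy as np

def fit_two_segment(t, y):
    """Fit a 1-D signal with two least-squares line segments by scanning all
    candidate breakpoints and keeping the split with the smallest total
    squared error. NSLR/INSLR generalize this idea to many segments, with
    clustering over candidate hypotheses."""
    best = (np.inf, None)
    for k in range(2, len(t) - 2):                # candidate breakpoint index
        sse = 0.0
        for sl in (slice(0, k), slice(k, None)):  # fit each side separately
            coef = np.polyfit(t[sl], y[sl], 1)
            sse += float(np.sum((np.polyval(coef, t[sl]) - y[sl]) ** 2))
        if sse < best[0]:
            best = (sse, k)
    return best  # (total SSE, breakpoint index)

# Toy usage: slow drift followed by a steeper ramp, plus noise.
t = np.linspace(0, 2, 200)
y = np.where(t < 1, 0.5 * t, 0.5 + 4 * (t - 1))
y += np.random.default_rng(1).normal(0, 0.05, 200)
print(fit_two_segment(t, y))  # breakpoint should land near index 100
```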
Affiliation(s)
- Kritika Johari, U-Xuan Tan: Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
- Rishabh Bhardwaj: Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore
- Jung-Jae Kim: Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore
- Wei Quin Yow: Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore
7
Drews M, Dierkes K. Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behav Res Methods 2024. [PMID: 38594440] [DOI: 10.3758/s13428-024-02360-0]
Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking.
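Strategy (ii), adapting algorithm sensitivity to head-motion intensity, might look roughly like the following sketch; the base threshold and gain are invented placeholder values, not the tuned parameters from the paper.

```python
import numpy as np

def adaptive_velocity_threshold(head_speed_deg_s, base=30.0, gain=0.5):
    """Scale a base gaze-velocity threshold (deg/s) with head-motion
    intensity, so gaze-stabilizing eye movements during head motion are
    less likely to be misclassified as non-fixational gaze shifts.
    `base` and `gain` are illustrative assumptions."""
    return base + gain * np.asarray(head_speed_deg_s)

# Toy usage: still head vs. a 60 deg/s head turn.
print(adaptive_velocity_threshold(0.0))   # 30.0 deg/s
print(adaptive_velocity_threshold(60.0))  # 60.0 deg/s
```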
Affiliation(s)
- Michael Drews, Kai Dierkes: Pupil Labs, Sanderstraße 28, 12047 Berlin, Germany
8
Callahan-Flintoft C, Jensen E, Naeem J, Nonte MW, Madison AM, Ries AJ. A Comparison of Head Movement Classification Methods. Sensors (Basel) 2024; 24:1260. [PMID: 38400418] [PMCID: PMC10893452] [DOI: 10.3390/s24041260]
Abstract
To understand human behavior, it is essential to study it in the context of natural movement in immersive, three-dimensional environments. Virtual reality (VR), with head-mounted displays, offers an unprecedented compromise between ecological validity and experimental control. However, such technological advancements mean that new data streams will become more widely available, and therefore, a need arises to standardize methodologies by which these streams are analyzed. One such data stream is that of head position and rotation tracking, now made easily available from head-mounted systems. The current study presents five candidate algorithms of varying complexity for classifying head movements. Each algorithm is compared against human rater classifications and graded based on the overall agreement as well as biases in metrics such as movement onset/offset time and movement amplitude. Finally, we conclude this article by offering recommendations for the best practices and considerations for VR researchers looking to incorporate head movement analysis in their future studies.
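A minimal version of the simplest kind of candidate algorithm, thresholding head angular speed computed from the headset's orientation quaternions, could look like this sketch; the threshold and data layout are assumptions, not the paper's specification.

```python
import numpy as np

def angular_speed_deg_s(quats, timestamps):
    """Angular speed of the head from unit quaternions (N, 4) as (w, x, y, z).
    The rotation angle between consecutive orientations q1, q2 is
    2*arccos(|<q1, q2>|); dividing by dt gives deg/s."""
    dots = np.abs(np.sum(quats[1:] * quats[:-1], axis=1)).clip(-1.0, 1.0)
    angles = 2.0 * np.degrees(np.arccos(dots))
    return angles / np.diff(timestamps)

def label_head_movement(speed, threshold=20.0):
    """Binary movement labels from a fixed speed threshold (deg/s),
    the simplest of the candidate classifiers such a comparison might use."""
    return speed > threshold

# Toy usage: a slow yaw ramp (45 deg over 1 s) sampled at 90 Hz.
t = np.arange(90) / 90.0
half = np.radians(np.linspace(0, 45, 90) / 2.0)
q = np.stack([np.cos(half), np.zeros(90), np.sin(half), np.zeros(90)], axis=1)
print(label_head_movement(angular_speed_deg_s(q, t)).mean())  # 1.0: all moving
```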
Affiliation(s)
- Chloe Callahan-Flintoft, Anna M. Madison, Anthony J. Ries: U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA
- Emily Jensen: Department of Computer Science, University of Colorado Boulder, Boulder, CO 80303, USA
- Jasim Naeem, M. W. Nonte: DCS Corporation, Alexandria, VA 22310, USA
- Anna M. Madison, Anthony J. Ries: Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO 80840, USA
9
Angeles D, Kurtek S, Klein E, Brinkman M, Ferketich A. Geometric framework for statistical analysis of eye tracking heat maps, with application to a tobacco waterpipe study. J Appl Stat 2023; 51:1191-1209. [PMID: 38628449] [PMCID: PMC11018012] [DOI: 10.1080/02664763.2023.2233143]
Abstract
Health warning labels have been found to increase awareness of the harmful effects of tobacco products. An eye tracking study was conducted to determine the optimal placement and type of health warning label on tobacco waterpipes. Participants viewed images that each contained one of four waterpipes with one of three types of warning label placed in one of three locations. Typically, statistical analysis of eye tracking data is based on summary statistics such as total dwell time, duration score, and number of visits to an area of interest. However, these summary statistics fail to capture the complete variability in a participant's eye movement. Instead, we propose to estimate heat maps defined on the entire image domain from the raw two-dimensional coordinates of eye movement via kernel density estimation. For statistical analysis of the heat maps, we adopt the Fisher-Rao Riemannian geometric framework, which enables computationally efficient comparisons of heat maps, statistical summarization and exploration of variability in a sample of heat maps, and metric-based hierarchical clustering. We apply this framework to eye tracking data from the tobacco waterpipe study and comment on the results in the context of the optimal placement and type of health warning labels on tobacco waterpipes.
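The heat-map construction step, a kernel density estimate over raw gaze coordinates, can be sketched as follows; the Fisher-Rao framework used to compare the resulting heat maps is beyond this snippet, and the grid size and normalization are illustrative choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def gaze_heat_map(x, y, grid_size=64):
    """Kernel density estimate of raw gaze coordinates over the image
    domain, evaluated on a regular grid and normalized to sum to 1."""
    kde = gaussian_kde(np.vstack([x, y]))
    gx, gy = np.meshgrid(np.linspace(x.min(), x.max(), grid_size),
                         np.linspace(y.min(), y.max(), grid_size))
    z = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(grid_size, grid_size)
    return z / z.sum()

# Toy usage: two clusters of fixations in pixel coordinates.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal((100, 80), 10, (300, 2)),
                 rng.normal((400, 300), 15, (200, 2))])
print(gaze_heat_map(pts[:, 0], pts[:, 1]).shape)  # (64, 64)
```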
Affiliation(s)
- David Angeles, Elizabeth Klein, Marielle Brinkman, Amy Ferketich: College of Public Health, The Ohio State University, Columbus, OH, USA
- Sebastian Kurtek: Department of Statistics, The Ohio State University, Columbus, OH, USA
10
Kredel R, Hernandez J, Hossner EJ, Zahno S. Eye-tracking technology and the dynamics of natural gaze behavior in sports: an update 2016-2022. Front Psychol 2023; 14:1130051. [PMID: 37359890] [PMCID: PMC10286576] [DOI: 10.3389/fpsyg.2023.1130051]
Abstract
Updating and complementing a previous review on eye-tracking technology and the dynamics of natural gaze behavior in sports, this short review focuses on progress concerning researched sports tasks, applied methods of gaze data collection and analysis, and derived gaze measures for the interval 2016-2022. To that end, a systematic review according to the PRISMA guidelines was conducted, searching Web of Science, PubMed Central, SPORTDiscus, and ScienceDirect for the keywords: eye tracking, gaze behavio*r, eye movement, and visual search. Thirty-one studies were identified for the review. On the one hand, a generally increased research interest and a wider range of researched sports, with a particular increase in studies of officials' gaze behavior, were found. On the other hand, a general lack of progress concerning sample sizes, numbers of trials, employed eye-tracking technology and gaze analysis procedures must be acknowledged. Nevertheless, first attempts at automated gaze-cue allocation (GCA) in mobile eye-tracking studies were seen, potentially enhancing objectivity and alleviating the manual workload inherently associated with conventional gaze analyses. Reinforcing the claims of the previous review, this review concludes by describing four distinct technological approaches to automating GCA, some of which are specifically suited to tackle the validity and generalizability issues associated with the current limitations of mobile eye-tracking studies on natural gaze behavior in sports.
11
Schröder B, Mühlberger A. Measuring attentional bias in smokers during and after psychosocial stress induction with a Trier Social Stress Test in virtual reality via eye tracking. Front Psychol 2023; 14:1129422. [PMID: 37063522] [PMCID: PMC10103713] [DOI: 10.3389/fpsyg.2023.1129422]
Abstract
Introduction: Attentional bias (AB) is considered an important factor not only in the etiology of addiction, but also with respect to relapse. However, evidence for the predictive ability of AB for relapse is not robust. One reason for this might be fluctuations of AB due to stress. Therefore, the current study investigated whether AB was present during and after stress induction and whether AB was enhanced by stress induction.
Methods: A virtual reality (VR) adaptation of the Trier Social Stress Test (VR-TSST) was used to induce psychosocial stress in smokers (n = 34) and non-smokers (n = 37), followed by a novel free-viewing task in VR. Eye tracking data were recorded to examine gaze behavior toward smoking-related and neutral stimuli presented in the VR-TSST and the free-viewing task.
Results: Stress ratings increased significantly from baseline to post VR-TSST in smokers and non-smokers. During the VR-TSST, we observed more frequent, longer, and earlier fixations on smoke-related compared with neutral stimuli, without significant group differences. However, in the free-viewing task following the stress induction, a specific AB of smokers, in terms of earlier and longer fixations on smoke stimuli, was found.
Conclusion: The results indicate that AB is not a persistent trait in smokers but is context dependent. It is suggested that emotional learning processes, such as smoking in the context of relief after stress, may contribute to changes of AB both in terms of increased initial attention and deeper stimulus processing. Additionally, the potential of the VR-TSST to induce psychosocial stress was replicated.
12
Skaramagkas V, Ktistakis E, Manousos D, Kazantzaki E, Tachos NS, Tripoliti E, Fotiadis DI, Tsiknakis M. eSEE-d: Emotional State Estimation Based on Eye-Tracking Dataset. Brain Sci 2023; 13:589. [PMID: 37190554] [DOI: 10.3390/brainsci13040589]
Abstract
Affective state estimation is a research field that has gained increased attention from the research community in the last decade. Two of the main catalysts for this are the advancement of data analysis using artificial intelligence and the availability of high-quality video. Unfortunately, benchmarks and public datasets are limited, making the development of new methodologies and the implementation of comparative studies essential. The current work presents the eSEE-d database, a resource for emotional State Estimation based on Eye-tracking data. Eye movements of 48 participants were recorded as they watched 10 emotion-evoking videos, each of them followed by a neutral video. Participants rated four emotions (tenderness, anger, disgust, sadness) on a scale from 0 to 10, and these ratings were later translated into emotional arousal and valence levels. Furthermore, each participant completed three self-assessment questionnaires. An extensive analysis of the participants' self-assessment questionnaire scores, as well as their ratings during the experiments, is presented. Moreover, eye and gaze features were extracted from the low-level recorded eye metrics, and their correlations with the participants' ratings are investigated. Finally, we address the challenge of classifying arousal and valence levels based solely on eye and gaze features, with promising results. In particular, the Deep Multilayer Perceptron (DMLP) network we developed achieved an accuracy of 92% in distinguishing positive valence from non-positive and 81% in distinguishing low arousal from medium arousal. The dataset is made publicly available.
13
Tang Z, Liu X, Huo H, Tang M, Qiao X, Chen D, Dong Y, Fan L, Wang J, Du X, Guo J, Tian S, Fan Y. Eye movement characteristics in a mental rotation task presented in virtual reality. Front Neurosci 2023; 17:1143006. [PMID: 37051147] [PMCID: PMC10083294] [DOI: 10.3389/fnins.2023.1143006]
Abstract
Introduction: Eye-tracking technology provides a reliable and cost-effective approach to characterizing mental representation according to specific patterns. Mental rotation tasks, which involve the mental representation and transformation of visual information, have been widely used to examine visuospatial ability. In these tasks, participants visually perceive three-dimensional (3D) objects and mentally rotate them until they can identify whether the paired objects are identical or mirrored. In most studies, the 3D objects are presented using two-dimensional (2D) images on a computer screen. Currently, visual neuroscience tends to investigate visual behavior in response to naturalistic stimuli rather than image stimuli. Virtual reality (VR) is an emerging technology used to provide naturalistic stimuli, allowing the investigation of behavioral features in an immersive environment similar to the real world. However, mental rotation tasks using 3D objects in immersive VR have rarely been reported.
Methods: Here, we designed a VR mental rotation task using 3D stimuli presented in a head-mounted display (HMD). An eye tracker incorporated into the HMD was used to examine eye movement characteristics during the task synchronously. The stimuli were virtual paired objects oriented at specific angular disparities (0, 60, 120, and 180°). We recruited thirty-three participants, who were required to determine whether the paired 3D objects were identical or mirrored.
Results: Behavioral results demonstrated that response times when comparing mirrored objects were longer than for identical objects. Eye-movement results showed that the percent fixation time, the number of within-object fixations, and the number of saccades for mirrored objects were significantly lower than for identical objects, providing further explanation for the behavioral results.
Discussion: In the present work, we examined behavioral and eye movement characteristics during a VR mental rotation task using 3D stimuli. Significant differences were observed in response times and eye movement metrics between identical and mirrored objects, with the eye movement data providing further explanation for the behavioral results.
Affiliation(s)
- Zhili Tang, Xiaoyu Liu, Hongqiang Huo, Min Tang, Xiaofeng Qiao, Duo Chen, Ying Dong, Linyuan Fan, Jinghui Wang, Xin Du, Jieyi Guo, Shan Tian, Yubo Fan: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- Xiaoyu Liu, Yubo Fan: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
14
Brilhault A, Neuenschwander S, Rios RA. A new robust multivariate mode estimator for eye-tracking calibration. Behav Res Methods 2023; 55:516-553. [PMID: 35297014] [DOI: 10.3758/s13428-022-01809-4]
Abstract
We propose in this work a new method for estimating the main mode of multivariate distributions, with application to eye-tracking calibration. When performing eye-tracking experiments with poorly cooperative subjects, such as infants or monkeys, the calibration data generally suffer from high contamination. Outliers are typically organized in clusters, corresponding to fixations in the time intervals when subjects were not looking at the calibration points. In this type of multimodal distribution, most central tendency measures fail at estimating the principal fixation coordinates (the first mode), resulting in errors and inaccuracies when mapping the gaze to screen coordinates. Here, we developed a new algorithm to identify the first mode of multivariate distributions, named BRIL, which relies on recursive depth-based filtering. This novel approach was tested on artificial mixtures of Gaussian and uniform distributions and compared to existing methods (conventional depth medians, robust estimators of location and scatter, and clustering-based approaches). It achieved outstanding performance, even for distributions containing very high proportions of outliers, both grouped in clusters and randomly distributed. Finally, we demonstrate the strength of our method in a real-world scenario using experimental data from eye-tracking calibrations with capuchin monkeys, especially for highly contaminated distributions where other algorithms typically lack accuracy.
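The recursive depth-based filtering idea can be caricatured with iterative distance-to-median trimming, as below; BRIL itself uses proper multivariate depth measures and additional outlier tests, so this sketch only conveys the recursion, and it would fail above 50% contamination where BRIL is reported to succeed.

```python
import numpy as np

def trimmed_mode(points, keep=0.5, min_points=20):
    """Crude sketch of recursive depth-style filtering: repeatedly keep the
    `keep` fraction of points closest to the current coordinate-wise median
    until few points remain, then return their mean as the first mode."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > min_points:
        center = np.median(pts, axis=0)              # robust running center
        dist = np.linalg.norm(pts - center, axis=1)
        pts = pts[np.argsort(dist)[: max(min_points, int(keep * len(pts)))]]
    return pts.mean(axis=0)

# Toy usage: 40% of samples form a contaminating second cluster.
rng = np.random.default_rng(3)
data = np.vstack([rng.normal((0, 0), 0.5, (600, 2)),   # true fixation mode
                  rng.normal((5, 5), 0.5, (400, 2))])  # outlier cluster
print(trimmed_mode(data))  # should land near (0, 0)
```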
Affiliation(s)
- Adrien Brilhault, Ricardo Araujo Rios: Department of Computer Science, Federal University of Bahia, Salvador, Brazil
15
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605] [PMCID: PMC9535040] [DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines for any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist: Department of Psychology, Nicolaus Copernicus University, Torun, Poland; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa; Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom: Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut: Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang: Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany; Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe: School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn: School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler: Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham: Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen: Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci: Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox: Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok: Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands; Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee: University of Southampton, Southampton, UK
- Joy Yeonjoo Lee: School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta: TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann: Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin: Department of Management, Aarhus University, Aarhus, Denmark; Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka: Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz: Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif: School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic: Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman: Eyeviation Systems, Herzliya, Israel; Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij: Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
16
Friedman L, Prokopenko V, Djanian S, Katrychuk D, Komogortsev OV. Factors affecting inter-rater agreement in human classification of eye movements: a comparison of three datasets. Behav Res Methods 2023; 55:417-427. [PMID: 35411475] [DOI: 10.3758/s13428-021-01782-4]
Abstract
Manual classification of eye movements is used in research and as a basis for comparison with automatic algorithms during their development. However, human classification is only useful if it is reliable and repeatable. It is therefore important to know what factors might influence the accuracy and reliability of human classification of eye movements. In this report we compare three datasets of manual classification: two from earlier studies and one of our own, presented here for the first time. For inter-rater reliability, we assess both the event-level F1-score and the sample-level Cohen's κ across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of a head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
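Sample-level Cohen's κ, one of the two agreement measures used here, is straightforward to compute from two raters' label streams; the event-level F1-score additionally requires matching events and is omitted. This sketch assumes simple string labels.

```python
import numpy as np

def cohens_kappa(a, b):
    """Sample-level Cohen's kappa between two raters' label sequences:
    observed agreement corrected for agreement expected by chance."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                         # observed
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance
    return (po - pe) / (1.0 - pe)

# Toy usage: two raters disagreeing on 10 of 100 samples.
r1 = np.array(["fix"] * 80 + ["sac"] * 20)
r2 = np.concatenate([r1[:90], np.array(["fix"] * 10)])
print(round(cohens_kappa(r1, r2), 3))  # ~0.615
```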
Affiliation(s)
- Lee Friedman, Vladyslav Prokopenko, Shagen Djanian, Dmytro Katrychuk, Oleg V Komogortsev: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA
- Shagen Djanian: Department of Computer Science, Aalborg University, Selma Lagerlofs Vej 300, 9220 Aalborg East, Denmark
17
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Fixation classification: how to merge and select fixation candidates. Behav Res Methods 2022; 54:2765-2776. [PMID: 35023066] [PMCID: PMC9729319] [DOI: 10.3758/s13428-021-01723-1]
Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
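The two selection rules the abstract recommends (merge across gaze shifts smaller than 1.0°, then discard fixations shorter than 60 ms) can be sketched as follows; the candidate representation is an assumption for illustration, not the paper's data format.

```python
def merge_and_select(fixations, min_sacc_amp=1.0, min_fix_dur=0.060):
    """Apply the two selection rules to a list of fixation candidates, each a
    dict with 'on'/'off' times (s) and a mean position 'x'/'y' (deg):
    (1) merge consecutive candidates separated by a gaze shift smaller than
    `min_sacc_amp` degrees; (2) drop fixations shorter than `min_fix_dur`.
    For simplicity the merged candidate keeps the first one's position."""
    merged = [dict(fixations[0])]
    for f in fixations[1:]:
        last = merged[-1]
        amp = ((f["x"] - last["x"]) ** 2 + (f["y"] - last["y"]) ** 2) ** 0.5
        if amp < min_sacc_amp:
            last["off"] = f["off"]      # merge: extend the previous candidate
        else:
            merged.append(dict(f))
    return [f for f in merged if f["off"] - f["on"] >= min_fix_dur]

# Toy usage: the middle candidate is only 0.4 deg away, so it merges.
cands = [dict(on=0.00, off=0.12, x=0.0, y=0.0),
         dict(on=0.13, off=0.16, x=0.4, y=0.0),
         dict(on=0.17, off=0.30, x=5.0, y=0.0)]
print(merge_and_select(cands))  # two fixations remain
```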
Affiliation(s)
- Ignace T C Hooge, Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
18
Birawo B, Kasprowski P. Review and Evaluation of Eye Movement Event Detection Algorithms. Sensors (Basel) 2022; 22:8810. [PMID: 36433407] [PMCID: PMC9699548] [DOI: 10.3390/s22228810]
Abstract
Eye tracking is a technology aimed at understanding the direction of the human gaze. Event detection is the process of detecting and classifying eye movements, which are divided into several types. Nowadays, event detection is almost exclusively done by applying a detection algorithm to the raw recorded eye-tracking data. However, due to the lack of a standard procedure for performing evaluations, evaluating and comparing various detection algorithms on eye-tracking signals is very challenging. In this paper, we used data from a high-speed SMI HiSpeed 1250 eye-tracking system and compared event detection performance. The evaluation focused on fixation, saccade and post-saccadic oscillation classification. It used sample-by-sample comparisons to compare the algorithms, and assessed agreement between algorithms and human coders. The impact of varying threshold values on threshold-based algorithms was examined and the optimum threshold values were determined. This evaluation differed from previous evaluations by using the same dataset to evaluate the event detection algorithms and the human coders. We evaluated and compared threshold-based, machine learning-based and deep learning event detection algorithms. The evaluation results show that all methods perform well for fixation and saccade detection; however, there are substantial differences in classification results. Generally, CNN (Convolutional Neural Network) and RF (Random Forest) algorithms outperform threshold-based methods.
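The threshold-optimization step for a threshold-based detector can be sketched as a simple sweep against a human coder's sample-level labels; the detector, score, and threshold grid here are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np

def best_ivt_threshold(velocity, human_labels, candidates=range(10, 200, 10)):
    """Sweep velocity thresholds (deg/s) for a simple I-VT detector and keep
    the one whose sample-by-sample labels agree best with a human coder."""
    scores = {}
    for th in candidates:
        pred = np.where(velocity > th, "sac", "fix")
        scores[th] = np.mean(pred == human_labels)  # sample-level agreement
    return max(scores, key=scores.get), scores

# Toy usage: fixation samples near 5 deg/s, saccade samples near 300 deg/s.
rng = np.random.default_rng(4)
v = np.concatenate([rng.normal(5, 3, 500).clip(0), rng.normal(300, 50, 30)])
truth = np.array(["fix"] * 500 + ["sac"] * 30)
print(best_ivt_threshold(v, truth)[0])
```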
19
Zyrianov V, Peterson CS, Guarnera DT, Behler J, Weston P, Sharif B, Maletic JI. Deja Vu: semantics-aware recording and replay of high-speed eye tracking and interaction data to support cognitive studies of software engineering tasks - methodology and analyses. Empirical Software Engineering 2022; 27:168. [PMID: 36159896] [PMCID: PMC9486800] [DOI: 10.1007/s10664-022-10209-3]
Abstract
The paper introduces a fundamental technological problem with collecting high-speed eye tracking data while studying software engineering tasks in an integrated development environment. The use of eye trackers is quickly becoming an important means to study software developers and how they comprehend source code and locate bugs. High quality eye trackers can record upwards of 120 to 300 gaze points per second. However, it is not always possible to map each of these points to a line and column position in a source code file (in the presence of scrolling and file switching) in real time at data rates over 60 gaze points per second without data loss. Unfortunately, higher data rates are more desirable, as they allow for finer granularity and more accurate study analyses. To alleviate this technological problem, a novel method for eye tracking data collection is presented. Instead of performing gaze analysis in real time, all telemetry data (keystrokes, mouse movements, and eye tracker output) during a study is recorded as it happens. Sessions are then replayed at a much slower speed, allowing ample time to map gaze point positions to the appropriate file, line, and column and to perform additional analysis. A description of the method and the corresponding tool, Deja Vu, is presented. An evaluation of the method and tool is conducted using three different eye trackers running at four different speeds (60 Hz, 120 Hz, 150 Hz, and 300 Hz). This timing evaluation is performed in the Visual Studio, Eclipse, and Atom IDEs. Results show that Deja Vu can play back 100% of the data recordings, correctly mapping the gaze to the corresponding elements, making it a well-founded and suitable post-processing step for future eye tracking studies in software engineering. Finally, a proof-of-concept replication analysis of four tasks from two previous studies is performed. Because it used the Deja Vu approach, this replication yielded richer data and increased the number of distinct syntactic categories onto which gaze was mapped in the code.
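The record-everything-then-replay-slowly idea can be sketched in a few lines; the event schema, file format, and slowdown factor below are invented for illustration and are not Deja Vu's actual implementation.

```python
import json, time

def record(event_stream, path):
    # Write every telemetry event (gaze sample, keystroke, scroll, ...) with a
    # timestamp; nothing is analyzed at capture time, which is how the
    # recording stage avoids dropping samples at high tracker rates.
    with open(path, "w") as f:
        for ev in event_stream:  # ev: dict with 'kind' and 'data'
            f.write(json.dumps(dict(ev, t=time.monotonic())) + "\n")

def replay(path, slowdown=10.0):
    # Re-emit recorded events at 1/slowdown speed, so a downstream mapper has
    # time to resolve each gaze point to a file/line/column position.
    with open(path) as f:
        events = [json.loads(line) for line in f]
    for prev, ev in zip([None] + events, events):
        if prev is not None:
            time.sleep((ev["t"] - prev["t"]) * slowdown)
        yield ev

# Toy usage (slowdown=0 replays instantly):
record([{"kind": "gaze", "data": [412, 97]}, {"kind": "key", "data": "s"}],
       "session.jsonl")
print([e["kind"] for e in replay("session.jsonl", slowdown=0.0)])
```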
Affiliation(s)
- Vlas Zyrianov: Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Cole S. Peterson, Bonita Sharif: School of Computing, University of Nebraska-Lincoln, Lincoln, NE, USA
- Drew T. Guarnera: Department of Computer Science, The College of Wooster, Wooster, OH, USA
- Joshua Behler, Praxis Weston: Department of Computer Science, Kent State University, Kent, OH, USA
20
Ktistakis E, Skaramagkas V, Manousos D, Tachos NS, Tripoliti E, Fotiadis DI, Tsiknakis M. COLET: A dataset for COgnitive workLoad estimation based on eye-tracking. Comput Methods Programs Biomed 2022; 224:106989. [PMID: 35870415] [DOI: 10.1016/j.cmpb.2022.106989]
Abstract
BACKGROUND AND OBJECTIVE: Cognitive workload is an important component in performance psychology, ergonomics, and human factors. Publicly available datasets are scarce, making it difficult to establish new approaches and comparative studies. In this work, COLET, a COgnitive workLoad estimation dataset based on Eye-Tracking, is presented.
METHODS: Forty-seven (47) individuals' eye movements were monitored as they solved puzzles involving visual search activities of varying complexity and duration. The participants' cognitive workload level was evaluated with the subjective NASA-TLX test, and this score is used as an annotation of the activity. Extensive data analysis was performed to derive eye and gaze features from the low-level recorded eye metrics, and a range of machine learning models were evaluated and tested for estimating the cognitive workload level.
RESULTS: The activities induced four different levels of cognitive workload. Multitasking and time pressure induced a higher level of cognitive workload than single tasking and the absence of time pressure. Multitasking had a significant effect on 17 eye features, while time pressure had a significant effect on 7 eye features. Both binary and multi-class identification attempts were performed by testing a variety of well-known classifiers, with encouraging results for cognitive workload estimation: up to 88% correct predictions between low and high cognitive workload.
CONCLUSIONS: The machine learning analysis demonstrated potential in discriminating cognitive workload levels using only eye-tracking characteristics. The proposed dataset includes a much larger sample size and a wider spectrum of eye and gaze metrics than other similar datasets, allowing for the examination of their relations with various cognitive states.
Affiliation(s)
- Emmanouil Ktistakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Laboratory of Optics and Vision, School of Medicine, University of Crete, GR-710 03 Heraklion, Greece
- Vasileios Skaramagkas
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Dept. of Electrical and Computer Engineering, Hellenic Mediterranean University, GR-710 04 Heraklion, Crete, Greece
- Dimitris Manousos
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece
- Nikolaos S Tachos
- Biomedical Research Institute, FORTH, GR-451 10 Ioannina, Greece, and Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10 Ioannina, Greece
- Evanthia Tripoliti
- Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10 Ioannina, Greece
- Dimitrios I Fotiadis
- Biomedical Research Institute, FORTH, GR-451 10 Ioannina, Greece, and Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10 Ioannina, Greece
- Manolis Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Dept. of Electrical and Computer Engineering, Hellenic Mediterranean University, GR-710 04 Heraklion, Crete, Greece
21
Eye Tracking Use in Surgical Research: A Systematic Review. J Surg Res 2022; 279:774-787. [PMID: 35944332 DOI: 10.1016/j.jss.2022.05.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 03/18/2022] [Accepted: 05/22/2022] [Indexed: 11/20/2022]
Abstract
INTRODUCTION Eye tracking (ET) is a popular tool to study what factors affect the visual behaviour of surgical team members. To our knowledge, there have been no reviews to date that evaluate the broad use of ET in surgical research. This review aims to identify and assess the quality of this evidence, to synthesize how ET can be used to inform surgical practice, and to provide recommendations to improve future ET surgical studies. METHODS In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic literature review was conducted. An electronic search was performed in MEDLINE, Cochrane Central, Embase, and Web of Science databases up to September 2020. Included studies used ET to measure the visual behaviour of members of the surgical team during surgery or surgical tasks. The included studies were assessed by two independent reviewers. RESULTS A total of 7614 studies were identified, and 111 were included for data extraction. Eleven applications were identified; the four most common were skill assessment (41%), visual attention assessment (22%), workload measurement (17%), and skills training (10%). A summary was provided of the various ways ET could be used to inform surgical practice, and three areas were identified for the improvement of future ET studies in surgery. CONCLUSIONS This review provided a comprehensive summary of the various applications of ET in surgery and how ET could be used to inform surgical practice, including how to use ET to improve surgical education. The information provided in this review can also aid in the design and conduct of future ET surgical studies.
22
Katz CN, Schjetnan AGP, Patel K, Barkley V, Hoffman KL, Kalia SK, Duncan KD, Valiante TA. A corollary discharge mediates saccade-related inhibition of single units in mnemonic structures of the human brain. Curr Biol 2022; 32:3082-3094.e4. [PMID: 35779529 DOI: 10.1016/j.cub.2022.06.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 04/04/2022] [Accepted: 06/08/2022] [Indexed: 11/18/2022]
Abstract
Despite the critical link between visual exploration and memory, little is known about how neuronal activity in the human mesial temporal lobe (MTL) is modulated by saccades. Here, we characterize saccade-associated neuronal modulations, unit-by-unit, and contrast them to image onset and to occipital lobe neurons. We reveal evidence for a corollary discharge (CD)-like modulatory signal that accompanies saccades, inhibiting/exciting a unique population of broad-/narrow-spiking units, respectively, before and during saccades and with directional selectivity. These findings comport well with the timing, directional nature, and inhibitory circuit implementation of a CD. Additionally, by linking neuronal activity to event-related potentials (ERPs), which are directionally modulated following saccades, we recontextualize the ERP associated with saccades as a proxy for both the strength of inhibition and saccade direction, providing a mechanistic underpinning for the more commonly recorded saccade-related ERP in the human brain.
Affiliation(s)
- Chaim N Katz
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada; Faculty of Medicine, University of Calgary, Calgary, AB T2N 4N1, Canada
- Andrea G P Schjetnan
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada
- Kramay Patel
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada; Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada
- Victoria Barkley
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada
- Kari L Hoffman
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Suneil K Kalia
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada; The KITE Research Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Katherine D Duncan
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Taufik A Valiante
- Krembil Brain Institute, Toronto Western Hospital (TWH), Toronto, ON M5T 1M8, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada; Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada; Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON M5G 2A2, Canada; The KITE Research Institute, University Health Network, Toronto, ON M5G 2A2, Canada; Max Planck-University of Toronto Center for Neural Science and Technology, Toronto, ON, Canada
23
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022:10.3758/s13428-021-01763-7. [PMID: 35715615 DOI: 10.3758/s13428-021-01763-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/27/2021] [Indexed: 11/08/2022]
Abstract
Detecting eye movements in raw eye-tracking data is a well-established research area in itself, as well as a common preprocessing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved given a shared understanding of the pursued goal. This is often formalised by defining metrics that express the quality of an approach to the posed problem. Both the big-picture intuition behind an evaluation strategy and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable and impeding a common understanding. In this review, we systematically describe and analyse the evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify a detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from quantifying the discovered errors. In our analysis we separate these two steps where possible, enabling almost arbitrary combinations of them in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation.
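To illustrate the separation the review argues for, the sketch below first establishes correspondences between true and predicted events with a greedy intersection-over-union matcher, and only then counts errors. IoU matching is just one of the strategies compared in the review, and the function names here are ours.

```python
def interval_iou(a, b):
    """Intersection-over-union of two events given as (onset, offset) indices."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def match_events(true_events, pred_events, iou_threshold=0.5):
    """Step 1: establish correspondences (greedy, best IoU first).
    Step 2 (quantifying the errors) is deliberately kept separate."""
    pairs = sorted(
        ((interval_iou(t, p), i, j)
         for i, t in enumerate(true_events)
         for j, p in enumerate(pred_events)),
        reverse=True,
    )
    matched_t, matched_p, matches = set(), set(), []
    for iou, i, j in pairs:
        if iou < iou_threshold:
            break
        if i not in matched_t and j not in matched_p:
            matched_t.add(i); matched_p.add(j)
            matches.append((i, j, iou))
    tp = len(matches)
    fn = len(true_events) - tp   # missed events
    fp = len(pred_events) - tp   # spurious events
    return matches, tp, fp, fn

true_ev = [(0, 100), (150, 300)]
pred_ev = [(5, 95), (160, 290), (400, 420)]
print(match_events(true_ev, pred_ev))  # 2 matches, 1 false positive
```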
24
Visual control during climbing: Variability in practice fosters a proactive gaze pattern. PLoS One 2022; 17:e0269794. [PMID: 35687600 PMCID: PMC9187105 DOI: 10.1371/journal.pone.0269794] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 05/30/2022] [Indexed: 11/19/2022] Open
Abstract
In climbing, the visual system is confronted with a dual demand: controlling ongoing movement and searching for upcoming movement possibilities. The aims of the present research were (i) to investigate the effect of different modes of practice on how learners deal with this dual demand, and (ii) to analyze the extent to which this effect may facilitate transfer of learning to a new climbing route. The effects of constant practice, an imposed schedule of variations, and a self-controlled schedule of variations on the gaze behaviors and climbing fluency of novices were compared. Results showed that the constant practice group outperformed the imposed variability group on the training route, and the three groups' climbing fluency on the transfer route did not differ. Analyses of gaze behaviors showed that the constant practice group used more online gaze control during the last session, whereas the imposed variability group relied on a more proactive gaze control. This latter gaze pattern was also used on the transfer route by the imposed variability group. The self-controlled variability group displayed more interindividual differences in gaze behaviors. These findings indicate that the learning protocols induce different timing of gaze patterns, which may differently facilitate adaptation to new climbing routes.
25
Vu A, Sorel A, Limballe A, Bideau B, Kulpa R. Multiple Players Tracking in Virtual Reality: Influence of Soccer Specific Trajectories and Relationship With Gaze Activity. Front Psychol 2022; 13:901438. [PMID: 35668965 PMCID: PMC9165719 DOI: 10.3389/fpsyg.2022.901438] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 04/19/2022] [Indexed: 11/16/2022] Open
Abstract
The perceptual-cognitive ability to track multiple moving objects and its contribution to team sports performance has traditionally been studied in the laboratory under non-sport-specific conditions. It is thus questionable whether the measured visual tracking performance and the underlying gaze activity reflect the actual ability of team sports players to track teammates and opponents on a real field. Using a virtual reality-based visual tracking task in which participants tracked multiple moving virtual players as they would on a soccer field, we pursued two objectives: (i) to examine the influence of different scenario types (soccer-specific trajectories versus pseudo-random trajectories) on the visual tracking performance of soccer players (n = 15) compared with non-soccer players (n = 16), and (ii) to observe the influence of spatial features of the simulated situations on the gaze activity of soccer players versus non-soccer players. (i) A linear mixed model regression revealed a significant main effect of group but no interaction effect between group and trajectory type, suggesting that the visual tracking ability of soccer players did not benefit from their specific knowledge when they faced scenarios with real game trajectories. (ii) The virtual players' spatial dispersion and crowding affected the participants' gaze activity and their visual tracking performance. Furthermore, the gaze activity of soccer players differed in some aspects from that of non-soccer players. Assumptions are formulated as to how these results bear on the difference in visual tracking performance between soccer players and non-soccer players. Overall, using soccer-specific trajectories might not be enough to replicate the representativeness of field conditions in the study of visual tracking performance. Multitasking constraints should be considered along with motor-cognitive dual tasks in future research to improve the representativeness of visual exploration conditions.
Affiliation(s)
- Alexandre Vu
- Univ Rennes, Inria, M2S - EA 7470, Rennes, France
26
Human–System Interaction Based on Eye Tracking for a Virtual Workshop. SUSTAINABILITY 2022. [DOI: 10.3390/su14116841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the constant exploration and development of intelligent manufacturing, the concept of digital twins has been proposed and applied. Given the complexity and intelligence of virtual workshop systems, real workshops can be linked with virtual workshops based on AR under the structure of digital twins, which allows users to interact with virtual information and perceive it superimposed on the real world with great immersion. However, the three-dimensionality of virtual workshops and interaction with complex workshop information can be challenging for users. Owing to the limited input bandwidth and the nontraditional mode of interaction, a more natural interaction technique for virtual workshops is required. To solve such problems, this paper presents a technical framework for 3D eye movement interaction applied to a virtual workshop. An eye movement interaction technique, covering both implicit and explicit interaction, is developed by establishing behavior recognition and interaction intention understanding. An eye movement experiment verified that the former achieves an accuracy above 90% and better recognition performance; for the latter, a better-performing feature vector group is selected to establish a model, and its feasibility and effectiveness are verified. Finally, the feasibility of the framework is demonstrated through the development of an application example.
27
Yin J, Sun J, Li J, Liu K. An Effective Gaze-Based Authentication Method with the Spatiotemporal Feature of Eye Movement. SENSORS (BASEL, SWITZERLAND) 2022; 22:3002. [PMID: 35458986 PMCID: PMC9032520 DOI: 10.3390/s22083002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 03/10/2022] [Accepted: 03/12/2022] [Indexed: 06/14/2023]
Abstract
Eye movement has become a new behavioral feature for biometric authentication. In eye movement-based authentication methods that use temporal features and hand-crafted features, the required duration of the eye movement recordings is too long for practical application. This study therefore aims at realizing authentication with shorter eye movement recordings. We derive a reasonable recording duration of less than 12 s from the changing pattern of the deviation between the gaze point and the stimulus point on the screen. In this study, the temporal motion features of the gaze points and the spatial distribution features of the saccades are used to represent personal identity. Two datasets were constructed for the experiments, containing 5 s and 12 s of eye movement recordings, respectively. On these datasets, the open-set authentication results show that the Equal Error Rate of the proposed method reaches 10.62% with 12-s recordings and 12.48% with 5-s recordings. The closed-set authentication results show that the Equal Error Rate reaches 5.25% with 12-s recordings and 7.82% with 5-s recordings. This demonstrates that the proposed method provides a reference for eye movement-based identity authentication.
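For reference, the Equal Error Rate used above is the error at the threshold where false acceptances and false rejections balance; a generic sketch on synthetic similarity scores (not the paper's features or data):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: error rate at the threshold where the false acceptance rate
    (impostors accepted) equals the false rejection rate (genuine rejected)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = 1.0, None
    for t in thresholds:
        far = np.mean(impostor_scores >= t)
        frr = np.mean(genuine_scores < t)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 500)    # higher similarity for the same user
impostor = rng.normal(0.5, 0.1, 5000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```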
Affiliation(s)
- Jinghui Yin
- School of Information Science and Engineering, Shandong Normal University, Jinan 250399, China
- Jiande Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan 250399, China
- Jing Li
- School of Journalism and Communication, Shandong Normal University, Jinan 250399, China
- Ke Liu
- School of Information Science and Engineering, Shandong Normal University, Jinan 250399, China
28
Holmqvist K, Örbom SL, Zemblys R. Small head movements increase and colour noise in data from five video-based P-CR eye trackers. Behav Res Methods 2022; 54:845-863. [PMID: 34357538 PMCID: PMC8344338 DOI: 10.3758/s13428-021-01648-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/06/2021] [Indexed: 11/08/2022]
Abstract
We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase the level and colouring of the noise in data recorded from all five eye trackers in this study. This increase in noise level is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera-image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil-size artefact plays no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of its source, filters or head movements, coloured noise can be confused with oculomotor drift. We also find that using the default head restriction in the EyeLink 1000+, the EyeLink II, and the HiSpeed240 results in noisier data than less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink: data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers in this study.
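Noise level and noise colour can be quantified roughly as follows: sample-to-sample RMS for the level, and the slope of the log-log power spectrum for the colour (flat for white noise, increasingly steep as filtering or head movements colour it). This is a generic sketch, not the paper's analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def rms_s2s(gaze_deg):
    """Sample-to-sample RMS, a standard precision measure for gaze data."""
    d = np.diff(gaze_deg)
    return np.sqrt(np.mean(d ** 2))

def spectral_slope(gaze_deg, fs):
    """Slope of the log-log power spectrum; ~0 for white noise, more
    negative for coloured noise."""
    f, pxx = welch(gaze_deg - gaze_deg.mean(), fs=fs,
                   nperseg=min(1024, len(gaze_deg)))
    keep = f > 0
    slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    return slope

fs = 1000.0
white = np.random.default_rng(2).normal(0, 0.01, 10_000)  # static artificial eye
drift = np.cumsum(white) * 0.05                           # crude coloured drift
print(rms_s2s(white), spectral_slope(white, fs))
print(rms_s2s(drift), spectral_slope(drift, fs))
```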
Affiliation(s)
- Kenneth Holmqvist
- Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland
- Department of Psychology, Regensburg University, Regensburg, Germany
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
29
Evaluation of Fixational Behavior throughout Life. Brain Sci 2021; 12:brainsci12010019. [PMID: 35053764 PMCID: PMC8774210 DOI: 10.3390/brainsci12010019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/20/2021] [Accepted: 12/21/2021] [Indexed: 11/17/2022] Open
Abstract
Background: To quantify the development of gaze stability throughout life during short and long fixational tasks using eye-tracking technology. Methods: Two hundred and fifty-nine participants aged between 5 months and 77 years were recruited for the study. All participants underwent a complete ophthalmological assessment. Fixational behavior during long and short fixational tasks was analyzed using a DIVE (Device for an Integral Visual Examination), a digital test assisted by eye-tracking technology. The participants were divided into ten groups according to their age: group 1, 0-2 years; group 2, 2-5 years; group 3, 5-10 years; group 4, 10-20 years; group 5, 20-30 years; group 6, 30-40 years; group 7, 40-50 years; group 8, 50-60 years; group 9, 60-70 years; and group 10, over 70 years. Results: Gaze stability, assessed by logBCEA (log-transformed bivariate contour ellipse area), improved with age from 5 months to 30 years (1.27 vs. 0.57 deg2 for the long fixational task, 0.73 vs. -0.04 deg2 for the short fixational task), while fixations tended to become longer (1.95 vs. 2.80 msec for long fixational tasks and 0.80 vs. 1.71 msec for short fixational tasks). All fixational outcomes worsened progressively from the fifth decade of life onwards (logBCEA: 0.79, 0.83, 0.91, and 1.42 deg2 for the long fixational task and 0.01, 0.18, 0.28, and 0.44 deg2 for the short fixational task in groups 7, 8, 9, and 10, respectively). Stimulus features may influence oculomotor performance, with smaller stimuli eliciting longer fixations. Conclusions: Fixational behavior can be accurately assessed from 5 months of age using a DIVE. We report normative data for gaze stability and fixation duration for every age group. Currently available technology may increase the accuracy of visual assessments at any age.
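For reference, logBCEA is typically computed from the spread of fixation samples as below; the proportion p of samples the ellipse should contain (0.68 here) is our assumption, since the abstract does not state it.

```python
import numpy as np

def log_bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2) containing proportion p of the
    fixation samples, log10-transformed as in the logBCEA measure above.
    BCEA = 2*k*pi*sigma_x*sigma_y*sqrt(1 - rho^2), with k = -ln(1 - p)."""
    k = -np.log(1.0 - p)
    sx, sy = np.std(x_deg, ddof=1), np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]
    bcea = 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
    return np.log10(bcea)

rng = np.random.default_rng(3)
steady = log_bcea(rng.normal(0, 0.3, 500), rng.normal(0, 0.3, 500))
unsteady = log_bcea(rng.normal(0, 1.0, 500), rng.normal(0, 1.0, 500))
print(steady, unsteady)  # smaller logBCEA = more stable gaze
```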
30
Trabulsi J, Norouzi K, Suurmets S, Storm M, Ramsøy TZ. Optimizing Fixation Filters for Eye-Tracking on Small Screens. Front Neurosci 2021; 15:578439. [PMID: 34819830 PMCID: PMC8606821 DOI: 10.3389/fnins.2021.578439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 09/06/2021] [Indexed: 11/27/2022] Open
Abstract
The study of consumer responses to advertising has recently expanded to include the use of eye tracking to follow the gaze of consumers. The calibration and validation of eye gaze have typically been carried out on large screens in static, controlled settings. However, little is known about how precise gaze localizations and eye fixations are on smaller screens, such as smartphones, and in moving feed-based conditions, such as those found on social media websites. We tested the precision of fixation detection algorithms relative to raw gaze mapping in natural scrolling conditions. Our results demonstrate that the default fixation detection algorithms normally employed by hardware providers exhibit suboptimal performance on mobile phones. In this paper, we provide a detailed account of how different parameters in eye-tracking software can affect the validity and reliability of critical metrics, such as Percent Seen and Total Fixation Duration. We provide recommendations for producing improved eye-tracking metrics for content on small screens, such as smartphones, and in vertically moving environments, such as a social media feed. The adjustments to the fixation detection algorithm that we propose improve the accuracy of Percent Seen by 19% compared to a leading eye-tracking provider's default fixation filter settings. The methodological approach provided in this paper could additionally serve as a framework for assessing the validity of applied neuroscience methods and metrics beyond mobile eye tracking.
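The parameter sensitivity at issue is easy to demonstrate with a bare-bones velocity-threshold (I-VT) fixation filter; the thresholds below are illustrative, not the provider defaults or the authors' recommended settings.

```python
import numpy as np

def ivt_fixations(x, y, t, velocity_threshold=30.0, min_duration=0.060):
    """Minimal I-VT filter: samples slower than the threshold (deg/s) that
    persist for at least min_duration (s) form a fixation."""
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    slow = np.hypot(vx, vy) < velocity_threshold
    fixations, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if t[i - 1] - t[start] >= min_duration:
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_duration:
        fixations.append((t[start], t[-1]))
    return fixations

# Total Fixation Duration under two parameterizations of the same data:
rng = np.random.default_rng(4)
t = np.arange(0, 5, 1 / 120)
x = np.cumsum(rng.normal(0, 0.05, t.size))
y = np.cumsum(rng.normal(0, 0.05, t.size))
for thr in (10.0, 30.0):
    fx = ivt_fixations(x, y, t, velocity_threshold=thr)
    print(thr, sum(b - a for a, b in fx))
```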
Affiliation(s)
- Kian Norouzi
- Neurons Inc., Department of Applied Neuroscience, Taastrup, Denmark
- Faculty of Management, University of Tehran, Tehran, Iran
- Seidi Suurmets
- Department of Marketing, Copenhagen Business School, Copenhagen, Denmark
- Mike Storm
- Neurons Inc., Department of Applied Neuroscience, Taastrup, Denmark
- Thomas Zoëga Ramsøy
- Neurons Inc., Department of Applied Neuroscience, Taastrup, Denmark
- Faculty of Neuroscience, Singularity University, Santa Clara, CA, United States
31
Einfluss visueller Hilfen und räumlicher Fähigkeiten auf die graphische Interpretation von Vektorfeldern: Eine Eye-Tracking-Untersuchung [Influence of visual aids and spatial abilities on the graphical interpretation of vector fields: An eye-tracking study]. Naturwissenschaften 2021. [DOI: 10.1007/s40573-021-00133-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
A solid understanding of physical concepts requires handling multiple representations, such as formulas and diagrams, to form coherent mental models. With complex subject matter, students often have difficulty making such representational connections and therefore need instructional support. This paper examines the influence of two instructions for the graphical interpretation of a vector field diagram with respect to divergence (with and without visual aids; VH vs. OH) on the performance of N = 141 students. Eye movements were recorded with an eye tracker while participants read the instruction and worked on the subsequent tasks. The results show that students in the VH group (N = 64) achieved better test performance than students in the OH group (N = 77). The difference is most pronounced for the student groups with high and medium spatial abilities, which were determined before the study with a standardized spatial span test. The eye-tracking analyses of instruction reading show that students given visual aids succeeded better at selecting, organizing, and integrating learning-relevant information, in the sense of the cognitive theory of multimedia learning, than students in the OH group. The analysis of the eye-tracking data during subsequent problem solving shows that students with the VH instruction moved their eyes more systematically across the vector field diagrams, indicating correct application of the taught strategy. Beyond its model-testing character and the importance of visual aids, the study demonstrates the diagnostic potential of eye tracking for tasks placing high demands on cognitive-visual abilities.
32
The application of noninvasive, restraint-free eye-tracking methods for use with nonhuman primates. Behav Res Methods 2021; 53:1003-1030. [PMID: 32935327 DOI: 10.3758/s13428-020-01465-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
Over the past 50 years there has been a strong interest in applying eye-tracking techniques to study a myriad of questions related to human and nonhuman primate psychological processes. Eye movements and fixations can provide qualitative and quantitative insights into cognitive processes of nonverbal populations such as nonhuman primates, clarifying the evolutionary, physiological, and representational underpinnings of human cognition. While early attempts at nonhuman primate eye tracking were relatively crude, later, more sophisticated and sensitive techniques required invasive protocols and the use of restraint. In the past decade, technology has advanced to a point where noninvasive eye-tracking techniques, developed for use with human participants, can be applied for use with nonhuman primates in a restraint-free manner. Here we review the corpus of recent studies (N=32) that take such an approach. Despite the growing interest in eye-tracking research, there is still little consensus on "best practices," both in terms of deploying test protocols or reporting methods and results. Therefore, we look to advances made in the field of developmental psychology, as well as our own collective experiences using eye trackers with nonhuman primates, to highlight key elements that researchers should consider when designing noninvasive restraint-free eye-tracking research protocols for use with nonhuman primates. Beyond promoting best practices for research protocols, we also outline an ideal approach for reporting such research and highlight future directions for the field.
33
Visweswaran S, King AJ, Tajgardoon M, Calzoni L, Clermont G, Hochheiser H, Cooper GF. Evaluation of eye tracking for a decision support application. JAMIA Open 2021; 4:ooab059. [PMID: 34350394 PMCID: PMC8327376 DOI: 10.1093/jamiaopen/ooab059] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2020] [Revised: 05/08/2021] [Accepted: 07/01/2021] [Indexed: 11/12/2022] Open
Abstract
Eye tracking is used widely to investigate attention and cognitive processes during tasks performed in electronic medical record (EMR) systems. We explored a novel application of eye tracking to collect training data for a machine learning-based clinical decision support tool that predicts which patient data are likely to be relevant for a clinical task. Specifically, we investigated in a laboratory setting the accuracy of eye tracking compared to manual annotation for inferring which patient data in the EMR are judged relevant by physicians. We evaluated several methods for processing gaze points that were recorded using a low-cost eye-tracking device. Our results show that eye tracking achieves an accuracy of 69% and a precision of 53% compared to manual annotation, which is promising for machine learning. The methods for processing gaze points and the scripts we developed offer a first step in developing novel uses of eye tracking for clinical decision support.
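The accuracy and precision figures are item-level comparisons of gaze-derived relevance judgments against manual annotation, e.g. (hypothetical toy arrays, not study data):

```python
import numpy as np

# 1 = data item judged relevant; "truth" from manual annotation,
# "gaze" from processed eye tracking (e.g., dwell time over a threshold).
truth = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
gaze  = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])

accuracy = np.mean(gaze == truth)
precision = np.sum((gaze == 1) & (truth == 1)) / np.sum(gaze == 1)
print(f"accuracy={accuracy:.0%}, precision={precision:.0%}")
```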
Affiliation(s)
- Shyam Visweswaran
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Andrew J King
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Luca Calzoni
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Gilles Clermont
- Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Harry Hochheiser
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Gregory F Cooper
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Intelligent Systems Program, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
34
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. SENSORS 2021; 21:s21144686. [PMID: 34300425 PMCID: PMC8309511 DOI: 10.3390/s21144686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 06/28/2021] [Accepted: 07/06/2021] [Indexed: 11/17/2022]
Abstract
Many gaze data visualization techniques intuitively show eye movements together with the visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data over the visual stimulus appears complicated and cluttered, making it difficult to gain insight from the visualization. To avoid this complication, fixation identification algorithms are often employed for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
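Of the abstraction levels compared, the attention map is the simplest to sketch: raw gaze points are binned and smoothed into a density image. This is a generic illustration; the paper's models and parameters are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(gaze_xy, screen_wh=(1920, 1080), bins=(96, 54), sigma=2.0):
    """Aggregate raw gaze points into a smoothed 2D density, the most
    abstract of the visualization levels discussed above."""
    w, h = screen_wh
    hist, _, _ = np.histogram2d(
        gaze_xy[:, 0], gaze_xy[:, 1],
        bins=bins, range=[[0, w], [0, h]],
    )
    return gaussian_filter(hist, sigma=sigma)

rng = np.random.default_rng(5)
gaze = rng.normal([960, 540], [150, 100], size=(5000, 2))
amap = attention_map(gaze)
print(amap.shape, amap.max())
```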
35
Application of Eye Tracking Technology in Aviation, Maritime, and Construction Industries: A Systematic Review. SENSORS 2021; 21:s21134289. [PMID: 34201734 PMCID: PMC8271947 DOI: 10.3390/s21134289] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 06/15/2021] [Accepted: 06/17/2021] [Indexed: 11/25/2022]
Abstract
Most accidents in the aviation, maritime, and construction industries are caused by human error, which can be traced back to impaired mental performance and attention failure. In 1596, Du Laurens, a French anatomist and medical scientist, said that the eyes are the windows of the mind. Eye tracking research dates back almost 150 years and it has been widely used in different fields for several purposes. Overall, eye tracking technologies provide the means to capture in real time a variety of eye movements that reflect different human cognitive, emotional, and physiological states, which can be used to gain a wider understanding of the human mind in different scenarios. This systematic literature review explored the different applications of eye tracking research in three high-risk industries, namely aviation, maritime, and construction. The results of this research uncovered the demographic distribution and applications of eye tracking research, as well as the different technologies that have been integrated to study the visual, cognitive, and attentional aspects of human mental performance. Moreover, different research gaps and potential future research directions were highlighted in relation to the usage of additional technologies to support, validate, and enhance eye tracking research to better understand human mental performance.
36
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. [PMID: 34125160 PMCID: PMC8212426 DOI: 10.1167/jov.21.6.8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
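A rough outline of the denoise-then-threshold strategy is given below. The published algorithm denoises by fitting an implicit piecewise-quadratic model through optimization; the Savitzky-Golay derivative filter here is only a simple stand-in for that step, and the low velocity threshold is what makes slow saccades detectable.

```python
import numpy as np
from scipy.signal import savgol_filter

def detect_saccades(pos_deg, fs, velocity_threshold=10.0):
    """Smoothed velocity via a Savitzky-Golay derivative filter (a stand-in
    for the paper's piecewise-quadratic denoising), then a low threshold."""
    vel = savgol_filter(pos_deg, window_length=21, polyorder=2,
                        deriv=1, delta=1.0 / fs)
    fast = np.concatenate(([False], np.abs(vel) > velocity_threshold, [False]))
    edges = np.diff(fast.astype(int))
    onsets = np.flatnonzero(edges == 1)        # first supra-threshold sample
    offsets = np.flatnonzero(edges == -1) - 1  # last supra-threshold sample
    return list(zip(onsets, offsets))

# A noisy 10-degree "slow saccade" with ~50 deg/s peak velocity:
fs = 500.0
t = np.arange(int(fs)) / fs
pos = 10.0 / (1.0 + np.exp(-(t - 0.5) * 20.0))
pos += np.random.default_rng(6).normal(0.0, 0.05, t.size)
print(detect_saccades(pos, fs))
```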
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
37
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2021; 53:487-506. [PMID: 32748237 PMCID: PMC7854873 DOI: 10.3758/s13428-020-01427-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
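Stripped of the published algorithm's head-movement handling and exact thresholds, the core merging step can be sketched as follows (the tuple format and gap criterion are our simplifications):

```python
def merge_into_gaze_scans(saccades, max_gap=0.3):
    """Merge same-direction lateral saccades separated by brief pauses into
    one gaze scan, keeping start/end time and eccentricity. `saccades` is a
    list of (t_start, t_end, ecc_start, ecc_end) tuples, already screened
    to be lateral."""
    scans = []
    for sac in sorted(saccades):
        if scans:
            t0, t1, e0, e1 = scans[-1]
            same_dir = (sac[3] - sac[2]) * (e1 - e0) > 0
            if sac[0] - t1 <= max_gap and same_dir:
                scans[-1] = (t0, sac[1], e0, sac[3])  # extend current scan
                continue
        scans.append(sac)
    return scans

sacs = [(0.0, 0.1, 0.0, 15.0), (0.25, 0.35, 15.0, 40.0),  # one outgoing scan
        (2.0, 2.1, 40.0, 10.0)]                           # later return scan
for s in merge_into_gaze_scans(sacs):
    print(f"scan: {s[0]:.2f}-{s[1]:.2f} s, {s[2]:.0f} -> {s[3]:.0f} deg")
```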
Affiliation(s)
- Garrett Swan
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Robert B Goldstein
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Steven W Savage
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Lily Zhang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Aliakbar Ahmadi
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Department of Mechanical Engineering, Technical University of Munich, Munich, Germany
- Alex R Bowers
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
38
Flindall J, Sara A, Kingstone A. Head and eye movements are each facilitated by the offset of a central fixation point in a virtual gap paradigm. Exp Brain Res 2020; 239:117-126. [PMID: 33118041 DOI: 10.1007/s00221-020-05905-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 08/11/2020] [Indexed: 11/30/2022]
Abstract
Eye movements exhibit reduced latencies when the point of fixation is extinguished prior to, or coincident with, the appearance of a peripheral target. Two independent components are responsible for this facilitation. If the offset occurs before target onset, it presents a warning which stimulates response preparation and execution. If offset occurs prior to or coincident with target onset, it triggers the release of fixation-maintenance neurons in the superior colliculus that can delay saccadic responses. While the warning effect facilitates responses regardless of effector, the fixation release effect is thought to be specific to the oculomotor system. Head movements, like saccades, contribute significantly to gaze shifts and may be generated directly by the SC. While head movements have been shown to benefit from the warning effect, it is unknown if, and to what degree, they are affected by the release of fixation-maintenance neurons responsible for inhibiting saccades. To address this issue, we measured head and eye response latencies in a virtual reality-based gap paradigm, turning off the fixation point either 200 ms before (temporal gap condition), coincident with (step condition), or 1000 ms after (temporal overlap/baseline condition) target onset. Our results indicate that head movements, like saccades, are facilitated by both the warning and release components of the gap paradigm. Further, rotational kinematics during gap trials differed significantly from those observed in step and overlap trials (higher, earlier peak velocities). These results are discussed with respect to the theorized structure and organisation of the superior colliculus in humans.
Affiliation(s)
- Jason Flindall
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Aman Sara
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, Canada
39
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
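The velocity-based core of such classifiers typically derives the saccade threshold from the data itself, which is what lets per-segment application adapt to a time-varying noise level. A sketch of the familiar iterative scheme (in the style of Nyström and Holmqvist; parameters illustrative):

```python
import numpy as np

def adaptive_velocity_threshold(vel, start=100.0, k=6.0, tol=1.0, max_iter=50):
    """Iteratively estimate a saccade velocity threshold from the data: keep
    samples below the current threshold, recompute the threshold as
    mean + k*SD of that 'fixation noise', and repeat until stable. Running
    this per data chunk lets the threshold track varying noise levels."""
    thr = start
    for _ in range(max_iter):
        below = vel[vel < thr]
        new_thr = below.mean() + k * below.std()
        if abs(new_thr - thr) < tol:
            return new_thr
        thr = new_thr
    return thr

rng = np.random.default_rng(7)
quiet = np.abs(rng.normal(5, 3, 2000))     # low-noise recording segment
noisy = np.abs(rng.normal(15, 10, 2000))   # high-noise (e.g., in-scanner) segment
print(adaptive_velocity_threshold(quiet), adaptive_velocity_threshold(noisy))
```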
40
Agtzidis I, Startsev M, Dorr M. Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching. J Eye Mov Res 2020; 13. [PMID: 33828806 PMCID: PMC8005322 DOI: 10.16910/jemr.13.4.5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing either blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluation of 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all have a much larger room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
41
Drivers' Visual Attention Characteristics under Different Cognitive Workloads: An On-Road Driving Behavior Study. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17155366. [PMID: 32722496 PMCID: PMC7432427 DOI: 10.3390/ijerph17155366] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 07/17/2020] [Accepted: 07/23/2020] [Indexed: 11/17/2022]
Abstract
In this study, an on-road driving experiment was designed to investigate the visual attention fixation and transition characteristics of drivers under different cognitive workloads. First, visual attention was analyzed macroscopically through the entropy method. Second, one- and two-step Markov glance transition probability matrices were constructed, which allow the visual transition characteristics under different conditions to be studied from a microscopic perspective. Results indicate that the fixation entropy value of male drivers is 23.08% higher than that of female drivers. In the normal driving state, drivers' fixations on in-vehicle systems are not continuous and usually shift quickly to the front and left areas. Under cognitive workload, drivers' visual transitions are concentrated only in the front and right areas. Under mild cognitive workload, drivers' gaze trajectory is mainly focused on the distant front area; as the workload level increases, the transition trajectory shifts to the junction of the near and far front areas. The current study finds that, compared with a driving simulator, drivers in on-road driving attend to the front area twice as much. The research provides practical guidance for the improvement of traffic safety.
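The one- and two-step glance transition matrices used here are straightforward to construct from a sequence of fixated areas of interest; a sketch with hypothetical AOI labels:

```python
import numpy as np

AOIS = ["front", "left", "right", "in_vehicle"]  # hypothetical fixation regions

def transition_matrix(aoi_sequence, order=1):
    """Row-normalized n-step glance transition probabilities, counted
    empirically from consecutive (s_t, s_{t+order}) pairs."""
    idx = {a: i for i, a in enumerate(AOIS)}
    counts = np.zeros((len(AOIS), len(AOIS)))
    for a, b in zip(aoi_sequence, aoi_sequence[order:]):
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

seq = ["front", "front", "in_vehicle", "front", "left", "front", "right", "front"]
P1 = transition_matrix(seq)            # one-step transitions
P2 = transition_matrix(seq, order=2)   # empirical two-step transitions
print(np.round(P1, 2))
```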
42
Michaiel AM, Abe ET, Niell CM. Dynamics of gaze control during prey capture in freely moving mice. eLife 2020; 9:57458. [PMID: 32706335 PMCID: PMC7438109 DOI: 10.7554/elife.57458] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/23/2020] [Indexed: 12/24/2022] Open
Abstract
Many studies of visual processing are conducted in constrained conditions such as head- and gaze-fixation, and therefore less is known about how animals actively acquire visual information in natural contexts. To determine how mice target their gaze during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We found that the majority of eye movements are compensatory for head movements, thereby serving to stabilize the visual scene. During movement, however, periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Notably, these saccades do not preferentially target the prey location. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse. As you read this sentence, your eyes will move automatically from one word to the next, while your head remains still. Moving your eyes enables you to view each word using your central – as opposed to peripheral – vision. Central vision allows you to see objects in fine detail. It relies on a specialized area of the retina called the fovea. When you move your eyes across a page, you keep the images of the words you are currently reading on the fovea. This provides the detailed vision required for reading. The same process works for tracking moving objects. When watching a bird fly across the sky, you can track its progress by moving your eyes to keep the bird in the center of your visual field, over the fovea. But the majority of mammals do not have a fovea, and yet are still able to track moving targets. Think of a lion hunting a gazelle, for instance, or a cat stalking a mouse. Even mice themselves can track and capture insect prey such as crickets, despite not having a fovea. And yet, exactly how they do this is unknown. This is particularly surprising given that mice have long been used to study the neural basis of vision. By fitting mice with miniature head-mounted cameras, Michaiel et al. now reveal how the rodents track and capture moving crickets. It turns out that unlike animals with a fovea, mice do not use eye movements to track moving objects. Instead, when a mouse wants to look at something new, it moves its head to point at the target. The eyes then follow and ‘land’ on the target. In essence, head movements lead the way and the eyes catch up afterwards. These findings are consistent with the idea that mammals with large heads evolved eye movements to overcome the energy costs of turning the head whenever they want to look at something new. For small animals, moving the head is less energetically expensive. As a result, being able to move the eyes independent of the head is unnecessary. Future work could use a combination of behavioral experiments and brain recordings to reveal how visual areas of the brain process what an animal is seeing in real time.
Affiliation(s)
- Angie M Michaiel
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, United States
- Elliott T. T. Abe
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, United States
- Cristopher M Niell
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, United States
43
Ishrat M, Abrol P. Image complexity analysis with scanpath identification using remote gaze estimation model. MULTIMEDIA TOOLS AND APPLICATIONS 2020; 79:24393-24412. [PMID: 32837248 PMCID: PMC7305931 DOI: 10.1007/s11042-020-09117-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/16/2018] [Revised: 05/20/2020] [Accepted: 05/27/2020] [Indexed: 06/11/2023]
Abstract
Analysis of gaze points has been a vital tool for understanding varied human behavioral patterns and the underlying psychological processing. Gaze points are generally analyzed in terms of two events, fixations and saccades, collectively termed the scanpath. The scanpath can establish correlations between visual scenery and human cognitive tendencies, and scanpaths have been analyzed in domains that include visual perception, usability, memory, visual search, and low-level attributes of an image such as color, illumination, and edges. Visual search is one prominent area that examines the scanpaths of subjects while a target object is searched for in a given set of images, exploring behavioral tendencies of subjects with respect to image complexity. The complexity of an image is governed by the spatial, frequency, and color information present in it. Scanpath-based image complexity analysis characterizes human visual behavior and could support the development of interactive and intelligent systems. There are several sophisticated eye-tracking devices and associated algorithms for recording and classifying scanpaths. However, in the present scenario, when the chances of viral infection (COVID-19) from known and unknown sources are high, it is very important that contactless methods and models be designed. In addition, even though these devices acquire and process eye movement data with fair accuracy, they are intrusive and costly. The objectives of the current research work are to establish the complexity of a given set of images while target objects are searched for, and to present an analysis of the gaze search pattern. To achieve these objectives, a remote gaze estimation and analysis model is proposed for scanpath identification and analysis. The model is a noncontact, low-cost alternative for gaze point tracking and scanpath analysis: gaze points are tracked remotely, as opposed to with the sophisticated wearable eye-tracking devices available on the market, and the model employs readily available software and hardware. In the current work, complexity is derived from the analysis of fixation and saccade gaze points. Based on the results generated by the proposed model, the influence of external stimuli on subjects is studied; the chosen set of images acts as the external stimulus during visual search. In order to statistically analyze scanpaths across subjects, a set of scanpath parameters has been identified, and the model maps and classifies eye movement gaze points into fixations and saccades and generates data for the identified parameters. Viola-Jones and circular Hough transform (CHT) algorithms are used for eye detection and subsequent iris detection, and identification by dispersion threshold (I-DT) is implemented for scanpath identification. The algorithms are customized for better iris and scanpath detection, and additional algorithms were developed for gaze-to-screen mapping and the classification of fixations and saccades. The experiments were carried out on different subjects, and variations during visual search were observed and analyzed. The present model requires no contact between the human subject and any equipment, including eye-tracking devices, screens, or computing devices.
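For reference, I-DT follows Salvucci and Goldberg's classic formulation: grow a time window until its dispersion exceeds a threshold. A compact sketch (thresholds illustrative, not the paper's customized version):

```python
import numpy as np

def idt_fixations(x, y, t, dispersion_threshold=1.0, min_duration=0.1):
    """Dispersion-threshold identification (I-DT): windows of at least
    min_duration whose dispersion ((max-min in x) + (max-min in y)) stays
    under the threshold are fixations; the jumps between them, saccades."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break  # not enough samples left for a full window
        if np.ptp(x[i:j + 1]) + np.ptp(y[i:j + 1]) <= dispersion_threshold:
            while (j + 1 < n and
                   np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2]) <= dispersion_threshold):
                j += 1
            fixations.append((t[i], t[j]))
            i = j + 1
        else:
            i += 1
    return fixations

t = np.arange(0, 1.0, 1 / 60)
x = np.where(t < 0.5, 10.0, 20.0) + np.random.default_rng(9).normal(0, 0.05, t.size)
y = 5.0 + np.random.default_rng(10).normal(0, 0.05, t.size)
print(idt_fixations(x, y, t))  # two fixations separated by the 10-deg jump
```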
Collapse
Affiliation(s)
- Mohsina Ishrat
- Department of Computer Science & IT, University of Jammu (J&K), Jammu, India
| | - Pawanesh Abrol
- Department of Computer Science & IT, University of Jammu (J&K), Jammu, India
| |
Collapse
|
44
|
Katz CN, Patel K, Talakoub O, Groppe D, Hoffman K, Valiante TA. Differential Generation of Saccade, Fixation, and Image-Onset Event-Related Potentials in the Human Mesial Temporal Lobe. Cereb Cortex 2020; 30:5502-5516. [PMID: 32494805 PMCID: PMC7472212 DOI: 10.1093/cercor/bhaa132] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 04/17/2020] [Accepted: 04/21/2020] [Indexed: 11/23/2022] Open
Abstract
Event-related potentials (ERPs) are a commonly used electrophysiological signature for studying mesial temporal lobe (MTL) function during visual memory tasks. The ERPs associated with the onset of visual stimuli (image onset) and with eye movements (saccades and fixations) provide insights into the mechanisms of their generation. We hypothesized that, since eye movements and image onset both provide MTL structures with salient visual information, they might engage similar neural mechanisms. To explore this question, we used intracranial electroencephalographic data from the MTLs of 11 patients with medically refractory epilepsy who participated in a visual search task. We characterized the electrophysiological responses of MTL structures to saccades, fixations, and image onset. We demonstrated that the image-onset response is an evoked/additive response accompanied by a low-frequency power increase. In contrast, ERPs following eye movements appeared to arise from phase resetting of higher frequencies than those underlying the image-onset ERP. Intriguingly, this reset was associated with saccade onset rather than saccade termination (fixation), suggesting that it is likely the MTL response to a corollary discharge rather than a response to visual stimulation. We discuss the distinct mechanistic underpinnings of these responses, which shed light on the neural circuitry involved in visual memory processing.
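The evoked/additive-versus-phase-reset distinction drawn here is conventionally tested by comparing inter-trial phase coherence (ITC) against event-related power: an ITC increase without a matching power increase points to phase resetting. The sketch below shows one generic way to compute ITC for a frequency band; it is not the authors' analysis pipeline, and the band-pass/Hilbert approach and parameter values are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_coherence(trials, fs, band):
    """Inter-trial phase coherence (ITC) for epoched data.
    trials: (n_trials, n_samples) array time-locked to the event;
    fs: sampling rate (Hz); band: (low, high) passband (Hz).
    An ITC increase without a matching power increase is the classic
    signature of phase resetting rather than an additive evoked response."""
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
    analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
    phases = np.angle(analytic)
    # Length of the mean unit phase vector across trials, per time point.
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```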
Collapse
Affiliation(s)
- Chaim N Katz
- Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 1M8, Canada.,Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
| | - Kramay Patel
- Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 1M8, Canada.,Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada.,Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada
| | - Omid Talakoub
- Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 1M8, Canada.,Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
| | - David Groppe
- Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 1M8, Canada
| | - Kari Hoffman
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
| | - Taufik A Valiante
- Krembil Research Institute, Toronto Western Hospital, Toronto, ON M5T 1M8, Canada.,Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada.,Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada.,Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada.,Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada
| |
Collapse
|
45
|
Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020; 12:10.16910/jemr.12.8.3. [PMID: 33828776 PMCID: PMC7881893 DOI: 10.16910/jemr.12.8.3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to apply a simple threshold to velocity or acceleration values, estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues and quantify saccade detection performance on both simulated and human data. Our modified algorithm shows a marked and significant improvement in saccade detection, yielding both more true positives and fewer false negatives, especially at higher noise levels. We conclude that robust estimators can readily be adopted in other common automatic gaze-classification algorithms, given their ease of implementation.
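A minimal sketch of the MAD-based threshold idea follows; the multiplier and the Gaussian scaling constant are illustrative assumptions rather than the exact values of the modified Nyström algorithm.

```python
import numpy as np

def mad_velocity_threshold(velocity, n_mads=5.0):
    """Robust saccade velocity threshold from the median absolute deviation.
    Unlike mean + k*SD, the median and MAD are barely influenced by the
    extreme velocities of the saccades themselves."""
    velocity = np.asarray(velocity)
    med = np.median(velocity)
    mad = 1.4826 * np.median(np.abs(velocity - med))  # scaled to SD units
    return med + n_mads * mad

# Samples above the threshold are candidate saccade samples:
# is_saccade = velocity > mad_velocity_threshold(velocity)
```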
Collapse
|
46
|
Auerbach-Asch CR, Bein O, Deouell LY. Face Selective Neural Activity: Comparisons Between Fixed and Free Viewing. Brain Topogr 2020; 33:336-354. [PMID: 32236786 DOI: 10.1007/s10548-020-00764-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 03/14/2020] [Indexed: 10/24/2022]
Abstract
Event-related potentials (ERPs) are widely used to study category-selective EEG responses to visual stimuli, such as the face-selective N170 component. Typically, this is done by flashing stimuli at the point of static gaze fixation. While allowing for good experimental control, these paradigms ignore the dynamic role of eye movements in natural vision. Fixation-related potentials (FRPs), obtained using simultaneous EEG and eye tracking, overcome this limitation. Various studies have used FRPs to study processes such as lexical processing, target detection, and attention allocation. The goal of this study was to carefully compare face-sensitive activity time-locked to an abrupt stimulus onset at fixation with activity time-locked to a self-generated fixation on a stimulus. Twelve participants completed three experimental conditions: Free-viewing (FRPs), Cued-viewing (FRPs), and Control (ERPs). We used a multiple regression approach to disentangle overlapping activity components. Our results show that the N170 face-effect is evident for the first fixation on a stimulus, whether it follows a self-generated saccade or stimulus appearance at the fixation point. The N170 face-effect has a similar topography across viewing conditions, but there were major differences within each stimulus category. We ascribe these differences to an overlap of the fixation-related lambda response with the N170, and we tested the plausibility of this account using dipole simulations. Finally, the N170 exhibits category-specific adaptation in free viewing. This study establishes the comparability of the free-viewing N170 face-effect with the classic event-related effect, while highlighting the importance of accounting for eye-movement-related effects.
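The multiple regression approach for disentangling overlapping components is commonly implemented as a time-expanded (deconvolution) design matrix solved by least squares. The following is a generic sketch of that idea, not the authors' exact model; the names and window parameterization are assumptions, and real analyses typically use sparse matrices and dedicated toolboxes such as unfold.

```python
import numpy as np

def deconvolve_erps(eeg, events, window, fs):
    """Overlap-corrected event responses via time-expanded regression.
    eeg: 1-D continuous signal (one channel); events: dict mapping
    event type -> array of onset sample indices; window: (start_s, end_s)
    around each event; fs: sampling rate (Hz).
    Each event type gets one dummy-coded column per time lag, so responses
    to temporally overlapping events (e.g., fixation onset and lambda
    response) are estimated jointly instead of smeared together by
    simple averaging."""
    lags = np.arange(int(window[0] * fs), int(window[1] * fs))
    cols = []
    for onsets in events.values():
        onsets = np.asarray(onsets)
        for lag in lags:
            col = np.zeros(len(eeg))
            idx = onsets + lag
            col[idx[(idx >= 0) & (idx < len(eeg))]] = 1.0
            cols.append(col)
    X = np.column_stack(cols)                     # dense here for clarity only
    beta, *_ = np.linalg.lstsq(X, np.asarray(eeg), rcond=None)
    # One (n_lags,) regression-ERP waveform per event type.
    return beta.reshape(len(events), len(lags))
```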
Collapse
Affiliation(s)
- Carmel R Auerbach-Asch
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat-Ram, 91904, Jerusalem, Israel.
| | - Oded Bein
- The Department of Psychology, New York University, 6 Washington Pl, New York, NY, 10003, USA
| | - Leon Y Deouell
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat-Ram, 91904, Jerusalem, Israel
- The Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, 91905, Jerusalem, Israel
| |
Collapse
|
47
|
Kothari R, Yang Z, Kanan C, Bailey R, Pelz JB, Diaz GJ. Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities. Sci Rep 2020; 10:2539. [PMID: 32054884 PMCID: PMC7018838 DOI: 10.1038/s41598-020-59251-5] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 01/23/2020] [Indexed: 11/21/2022] Open
Abstract
The study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in developing algorithms for categorizing gaze events (e.g., fixations, pursuits, saccades, gaze shifts) while the head is free and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, multimodal dataset of eye + head movements from subjects performing everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images, and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 (sample-based Cohen's κ). These labelled data were used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~87% of human performance in detecting fixations and saccades but fall short (50%) in detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers, and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
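The sample-based Cohen's κ used to quantify coder agreement can be computed directly from two coders' per-sample labels. A minimal sketch, with hypothetical function and variable names:

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Sample-based Cohen's kappa between two coders' event labels.
    labels_a, labels_b: equal-length sequences of per-sample labels
    (e.g., 'fixation', 'saccade', 'pursuit', 'gaze shift')."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    p_observed = np.mean(a == b)
    # Chance agreement from each coder's marginal label frequencies.
    classes = np.union1d(a, b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
    return (p_observed - p_chance) / (1.0 - p_chance)
```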
Collapse
Affiliation(s)
- Rakshit Kothari
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA.
| | - Zhizhuo Yang
- Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
| | - Christopher Kanan
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
| | - Reynold Bailey
- Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
| | - Jeff B Pelz
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
| | - Gabriel J Diaz
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
| |
Collapse
|
48
|
Park SY, Bacelar CE, Holmqvist K. Dog eye movements are slower than human eye movements. J Eye Mov Res 2020; 12:10.16910/jemr.12.8.4. [PMID: 33828775 PMCID: PMC7881887 DOI: 10.16910/jemr.12.8.4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Eye movement of a species reflects the visual behavior strategy that it has adapted to during its evolution. What are eye movements of domestic dogs (Canis lupus familiaris) like? Investigations of dog eye movements per se have not been done, despite the increasing number of visuo-cognitive studies in dogs using eye-tracking systems. To fill this gap, we have recorded dog eye movements using a video-based eye-tracking system, and compared the dog data to that of humans. We found dog saccades follow the systematic relationships between saccade metrics previously shown in humans and other animal species. Yet, the details of the relationships, and the quantities of each metric of dog saccades and fixations differed from those of humans. Overall, dog saccades were slower and fixations were longer than those of humans. We hope our findings contribute to existing comparative analyses of eye movement across animal species, and also to improvement of algorithms used for classifying eye movement data of dogs.
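The "systematic relationships between saccade metrics" referred to here are commonly summarized as the saccadic main sequence, e.g., peak velocity saturating with amplitude. A minimal curve-fitting sketch of that relationship follows; the model form and starting values are assumptions, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Saturating main-sequence model: peak velocity vs. amplitude (deg)."""
    return v_max * (1.0 - np.exp(-amplitude / c))

# amplitudes (deg) and peak velocities (deg/s) from classified saccades:
# (v_max, c), _ = curve_fit(main_sequence, amplitudes, peak_velocities,
#                           p0=(500.0, 10.0))  # starting values are guesses
```

Comparing fitted parameters between species (e.g., a lower v_max for dogs) is one way to quantify the slower dog saccades the abstract reports.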
Collapse
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna Medical University of Vienna, University of Vienna, Austria
| | - Catarina Espanca Bacelar
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna Medical University of Vienna, University of Vienna, Austria
| | | |
Collapse
|
49
|
Wadehn F, Weber T, Mack DJ, Heldt T, Loeliger HA. Model-Based Separation, Detection, and Classification of Eye Movements. IEEE Trans Biomed Eng 2020; 67:588-600. [PMID: 31150326 DOI: 10.1109/tbme.2019.2918986] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE We present a physiologically motivated eye movement analysis framework for model-based separation, detection, and classification (MBSDC) of eye movements. By estimating kinematic and neural controller signals for saccades, smooth pursuit, and fixational eye movements in a mechanistic model of the oculomotor system we are able to separate and analyze these eye movements independently. METHODS We extended an established oculomotor model for horizontal eye movements by neural controller signals and by a blink artifact model. To estimate kinematic (position, velocity, acceleration, forces) and neural controller signals from eye position data, we employ Kalman smoothing and sparse input estimation techniques. The estimated signals are used for detecting saccade start and end points, and for classifying the recording into saccades, smooth pursuit, fixations, post-saccadic oscillations, and blinks. RESULTS On simulated data, the reconstruction error of the velocity profiles is about half the error value obtained by the commonly employed approach of filtering and numerical differentiation. In experiments with smooth pursuit data from human subjects, we observe an accurate signal separation. In addition, in neural recordings from non-human primates, the estimated neural controller signals match the real recordings strikingly well. SIGNIFICANCE The MBSDC framework enables the analysis of multi-type eye movement recordings and provides a physiologically motivated approach to study motor commands and might aid the discovery of new digital biomarkers. CONCLUSION The proposed framework provides a model-based approach for a wide variety of eye movement analysis tasks.
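The Kalman smoothing named in the methods can be illustrated with a much simpler model than the full oculomotor plant. The sketch below runs a Kalman filter plus Rauch-Tung-Striebel smoother on a constant-velocity state model; this is a didactic stand-in, not the authors' MBSDC model, and the noise variances are tuning assumptions.

```python
import numpy as np

def rts_smoother(z, dt, q=100.0, r=0.01):
    """Kalman filter + Rauch-Tung-Striebel smoother for 1-D eye position.
    z: noisy position samples; dt: sample interval (s).
    State = [position, velocity] under a constant-velocity motion model;
    q and r are process/measurement noise variances (tuning assumptions).
    Returns smoothed position and velocity."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    n = len(z)
    x_pred = np.zeros((n, 2)); P_pred = np.zeros((n, 2, 2))
    x_filt = np.zeros((n, 2)); P_filt = np.zeros((n, 2, 2))
    x, P = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                      # forward filtering pass
        x, P = F @ x, F @ P @ F.T + Q       # predict
        x_pred[k], P_pred[k] = x, P
        S = H @ P @ H.T + r                 # innovation variance
        K = (P @ H.T) / S                   # Kalman gain, shape (2, 1)
        x = x + (K * (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        x_filt[k], P_filt[k] = x, P
    xs = x_filt.copy()
    for k in range(n - 2, -1, -1):          # backward RTS smoothing pass
        G = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
        xs[k] = x_filt[k] + G @ (xs[k + 1] - x_pred[k + 1])
    return xs[:, 0], xs[:, 1]
```

Because velocity is part of the smoothed state, it is obtained without the noise amplification of filtering followed by numerical differentiation, which is the comparison the abstract draws.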
Collapse
|
50
|
Imaoka Y, Flury A, de Bruin ED. Assessing Saccadic Eye Movements With Head-Mounted Display Virtual Reality Technology. Front Psychiatry 2020; 11:572938. [PMID: 33093838 PMCID: PMC7527608 DOI: 10.3389/fpsyt.2020.572938] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 08/18/2020] [Indexed: 12/16/2022] Open
Abstract
As our society ages globally, neurodegenerative disorders are becoming a relevant issue. Assessment of saccadic eye movements could provide objective values that help us understand the symptoms of these disorders. HTC Corporation launched a new virtual reality (VR) headset, the VIVE Pro Eye, which combines an infrared-based eye-tracking technique with VR technology. The purpose of this study was to evaluate whether the device can be used as an assessment tool for saccadic eye movements and to investigate the technical features of its eye tracking. We developed a measurement system for saccadic eye movements in a simple VR environment on the Unity VR design platform, following an internationally proposed standard saccade measurement protocol. We then measured the saccadic eye movements of seven healthy young adults to analyze the oculometrics of latency, peak velocity, and error rate in pro- and anti-saccade tasks (120 trials in each task). We calculated these parameters using a saccade detection algorithm that we developed following previous studies. Our results revealed a latency of 220.40 ± 43.16 ms, a peak velocity of 357.90 ± 111.99°/s, and an error rate of 0.24 ± 0.41% for the pro-saccade task, and a latency of 343.35 ± 76.42 ms, a peak velocity of 318.79 ± 116.69°/s, and an error rate of 0.66 ± 0.76% for the anti-saccade task. In addition, we observed pupil diameters of 4.30 ± 1.15 mm (left eye) and 4.29 ± 1.08 mm (right eye) for the pro-saccade task, and of 4.21 ± 1.04 mm (left eye) and 4.22 ± 0.97 mm (right eye) for the anti-saccade task. A comparison between the descriptive statistics of previous studies and our results suggests that the VIVE Pro Eye can function as an assessment tool for saccadic eye movements, since our results are within, or close to, the ranges reported previously. Nonetheless, we found technical limitations, especially regarding time-related measurement parameters. Further improvements in the software and hardware of the device and in the measurement protocol, as well as more measurements with diverse age groups and people with different health conditions, are warranted to enhance the whole assessment system for saccadic eye movements.
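Latency and peak velocity of a pro-saccade can be computed from a velocity trace once a detection criterion is fixed. The following is a generic sketch, not the authors' detection algorithm; the velocity threshold and the 100-ms peak-search window are assumed values.

```python
import numpy as np

def prosaccade_metrics(t, gaze_deg, target_onset_t, vel_thresh=30.0):
    """Latency (ms) and peak velocity (deg/s) for one pro-saccade trial.
    t: timestamps (s); gaze_deg: horizontal gaze angle (deg);
    target_onset_t: target appearance time (s); vel_thresh: assumed
    velocity criterion for saccade onset (deg/s)."""
    t, gaze_deg = np.asarray(t), np.asarray(gaze_deg)
    vel = np.abs(np.gradient(gaze_deg, t))        # sample-wise speed
    candidates = np.flatnonzero((t >= target_onset_t) & (vel > vel_thresh))
    if candidates.size == 0:
        return None                               # no saccade detected
    onset = candidates[0]
    latency_ms = (t[onset] - target_onset_t) * 1000.0
    # Search for the peak within ~100 ms after onset (assumed window).
    win = max(1, int(0.1 / np.median(np.diff(t))))
    return latency_ms, vel[onset:onset + win].max()
```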
Collapse
Affiliation(s)
- Yu Imaoka
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Andri Flury
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Eling D de Bruin
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland.,Division of Physiotherapy, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Stockholm, Sweden
| |
Collapse
|