1
Johari K, Bhardwaj R, Kim JJ, Yow WQ, Tan UX. Eye movement analysis for real-world settings using segmented linear regression. Comput Biol Med 2024; 174:108364. [PMID: 38599067] [DOI: 10.1016/j.compbiomed.2024.108364]
Abstract
Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, in uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy, making it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes an approach, Improved Naive Segmented Linear Regression (INSLR), which approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves the existing NSLR approach by employing a hypothesis-clustering algorithm, which redefines the final hypothesis estimation in two steps: (1) At each timestamp, measure the likelihood of each hypothesis in the candidate list by its least-squares fit score and its distance from the k-means centroids of the hypotheses in the list. (2) Filter hypotheses based on a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experimental results show that INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and reduces the gaze prediction error of a low-precision device for 71.1% of samples. This improvement in denoising quality is further validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and by enhanced sensitivity in detecting variations in attention induced by distractors during an online lecture.
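The two-step hypothesis filtering described in the abstract can be sketched as follows. This is a minimal 1-D illustration under stated assumptions, not the authors' implementation: the function name `filter_hypotheses`, its signature, and the parameter values are hypothetical, and hypotheses are modeled as callables mapping time to a predicted gaze coordinate.

```python
import numpy as np

def filter_hypotheses(t, gaze, hypotheses, k=2, threshold=1.5):
    """Score and filter candidate piecewise-linear hypotheses (illustrative).

    A hypothesis is kept if its combined score (least-squares fit error plus
    distance from the nearest k-means centroid of all hypothesis predictions
    at the latest timestamp) stays below `threshold`.
    """
    preds_now = np.array([h(t[-1]) for h in hypotheses])  # predictions at latest sample
    # crude 1-D k-means over the hypothesis predictions (few iterations suffice here)
    centroids = np.linspace(preds_now.min(), preds_now.max(), k)
    for _ in range(10):
        labels = np.argmin(np.abs(preds_now[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = preds_now[labels == j].mean()
    kept = []
    for h, p in zip(hypotheses, preds_now):
        fit_err = np.mean((gaze - np.array([h(ti) for ti in t])) ** 2)  # least-squares fit score
        dist = np.min(np.abs(p - centroids))                            # distance to nearest centroid
        if fit_err + dist < threshold:
            kept.append(h)
    return kept
```

In this toy form, a hypothesis that tracks the observed gaze closely and agrees with the cluster consensus survives the threshold, while an outlier hypothesis is discarded.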
Affiliation(s)
- Kritika Johari
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
- Rishabh Bhardwaj
- Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore
- Jung-Jae Kim
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Wei Quin Yow
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore
- U-Xuan Tan
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
2
Drews M, Dierkes K. Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behav Res Methods 2024. [PMID: 38594440] [DOI: 10.3758/s13428-024-02360-0]
Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking.
Affiliation(s)
- Michael Drews
- Pupil Labs, Sanderstraße 28, 12047 Berlin, Germany
- Kai Dierkes
- Pupil Labs, Sanderstraße 28, 12047 Berlin, Germany
3
Elmadjian C, Gonzales C, Costa RLD, Morimoto CH. Online eye-movement classification with temporal convolutional networks. Behav Res Methods 2023; 55:3602-3620. [PMID: 36220951] [DOI: 10.3758/s13428-022-01978-2]
Abstract
The simultaneous classification of the three most basic eye-movement patterns is known as the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time applications that must instantly adjust or respond to certain eye behaviors would benefit greatly from accurate, robust, fast, and low-latency classification methods. Recent developments based on 1D-CNN-BiLSTM and TCN architectures have been shown to be more accurate and robust than previous solutions, but only for offline applications. In this paper, we propose a TCN classifier for the 3EMCP, adapted to online applications, that does not require look-ahead buffers. We introduce a new lightweight preprocessing technique that allows the TCN to make real-time predictions at about 500 Hz with low latency on commodity hardware. We evaluate the TCN's performance against two other deep neural models, a CNN-LSTM and a CNN-BiLSTM, also adapted to online classification. Furthermore, we compare the performance of the deep neural models against a lightweight real-time Bayesian classifier (I-BDT). Our results, on two publicly available datasets, show that the proposed TCN model consistently outperforms the other methods for all classes. The results also show that, although reasonable accuracy levels are achievable with zero-length look-ahead, the performance of all methods improves with the use of look-ahead information. The codebase, pre-trained models, and datasets are available at https://github.com/elmadjian/OEMC.
Affiliation(s)
- Carlos Elmadjian
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Candy Gonzales
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Carlos H Morimoto
- University of São Paulo, R. do Matão, 1010, 209-C, São Paulo, Brazil
4
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022. [PMID: 35715615] [DOI: 10.3758/s13428-021-01763-7]
Abstract
Detecting eye movements in raw eye tracking data is a well-established research area by itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved provided there is a shared understanding of the pursued goal. This is often formalised via defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable, impeding a common understanding. In this review, we systematically describe and analyse evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from the quantification of the discovered errors. In our analysis we separate these two steps where possible, enabling their almost arbitrary combinations in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation
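The two-step evaluation the review separates (first matching true to predicted events, then quantifying errors) can be illustrated with one of many possible matching strategies. This sketch uses greedy one-to-one IoU matching followed by an event-level F1 score; the function name, the IoU criterion, and the threshold are illustrative choices, not the review's prescribed pipeline.

```python
def event_f1(true_events, pred_events, iou_threshold=0.5):
    """Event-level F1 after an IoU-based matching step (illustrative).

    Events are (start, end, label) tuples with end exclusive.
    """
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union else 0.0

    matched, tp = set(), 0
    for te in true_events:
        # greedily match each true event to the best unmatched same-label prediction
        best_j, best_iou = None, iou_threshold
        for j, pe in enumerate(pred_events):
            if j in matched or pe[2] != te[2]:
                continue
            score = iou(te, pe)
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    precision = tp / len(pred_events) if pred_events else 0.0
    recall = tp / len(true_events) if true_events else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Because matching and scoring are separate, the IoU matcher here could be swapped for another correspondence rule (e.g. majority-overlap) without touching the F1 computation, which is exactly the modularity the review advocates.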
5
BTN: Neuroanatomical aligning between visual object tracking in deep neural network and smooth pursuit in brain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.02.031]
6
De Cock L, Van de Weghe N, Ooms K, Vanhaeren N, Ridolfi M, De Poorter E, De Maeyer P. Taking a closer look at indoor route guidance; usability study to compare an adapted and non-adapted mobile prototype. Spatial Cognition and Computation 2022. [DOI: 10.1080/13875868.2021.1885411]
Affiliation(s)
- Laure De Cock
- Department of Geography, Ghent University, Ghent, Belgium
- Kristien Ooms
- Department of Geography, Ghent University, Ghent, Belgium
- Nina Vanhaeren
- Department of Geography, Ghent University, Ghent, Belgium
- Matteo Ridolfi
- IDLab, Department of Information Technology, Ghent University - Imec, Ghent, Belgium
- Eli De Poorter
- IDLab, Department of Information Technology, Ghent University - Imec, Ghent, Belgium
7
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. Sensors 2021; 21:4686. [PMID: 34300425] [PMCID: PMC8309511] [DOI: 10.3390/s21144686]
Abstract
Many gaze data visualization techniques intuitively show eye movement together with the visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data over the visual stimulus appears complicated and cluttered, making it difficult to gain insight through visualization. To avoid this complication, fixation identification algorithms are often employed for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine learning-based and behavior-based models on various visualizations at each abstraction level, such as attention map, scanpath, and abstract gaze movement visualization.
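Of the classical algorithms compared above, dispersion-threshold identification (I-DT) is easy to sketch. The following is a textbook version for illustration, not the study's code; the function name, thresholds, and index conventions are assumptions.

```python
def idt_fixations(x, y, t, dispersion_threshold=1.0, min_duration=0.1):
    """Textbook dispersion-threshold identification (I-DT), illustrative.

    x, y: gaze coordinates (e.g. degrees); t: timestamps in seconds.
    Returns (start_index, end_index) pairs, end index exclusive.
    """
    def dispersion(i, j):
        # dispersion of window [i, j): horizontal extent plus vertical extent
        return (max(x[i:j]) - min(x[i:j])) + (max(y[i:j]) - min(y[i:j]))

    fixations, i, n = [], 0, len(t)
    while i < n:
        # expand the window until it spans the minimum fixation duration
        j = i + 1
        while j < n and t[j - 1] - t[i] < min_duration:
            j += 1
        if t[j - 1] - t[i] < min_duration:
            break  # not enough samples left for a minimum-duration window
        if dispersion(i, j) > dispersion_threshold:
            i += 1
            continue
        # grow the window while dispersion stays under the threshold
        while j < n and dispersion(i, j + 1) <= dispersion_threshold:
            j += 1
        fixations.append((i, j))
        i = j
    return fixations
```

The paper's point is that the choice of such an algorithm (and of its two parameters here) visibly changes every downstream abstraction, from attention maps to scanpaths.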
8
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. [PMID: 34125160] [PMCID: PMC8212426] [DOI: 10.1167/jov.21.6.8]
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show that the proposed algorithm is more accurate in detecting both normal and slow saccades than the other algorithms.
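The final stage the abstract names, velocity thresholding on the denoised signal, is standard and can be sketched as follows. This is a generic 1-D sketch, not the paper's implementation; the function name and the 30 deg/s threshold are illustrative, and the denoising step is assumed to have already happened.

```python
import numpy as np

def detect_saccades(theta, fs, velocity_threshold=30.0):
    """Velocity-threshold saccade detection on a denoised position trace.

    theta: gaze position in degrees, sampled at fs Hz.
    Returns (onset, offset) sample-index pairs, offset exclusive.
    """
    velocity = np.abs(np.gradient(theta)) * fs      # angular speed in deg/s
    above = velocity > velocity_threshold
    # find rising and falling edges of the boolean mask
    edges = np.diff(above.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    if above[0]:
        onsets = np.r_[0, onsets]
    if above[-1]:
        offsets = np.r_[offsets, len(theta)]
    return list(zip(onsets.tolist(), offsets.tolist()))
```

A plain threshold like this misses slow saccades, whose peak velocity can fall below any fixed cutoff amid noise, which is precisely why the paper denoises with a piecewise-quadratic signal model first.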
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
9
Agtzidis I, Meyhöfer I, Dorr M, Lencer R. Following Forrest Gump: Smooth pursuit related brain activation during free movie viewing. Neuroimage 2020; 216:116491. [DOI: 10.1016/j.neuroimage.2019.116491]
10
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm, built on an existing velocity-based approach, that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par with or better than state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
11
Stuart S, Parrington L, Martini D, Peterka R, Chesnutt J, King L. The Measurement of Eye Movements in Mild Traumatic Brain Injury: A Structured Review of an Emerging Area. Front Sports Act Living 2020; 2:5. [PMID: 33345000] [PMCID: PMC7739790] [DOI: 10.3389/fspor.2020.00005]
Abstract
Mild traumatic brain injury (mTBI), or concussion, occurs following a direct or indirect force to the head that causes a change in brain function. Many neurological signs and symptoms of mTBI can be subtle and transient, and some can persist beyond the usual recovery timeframe, such as balance, cognitive, or sensory disturbances that may predispose to further injury in the future. There is currently no accepted definition or diagnostic criteria for mTBI, and therefore no single assessment has been developed or accepted as being able to identify those with an mTBI. Eye-movement assessment may be useful, as specific eye movements and their metrics can be attributed to specific brain regions or functions, and eye movement involves a multitude of brain regions. Recently, research has focused on quantitative eye-movement assessments using eye-tracking technology for diagnosing and monitoring symptoms of an mTBI. However, the approaches taken to objectively measure eye movements vary with respect to instrumentation, protocols, and recognition of factors that may influence results, such as cognitive function or basic visual function. This review aimed to examine previous work that has measured eye movements in those with mTBI, to inform the development of robust or standardized testing protocols. The Medline/PubMed, CINAHL, PsycINFO, and Scopus databases were searched. Twenty-two articles met the inclusion/exclusion criteria and were reviewed; these examined saccades, smooth pursuits, fixations, and nystagmus in mTBI compared to controls. Current methodologies for data collection, analysis, and interpretation of eye-tracking data in individuals following an mTBI are discussed. In brief, a wide range of eye-movement instruments and outcome measures were reported, but the validity and reliability of devices and metrics were insufficiently reported across studies. Interpretation of outcomes was complicated by poor study reporting of demographics and mTBI-related features (e.g., time since injury), and few studies considered the influence that cognitive or visual functions may have on eye movements. The reviewed evidence suggests that eye movements are impaired in mTBI, but future research is required to establish findings accurately and robustly. Standardization and reporting of eye-movement instruments, data collection procedures, processing algorithms, and analysis methods are required. Recommendations also include comprehensive reporting of demographics, mTBI-related features, and confounding variables.
Affiliation(s)
- Samuel Stuart
- Department of Sport, Exercise and Rehabilitation, Northumbria University, Newcastle upon Tyne, United Kingdom
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Veterans Affairs Portland Health Care System, Portland, OR, United States
- Lucy Parrington
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Veterans Affairs Portland Health Care System, Portland, OR, United States
- Douglas Martini
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Veterans Affairs Portland Health Care System, Portland, OR, United States
- Robert Peterka
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Veterans Affairs Portland Health Care System, Portland, OR, United States
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, United States
- James Chesnutt
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Department of Family Medicine, Oregon Health & Science University, Portland, OR, United States
- Orthopaedics and Rehabilitation, Oregon Health & Science University, Portland, OR, United States
- Laurie King
- Department of Neurology, Oregon Health & Science University, Portland, OR, United States
- Veterans Affairs Portland Health Care System, Portland, OR, United States
12
Startsev M, Agtzidis I, Dorr M. Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes. J Vis 2019; 19:10. [DOI: 10.1167/19.14.10]
Affiliation(s)
- Mikhail Startsev
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Ioannis Agtzidis
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Michael Dorr
- Human-Machine Communication, Technical University of Munich, Munich, Germany
13
Peng H, Li B, He D, Wang J. Identification of fixations, saccades and smooth pursuits based on segmentation and clustering. Intell Data Anal 2019. [DOI: 10.3233/ida-184184]
14
Castro H, Costa G, Lage G, Praça G, Fernández-Echeverría C, Moreno M, Greco P. Comportamiento visual y toma de decisiones en situaciones de ataque en voleibol [Visual behavior and decision-making in attack situations in volleyball]. Revista Internacional de Medicina y Ciencias de la Actividad Física y del Deporte 2019. [DOI: 10.15366/rimcafd2019.75.012]
15
Ehinger BV, Groß K, Ibs I, König P. A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ 2019; 7:e7086. [PMID: 31328028] [PMCID: PMC6625505] [DOI: 10.7717/peerj.7086]
Abstract
Eye-tracking experiments rely heavily on good eye-tracker data quality. Unfortunately, it is often the case that only the spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to serve as a benchmark for an eye-tracker: Eye-tracking quality deteriorates during an experimental session due to head movements, changing illumination or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements; for instance, smooth pursuit movements, blinks or microsaccades, which themselves cannot readily be evaluated by using spatial accuracy or precision alone. To obtain a more comprehensive description of properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking related measures such as: the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movement, microsaccade classification, blink classification, or the influence of head motion. For some measures, true theoretical values exist. For others, a relative comparison to a reference eye-tracker is needed. Therefore, we collected our gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 eye-tracker was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we classified fewer fixations and measured shorter saccade durations for the Pupil Labs glasses. Similarly, we found fewer microsaccades using the Pupil Labs glasses. The accuracy over time decayed only slightly for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, we observed that the measured pupil diameters differed between eye-trackers on the individual subject level but not on the group level.
To conclude, our eye-tracking test battery offers 10 tasks that allow us to benchmark the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs' eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp).
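The two manufacturer-reported measures the abstract starts from are conventionally defined as mean angular offset (accuracy) and sample-to-sample RMS (precision). The following is a simplified 1-D sketch of those common definitions, not the paper's pipeline; the function name and the fixed-target setup are assumptions.

```python
import math

def accuracy_and_precision(gaze, target):
    """Spatial accuracy and RMS precision of a fixation segment (1-D sketch).

    gaze:   recorded gaze angles in degrees while fixating a known target
    target: true target angle in degrees
    Returns (accuracy, precision):
      accuracy  = mean absolute angular offset from the target (deg)
      precision = RMS of successive sample-to-sample differences (deg)
    """
    n = len(gaze)
    accuracy = sum(abs(g - target) for g in gaze) / n
    precision = math.sqrt(
        sum((gaze[i] - gaze[i - 1]) ** 2 for i in range(1, n)) / (n - 1)
    )
    return accuracy, precision
```

Note how the two measures decouple: a systematically offset but steady signal has poor accuracy yet excellent precision, which is why the paper argues these two numbers alone cannot benchmark an eye-tracker.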
Affiliation(s)
- Benedikt V. Ehinger
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Katharina Groß
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Inga Ibs
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
16
Stuart S, Hickey A, Vitorio R, Welman K, Foo S, Keen D, Godfrey A. Eye-tracker algorithms to detect saccades during static and dynamic tasks: a structured review. Physiol Meas 2019; 40:02TR01. [DOI: 10.1088/1361-6579/ab02ab]
17
Harezlak K, Augustyn DR, Kasprowski P. An Analysis of Entropy-Based Eye Movement Events Detection. Entropy 2019; 21:107. [PMID: 33266823] [PMCID: PMC7514590] [DOI: 10.3390/e21020107]
Abstract
Analysis of eye movement has recently attracted considerable attention for exploring people's areas of interest, cognitive abilities, and skills. The basis for using eye movement in these applications is the detection of its main components, namely fixations and saccades, which facilitate understanding of the spatiotemporal processing of a visual scene. In the presented research, a novel approach for the detection of eye movement events is proposed, based on the concept of approximate entropy. Using a multiresolution time-domain scheme, a structure entitled the Multilevel Entropy Map was developed for this purpose. The dataset was collected during an experiment utilizing the "jumping point" paradigm. Eye positions were registered at a 1000 Hz sampling rate. For event detection, the k-nearest-neighbors (k-NN) classifier was applied. The best classification efficiency in recognizing the saccadic period ranged from 83% to 94%, depending on the sample size used. These promising outcomes suggest that the proposed solution may be used as a potential method for describing eye movement dynamics.
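The regularity measure underlying the Multilevel Entropy Map, approximate entropy (ApEn), has a standard definition that can be sketched directly. This is the textbook O(n²) formulation for illustration; the parameter values (m = 2, r = 0.2) are typical choices, not necessarily those used in the study.

```python
import math

def approximate_entropy(signal, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D signal (textbook formulation).

    m: template length; r: similarity tolerance (Chebyshev distance).
    Low values indicate a regular, predictable signal.
    """
    n = len(signal)

    def phi(m):
        templates = [signal[i:i + m] for i in range(n - m + 1)]
        log_counts = []
        for a in templates:
            # fraction of templates within tolerance r (self-match included, by definition)
            c = sum(1 for b in templates
                    if max(abs(x - y) for x, y in zip(a, b)) <= r)
            log_counts.append(math.log(c / len(templates)))
        return sum(log_counts) / len(templates)

    return phi(m) - phi(m + 1)
```

A perfectly repetitive sequence yields an ApEn near zero, while an irregular one scores higher; the saccadic segments of a gaze signal are exactly the stretches where such local regularity breaks down.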
18
1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behav Res Methods 2018; 51:556-572. [DOI: 10.3758/s13428-018-1144-2]
19
20
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Is human classification by experienced untrained observers a gold standard in fixation detection? Behav Res Methods 2018; 50:1864-1881. [PMID: 29052166] [PMCID: PMC7875941] [DOI: 10.3758/s13428-017-0955-x]
Abstract
Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the classifications of the human coders agreed almost perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
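The sample-based Cohen's kappa that the study contrasts with event-level measures has a compact standard definition, sketched below for illustration (the function name and list-based interface are assumptions, not the study's code).

```python
def cohens_kappa(labels_a, labels_b):
    """Sample-based Cohen's kappa between two coders' label sequences.

    Chance-corrected agreement: (observed - expected) / (1 - expected),
    where `expected` is the agreement two independent coders with the
    same label frequencies would reach by chance.
    """
    n = len(labels_a)
    classes = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in classes)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0
```

Because kappa scores agreement per sample, two coders can reach near-perfect kappa while still producing very different fixation counts and durations, which is the gap the event-based F1, RTO, and RTD measures are meant to close.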
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands.
| | - Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
| | - Marcus Nyström
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Richard Andersson
- Eye Information Group, IT University of Copenhagen, Copenhagen, Denmark
- Department of Philosophy and Cognitive Sciences, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
21
Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. ROYAL SOCIETY OPEN SCIENCE 2018; 5:180502. [PMID: 30225041 PMCID: PMC6124022 DOI: 10.1098/rsos.180502] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 08/06/2018] [Indexed: 06/08/2023]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field, by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slow or fast; (ii) the functional component: what purposes does the eye movement (or lack thereof) serve; (iii) the coordinate system used: relative to what does the eye move; (iv) the computational definition: how is the event represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
22
Abstract
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). In an effort to show the practical utility of the proposed method to applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
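As a hedged sketch of how such a sample-level classifier can be set up (the feature set, window size, and function names below are illustrative, not the published method's): compute simple per-sample features such as speed, acceleration, and local dispersion, then hand them, with previously coded labels, to any off-the-shelf classifier. Only the NumPy feature code is executed here; the classifier call is shown as a comment.

```python
import numpy as np

def gaze_features(x, y, fs=1000.0, win=5):
    """Per-sample features for an event classifier: speed (deg/s),
    absolute acceleration, and local positional dispersion."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    accel = np.abs(np.gradient(speed) * fs)
    half = win // 2
    disp = np.array([np.hypot(x[max(0, i - half):i + half + 1].std(),
                              y[max(0, i - half):i + half + 1].std())
                     for i in range(len(x))])
    return np.column_stack([speed, accel, disp])

# Synthetic trace: fixation at 0 deg, 10-sample saccade, fixation at 10 deg
rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(50), np.linspace(0, 10, 10), np.full(50, 10.0)])
x += rng.normal(0, 0.05, x.size)
y = rng.normal(0, 0.05, x.size)
F = gaze_features(x, y)
# These features plus hand-coded labels could now train, e.g.,
# sklearn.ensemble.RandomForestClassifier(n_estimators=200).fit(F, labels)
```

The appeal described in the abstract is that the thresholds are learned from coded examples rather than tuned by hand per data-quality level.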
23
Abstract
The present study evaluates the quality of gaze data produced by a low-cost eye tracker (The Eye Tribe©, The Eye Tribe, Copenhagen, Denmark) in order to verify its suitability for the performance of scientific research. An integrated methodological framework, based on artificial eye measurements and human eye tracking data, is proposed towards the implementation of the experimental process. The obtained results are used to remove the modeled noise through manual filtering and when detecting samples (fixations). The outcomes aim to serve as a robust reference for the verification of the validity of low-cost solutions, as well as a guide for the selection of appropriate fixation parameters towards the analysis of experimental data based on the used low-cost device. The results show higher deviation values for the real test persons in comparison to the artificial eyes, but these are still acceptable to be used in a scientific setting.
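The two standard data-quality measures behind such evaluations can be sketched in a few lines: accuracy as the mean offset of gaze samples from a known target, and precision as the RMS of sample-to-sample distances. The function names are illustrative, not from the paper.

```python
import numpy as np

def accuracy_deg(gx, gy, tx, ty):
    """Mean Euclidean offset of gaze samples from the known target position."""
    return float(np.mean(np.hypot(np.asarray(gx) - tx, np.asarray(gy) - ty)))

def precision_rms_s2s(gx, gy):
    """RMS of sample-to-sample distances, a common precision measure."""
    dx, dy = np.diff(gx), np.diff(gy)
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

# A recording that is offset by 1 deg from the target but perfectly stable:
gx, gy = np.full(100, 1.0), np.zeros(100)
acc = accuracy_deg(gx, gy, tx=0.0, ty=0.0)   # -> 1.0 (systematic offset)
prec = precision_rms_s2s(gx, gy)             # -> 0.0 (no sample-to-sample noise)
```

Artificial-eye recordings, as used in the study, isolate the device's own noise (precision) because the "eye" cannot move, while human recordings add oculomotor and calibration error on top.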
24
Unsupervised parsing of gaze data with a beta-process vector auto-regressive hidden Markov model. Behav Res Methods 2017; 50:2074-2096. [DOI: 10.3758/s13428-017-0974-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
25
Hessels RS, Niehorster DC, Kemner C, Hooge ITC. Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC). Behav Res Methods 2017; 49:1802-1823. [PMID: 27800582 PMCID: PMC5628191 DOI: 10.3758/s13428-016-0822-1] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm-identification by two-means clustering (I2MC)-built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
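A highly simplified sketch of the two-means idea (not the published I2MC implementation; the window size, weighting scheme, and crude k-means below are illustrative): slide a window over the gaze samples, split each window into two clusters, and add weight 1/(number of cluster transitions) to every sample in the window. A window spanning a saccade splits cleanly into two clusters with a single transition and so receives high weight; a window containing only fixation noise splits arbitrarily, producing many transitions and little weight, which is what makes the approach robust to noise.

```python
import numpy as np

def two_means_labels(xy, iters=10):
    """Crude 2-means on 2-D samples, seeded with the first and last sample."""
    c = np.array([xy[0], xy[-1]], dtype=float)
    lab = np.zeros(len(xy), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(xy[:, None, :] - c[None, :, :], axis=2)
        lab = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = xy[lab == k].mean(axis=0)
    return lab

def clustering_weights(xy, win=8):
    """Per-sample saccade evidence: windows with one clean cluster
    transition contribute weight 1; noisy windows contribute little."""
    w = np.zeros(len(xy))
    for i in range(len(xy) - win + 1):
        lab = two_means_labels(xy[i:i + win])
        ntrans = np.count_nonzero(np.diff(lab))
        if ntrans:
            w[i:i + win] += 1.0 / ntrans
    return w

rng = np.random.default_rng(0)
fix1 = rng.normal([0.0, 0.0], 0.1, size=(30, 2))
fix2 = rng.normal([5.0, 5.0], 0.1, size=(30, 2))
xy = np.vstack([fix1, fix2])      # abrupt gaze shift at sample 30
w = clustering_weights(xy)        # weight peaks around the shift
```

Thresholding the weight trace would then mark the fixation boundary; the published algorithm additionally downsamples, interpolates data loss, and merges nearby candidates.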
Affiliation(s)
- Roy S Hessels
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Humanities Laboratory and Department of Psychology, Lund University, Lund, Sweden
- Institute for Psychology, University of Muenster, Muenster, Germany
- Chantal Kemner
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Brain Center Rudolf Magnus, University Medical Centre Utrecht, Utrecht, The Netherlands
- Ignace T C Hooge
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands