1. Strauch C, Hoogerbrugge AJ, Ten Brink AF. Gaze data of 4243 participants shows link between leftward and superior attention biases and age. Exp Brain Res 2024; 242:1327-1337. PMID: 38555556; PMCID: PMC11108882; DOI: 10.1007/s00221-024-06823-w.
Abstract
Healthy individuals typically show more attention to the left than to the right (known as pseudoneglect), and to the upper than to the lower visual field (known as altitudinal pseudoneglect). These biases are thought to reflect asymmetries in neural processes. Attention biases have been used to investigate how these neural asymmetries change with age. However, inconsistent results have been reported regarding the presence and direction of age-related effects on horizontal and vertical attention biases. The observed inconsistencies may be due to insensitive measures and small sample sizes that usually only feature extreme age groups. We investigated whether spatial attention biases, as indexed by gaze position during free viewing of a single image, are influenced by age. We analysed free-viewing data from 4,243 participants aged 5-65 years and found that attention biases shifted to the right and superior directions with increasing age. These findings are consistent with the idea that cerebral asymmetries develop with age and support the hypothesized origin of the leftward bias. Age modulations were found only for the first seven fixations, corresponding to the time window in which an absolute leftward bias in free viewing was previously observed. We interpret this as evidence that the horizontal and vertical attention biases are primarily present when orienting attention to a novel stimulus - and that age modulations of attention orienting are not global modulations of spatial attention. Taken together, our results suggest that attention orienting may be modulated by age and that cortical asymmetries may change with age.
Affiliation(s)
- Christoph Strauch: Experimental Psychology, Utrecht University, Helmholtz Institute, Heidelberglaan 1, Utrecht, 3584 CS, The Netherlands
- Alex J Hoogerbrugge: Experimental Psychology, Utrecht University, Helmholtz Institute, Heidelberglaan 1, Utrecht, 3584 CS, The Netherlands
- Antonia F Ten Brink: Experimental Psychology, Utrecht University, Helmholtz Institute, Heidelberglaan 1, Utrecht, 3584 CS, The Netherlands
2. Lencastre P, Lotfigolian M, Lind PG. Identifying Autism Gaze Patterns in Five-Second Data Records. Diagnostics (Basel) 2024; 14:1047. PMID: 38786345; PMCID: PMC11119316; DOI: 10.3390/diagnostics14101047.
Abstract
One of the most challenging problems when diagnosing autism spectrum disorder (ASD) is the need for long sets of data. Collecting data during such long periods is challenging, particularly when dealing with children. This challenge motivates the investigation of possible classifiers of ASD that do not need such long data sets. In this paper, we use eye-tracking data sets covering only 5 s and introduce one metric able to distinguish between ASD and typically developed (TD) gaze patterns based on such short time series, and we compare it with two benchmarks: one using traditional eye-tracking metrics and one using a state-of-the-art AI classifier. Although the data can only track possible disorders in visual attention and our approach is not a substitute for medical diagnosis, we find that our newly introduced metric can achieve an accuracy of 93% in classifying eye gaze trajectories from children with ASD, surpassing both benchmarks while needing fewer data. Using only a 5 s data series, our method classifies more accurately than standard eye-tracking metrics and is on par with the best AI benchmarks, even when these are trained with longer time series. We also discuss the advantages and limitations of our method in comparison with the state of the art: besides needing a low amount of data, this method is a simple, understandable, and straightforward criterion to apply, which often contrasts with "black box" AI methods.
Affiliation(s)
- Pedro Lencastre: Department of Computer Science, Oslo Metropolitan University, N-0130 Oslo, Norway; OsloMet Artificial Intelligence Lab, Pilestredet 52, N-0166 Oslo, Norway; NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166 Oslo, Norway
- Maryam Lotfigolian: Department of Computer Science, Oslo Metropolitan University, N-0130 Oslo, Norway
- Pedro G. Lind: Department of Computer Science, Oslo Metropolitan University, N-0130 Oslo, Norway; OsloMet Artificial Intelligence Lab, Pilestredet 52, N-0166 Oslo, Norway; NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166 Oslo, Norway; Simula Research Laboratory, Numerical Analysis and Scientific Computing, N-0164 Oslo, Norway
3. Drews M, Dierkes K. Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behav Res Methods 2024. PMID: 38594440; DOI: 10.3758/s13428-024-02360-0.
Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking.
Affiliation(s)
- Michael Drews: Pupil Labs, Sanderstraße 28, 12047 Berlin, Germany
- Kai Dierkes: Pupil Labs, Sanderstraße 28, 12047 Berlin, Germany
4. Nyström M, Andersson R, Niehorster DC, Hessels RS, Hooge ITC. What is a blink? Classifying and characterizing blinks in eye openness signals. Behav Res Methods 2024; 56:3280-3299. PMID: 38424292; PMCID: PMC11133197; DOI: 10.3758/s13428-023-02333-9.
Abstract
Blinks, the closing and opening of the eyelids, are used in a wide array of fields where human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can be estimated only indirectly from this signal, since it does not explicitly contain information about the eyelid position. We ask whether blinks detected from an eye openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks) and how robust blink detection is when data quality is low. In terms of rate, there was an almost-perfect overlap between EO and PS blinks (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range where data quality was high, and a high overlap (F1 score: 0.94) when the head was at the edge of the tracking range where data quality was worse. When there was a difference in blink rate between EO and PS blinks, it was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer in EO blinks compared to PS blinks. Moreover, the dynamics of EO blinks were similar to results from previous literature. We conclude that the eye openness signal together with our proposed blink detection algorithm provides an advantageous method to detect and describe blinks in greater detail.
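The eye-openness idea above lends itself to a compact illustration. The following Python sketch (not the authors' published algorithm; the openness signal, the relative threshold, and the minimum duration are invented for illustration) marks candidate blinks wherever the eyelid distance drops below a fraction of its baseline and reports their durations.

```python
import numpy as np

def detect_blinks(openness, fs, rel_threshold=0.5, min_duration_ms=30):
    """Find candidate blinks as intervals where eye openness (mm) drops
    below a fraction of the median open-eye level. Illustrative only."""
    openness = np.asarray(openness, dtype=float)
    baseline = np.nanmedian(openness)             # typical open-eye lid distance
    closed = openness < rel_threshold * baseline  # samples with lids (partly) closed

    blinks, start = [], None
    for i, is_closed in enumerate(closed):
        if is_closed and start is None:
            start = i
        elif not is_closed and start is not None:
            dur_ms = (i - start) / fs * 1000.0
            if dur_ms >= min_duration_ms:          # ignore very short dips
                blinks.append((start, i, dur_ms))
            start = None
    return blinks

# Example: 600 Hz recording with one simulated 100-ms blink
fs = 600
openness = np.full(600, 10.0)   # 1 s of fully open eyes (10 mm lid distance)
openness[200:260] = 1.0         # 60 samples of near-closure
print(detect_blinks(openness, fs))  # [(200, 260, 100.0)]
```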
Affiliation(s)
- Marcus Nyström: Lund University Humanities Lab, Box 201, SE-221 00 Lund, Sweden
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Box 201, SE-221 00 Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
5. Byrne SA, Nyström M, Maquiling V, Kasneci E, Niehorster DC. Precise localization of corneal reflections in eye images using deep learning trained on synthetic data. Behav Res Methods 2024; 56:3226-3241. PMID: 38114880; PMCID: PMC11133043; DOI: 10.3758/s13428-023-02297-w.
Abstract
We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely using synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, with a 3-41.5% reduction in spatial precision values (i.e., lower noise) across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our approach provides precise CR center localization and a solution to the data availability problem, which is one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to the superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
Affiliation(s)
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Virmarie Maquiling: Human-Centered Technologies for Learning, Technical University of Munich, Munich, Germany
- Enkelejda Kasneci: Human-Centered Technologies for Learning, Technical University of Munich, Munich, Germany
- Diederick C Niehorster: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; Department of Psychology, Lund University, Lund, Sweden
6. Hooge ITC, Niehorster DC, Nyström M, Hessels RS. Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera. Behav Res Methods 2024. PMID: 38200239; DOI: 10.3758/s13428-023-02316-w.
Abstract
We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite a transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye movement terminology when discussing world-fixed gaze phenomena. We propose to use more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
7. Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023; 55:4128-4142. PMID: 36326998; PMCID: PMC10700439; DOI: 10.3758/s13428-022-02010-3.
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
8. Elmadjian C, Gonzales C, Costa RLD, Morimoto CH. Online eye-movement classification with temporal convolutional networks. Behav Res Methods 2023; 55:3602-3620. PMID: 36220951; DOI: 10.3758/s13428-022-01978-2.
Abstract
The simultaneous classification of the three most basic eye-movement patterns is known as the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time applications that must instantly adjust or respond to certain eye behaviors would highly benefit from accurate, robust, fast, and low-latency classification methods. Recent developments based on 1D-CNN-BiLSTM and TCN architectures have proven to be more accurate and robust than previous solutions, but only for offline applications. In this paper, we propose a TCN classifier for the 3EMCP, adapted to online applications, that does not require look-ahead buffers. We introduce a new lightweight preprocessing technique that allows the TCN to make real-time predictions at about 500 Hz with low latency using commodity hardware. We evaluate the TCN performance against two other deep neural models, a CNN-LSTM and a CNN-BiLSTM, also adapted to online classification. Furthermore, we compare the performance of the deep neural models against a lightweight real-time Bayesian classifier (I-BDT). Our results, considering two publicly available datasets, show that the proposed TCN model consistently outperforms other methods for all classes. The results also show that, though it is possible to achieve reasonable accuracy levels with zero-length look-ahead, the performance of all methods improves with the use of look-ahead information. The codebase, pre-trained models, and datasets are available at https://github.com/elmadjian/OEMC.
Affiliation(s)
- Carlos Elmadjian: University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Candy Gonzales: University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Carlos H Morimoto: University of São Paulo, R. do Matão, 1010, 209-C, São Paulo, Brazil
9. Park SY, Holmqvist K, Niehorster DC, Huber L, Virányi Z. How to improve data quality in dog eye tracking. Behav Res Methods 2023; 55:1513-1536. PMID: 35680764; PMCID: PMC10250523; DOI: 10.3758/s13428-022-01788-6.
Abstract
Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated whether and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared how they blinked, and examined how the differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with the tracking methods of these systems, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of dog data led to larger differences in how two different event detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations help future dog eye-tracking studies acquire quality data to enable robust comparisons of visual cognition between dogs and humans.
Affiliation(s)
- Soon Young Park: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria; Medical University Vienna, Vienna, Austria; University of Vienna, Vienna, Austria
- Kenneth Holmqvist: Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland; Department of Psychology, Regensburg University, Regensburg, Germany; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Ludwig Huber: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria; Medical University Vienna, Vienna, Austria; University of Vienna, Vienna, Austria
- Zsófia Virányi: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria; Medical University Vienna, Vienna, Austria; University of Vienna, Vienna, Austria
10. Artificial intelligence for automated detection of large mammals creates path to upscale drone surveys. Sci Rep 2023; 13:947. PMID: 36653478; PMCID: PMC9849265; DOI: 10.1038/s41598-023-28240-9.
Abstract
Imagery from drones is becoming common in wildlife research and management, but processing data efficiently remains a challenge. We developed a methodology for training a convolutional neural network model on large-scale mosaic imagery to detect and count caribou (Rangifer tarandus), compare model performance with an experienced observer and a group of naïve observers, and discuss the use of aerial imagery and automated methods for large mammal surveys. Combining images taken at 75 m and 120 m above ground level, a faster region-based convolutional neural network (Faster-RCNN) model was trained using annotated imagery with the labels: "adult caribou", "calf caribou", and "ghost caribou" (animals moving between images, producing blurred individuals during photogrammetry processing). Accuracy, precision, and recall of the model were 80%, 90%, and 88%, respectively. Detections between the model and the experienced observer were highly correlated (Pearson: 0.96-0.99, P value < 0.05). The model was generally more effective than naïve observers in detecting adults, calves, and ghosts at both altitudes. We also discuss the need to improve the consistency of observers' annotations if manual review is to be used to train models accurately. Generalization of automated methods for large mammal detection will be necessary for large-scale studies with diverse platforms, airspace restrictions, and sensor capabilities.
11. Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. PMID: 35384605; PMCID: PMC9535040; DOI: 10.3758/s13428-021-01762-8.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist: Department of Psychology, Nicolaus Copernicus University, Torun, Poland; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa; Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom: Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut: Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang: Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany; Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe: School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn: School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler: Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham: Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen: Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci: Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox: Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok: Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands; Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee: University of Southampton, Southampton, UK
- Joy Yeonjoo Lee: School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta: TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann: Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin: Department of Management, Aarhus University, Aarhus, Denmark; Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka: Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz: Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif: School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic: Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman: Eyeviation Systems, Herzliya, Israel; Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij: Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
12. Friedman L, Prokopenko V, Djanian S, Katrychuk D, Komogortsev OV. Factors affecting inter-rater agreement in human classification of eye movements: a comparison of three datasets. Behav Res Methods 2023; 55:417-427. PMID: 35411475; DOI: 10.3758/s13428-021-01782-4.
Abstract
Manual classification of eye-movements is used in research and as a basis for comparison with automatic algorithms in the development phase. However, human classification will not be useful if it is unreliable and unrepeatable. Therefore, it is important to know what factors might influence and enhance the accuracy and reliability of human classification of eye-movements. In this report, we compare three datasets of human manual classification: two from earlier studies and one of our own, which we present here for the first time. For inter-rater reliability, we assess both the event-level F1 score and the sample-level Cohen's κ across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
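For readers unfamiliar with the sample-level measure used above, the following Python sketch shows a generic computation of Cohen's κ over two raters' per-sample event labels (a textbook implementation, not the authors' analysis code; the example label sequences are invented).

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Sample-level Cohen's kappa between two raters' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n      # p_o
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum(counts_a[c] / n * counts_b[c] / n for c in categories)  # p_e
    return (observed - expected) / (1.0 - expected)

# Two raters labelling ten gaze samples as fixation (F) or saccade (S)
rater1 = list("FFFFSSFFFF")
rater2 = list("FFFSSSFFFF")
print(round(cohens_kappa(rater1, rater2), 3))  # ~0.737
```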
Affiliation(s)
- Lee Friedman: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA
- Vladyslav Prokopenko: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA
- Shagen Djanian: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA; Department of Computer Science, Aalborg University, Selma Lagerlofs Vej 300, 9220 Aalborg East, Denmark
- Dmytro Katrychuk: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA
- Oleg V Komogortsev: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas 78640, USA
13. Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Fixation classification: how to merge and select fixation candidates. Behav Res Methods 2022; 54:2765-2776. PMID: 35023066; PMCID: PMC9729319; DOI: 10.3758/s13428-021-01723-1.
Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
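To make the reported selection rules concrete, the sketch below applies the two rules highlighted in the abstract to a list of fixation candidates: merge candidates whose centroids lie closer than a minimal saccade amplitude, then discard merged fixations shorter than a minimal duration. The data structure and the simplified centroid handling are assumptions for illustration, not the authors' implementation.

```python
def select_fixations(candidates, min_amplitude_deg=1.0, min_duration_ms=60):
    """candidates: list of dicts with 'start_ms', 'end_ms', 'x', 'y' (deg).
    Merge candidates whose centroids are closer than min_amplitude_deg,
    then drop merged fixations shorter than min_duration_ms."""
    merged = []
    for cand in sorted(candidates, key=lambda c: c["start_ms"]):
        if merged:
            prev = merged[-1]
            dist = ((cand["x"] - prev["x"]) ** 2 + (cand["y"] - prev["y"]) ** 2) ** 0.5
            if dist < min_amplitude_deg:          # intervening "saccade" too small
                prev["end_ms"] = cand["end_ms"]   # merge into the previous fixation
                continue
        merged.append(dict(cand))
    return [f for f in merged
            if f["end_ms"] - f["start_ms"] >= min_duration_ms]

candidates = [
    {"start_ms": 0,   "end_ms": 120, "x": 1.0, "y": 1.0},
    {"start_ms": 130, "end_ms": 180, "x": 1.4, "y": 1.2},  # 0.45 deg away: merged
    {"start_ms": 200, "end_ms": 230, "x": 5.0, "y": 1.0},  # only 30 ms: dropped
]
print(select_fixations(candidates))  # one fixation spanning 0-180 ms
```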
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
14. Birawo B, Kasprowski P. Review and Evaluation of Eye Movement Event Detection Algorithms. Sensors (Basel) 2022; 22:8810. PMID: 36433407; PMCID: PMC9699548; DOI: 10.3390/s22228810.
Abstract
Eye tracking is a technology aimed at understanding the direction of the human gaze. Event detection is a process of detecting and classifying eye movements that are divided into several types. Nowadays, event detection is almost exclusively done by applying a detection algorithm to the raw recorded eye-tracking data. However, due to the lack of a standard procedure for how to perform evaluations, evaluating and comparing various detection algorithms in eye-tracking signals is very challenging. In this paper, we used data from the high-speed SMI HiSpeed 1250 eye-tracker system and compared event detection performance. The evaluation focused on fixation, saccade and post-saccadic oscillation classification. It used sample-by-sample comparisons between the algorithms and assessed inter-agreement between algorithms and human coders. The impact of varying threshold values on threshold-based algorithms was examined and the optimum threshold values were determined. This evaluation differed from previous evaluations by using the same dataset to evaluate the event detection algorithms and human coders. We evaluated and compared threshold-based, machine-learning-based, and deep-learning event detection algorithms. The evaluation results show that all methods perform well for fixation and saccade detection; however, there are substantial differences in classification results. Generally, CNN (Convolutional Neural Network) and RF (Random Forest) algorithms outperform threshold-based methods.
15. Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022. PMID: 35715615; DOI: 10.3758/s13428-021-01763-7.
Abstract
Detecting eye movements in raw eye tracking data is a well-established research area by itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved provided a shared understanding of the pursued goal. This is often formalised via defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable, impeding a common understanding. In this review, we systematically describe and analyse evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from the quantification of the discovered errors. In our analysis we separate these two steps where possible, enabling their almost arbitrary combinations in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation .
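As an illustration of keeping the matching step separate from error quantification, the sketch below first matches ground-truth and predicted events greedily by temporal intersection-over-union and then derives an event-level F1 score. It is a simplified stand-in for the strategies compared in the review, not code from the accompanying library; the interval data and the IoU cut-off are invented.

```python
def event_level_f1(true_events, pred_events, min_iou=0.5):
    """Match (start, end) intervals greedily by temporal intersection-over-union,
    then compute an event-level F1 score. Illustrative only."""
    used = set()
    tp = 0
    for t_start, t_end in true_events:
        best_j, best_iou = None, min_iou
        for j, (p_start, p_end) in enumerate(pred_events):
            if j in used:
                continue
            inter = max(0.0, min(t_end, p_end) - max(t_start, p_start))
            union = max(t_end, p_end) - min(t_start, p_start)
            iou = inter / union if union > 0 else 0.0
            if iou >= best_iou:
                best_j, best_iou = j, iou
        if best_j is not None:
            used.add(best_j)
            tp += 1
    fp = len(pred_events) - tp   # predicted events with no matching ground truth
    fn = len(true_events) - tp   # ground-truth events the detector missed
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Ground-truth vs. detected fixations, as (start_ms, end_ms) intervals
true_fix = [(0, 200), (300, 500)]
pred_fix = [(10, 190), (320, 480), (600, 650)]
print(event_level_f1(true_fix, pred_fix))  # 0.8
```

Swapping in a different matching rule (e.g., majority overlap or earliest overlap) while keeping the same error counts is exactly the kind of combination the review examines.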
16. BTN: Neuroanatomical aligning between visual object tracking in deep neural network and smooth pursuit in brain. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.02.031.
17. Ortega JL. Classification and analysis of PubPeer comments: How a web journal club is used. J Assoc Inf Sci Technol 2021. DOI: 10.1002/asi.24568.
Affiliation(s)
- José Luis Ortega: Institute for Advanced Social Studies (IESA-CSIC), Córdoba, Spain; Joint Research Unit Knowledge Transfer and Innovation (UCO-CSIC), Córdoba, Spain
18. Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. PMID: 34125160; PMCID: PMC8212426; DOI: 10.1167/jov.21.6.8.
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
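The final velocity-thresholding stage described above can be sketched as follows. Note that the authors' piecewise-quadratic denoising step is replaced here by a plain finite-difference velocity, and the threshold and sampling rate are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def detect_saccades(x_deg, y_deg, fs, velocity_threshold=30.0):
    """Label samples whose 2D gaze velocity (deg/s) exceeds a threshold,
    and return the (start, end) sample indices of each run."""
    vx = np.gradient(np.asarray(x_deg, dtype=float)) * fs
    vy = np.gradient(np.asarray(y_deg, dtype=float)) * fs
    speed = np.hypot(vx, vy)
    above = speed > velocity_threshold

    saccades, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            saccades.append((start, i))
            start = None
    if start is not None:
        saccades.append((start, len(above)))
    return saccades

# 500 Hz toy trace: fixation, a 10-deg rightward saccade over 40 ms, fixation
fs = 500
x = np.concatenate([np.zeros(100), np.linspace(0, 10, 20), np.full(100, 10.0)])
y = np.zeros_like(x)
print(detect_saccades(x, y, fs))  # one run covering the ramp, approx. (100, 120)
```

Because slow saccades may never exceed a fixed velocity threshold applied to noisy data, denoising the signal first (as the paper does) is what makes a low threshold usable.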
Affiliation(s)
- Weiwei Dai: Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick: Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo: Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker: Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson: Department of Neurology, School of Medicine, New York University, New York, NY, USA
19.
Abstract
Due to its reported high sampling frequency and precision, the Tobii Pro Spectrum is of potential interest to researchers who want to study small eye movements during fixation. We test how suitable the Tobii Pro Spectrum is for research on microsaccades by computing data-quality measures and common properties of microsaccades and comparing these to the currently most used system in this field: the EyeLink 1000 Plus. Results show that the EyeLink data provide higher RMS precision and microsaccade rates compared with data acquired with the Tobii Pro Spectrum. However, both systems provide microsaccades with similar directions and shapes, as well as rates consistent with previous literature. Data acquired at 1200 Hz with the Tobii Pro Spectrum provide results that are more similar to the EyeLink, compared to data acquired at 600 Hz. We conclude that the Tobii Pro Spectrum is a useful tool for researchers investigating microsaccades.
20.
Abstract
Eye trackers are sometimes used to study the miniature eye movements such as drift that occur while observers fixate a static location on a screen. Specifically, analysis of such eye-tracking data can be performed by examining the temporal spectrum composition of the recorded gaze position signal, allowing its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on, and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial eye data. These results suggest that color in the eye-movement recordings was due to filters for all eye trackers except the most precise eye tracker, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should be careful to examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
21. Correction to: "Is human classification by experienced untrained observers a gold standard in fixation detection?". Behav Res Methods 2021; 53:943-944. PMID: 33569711; DOI: 10.3758/s13428-021-01537-1.
22.
Abstract
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8-3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of its characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
23.
Abstract
For evaluating whether an eye-tracker is suitable for measuring microsaccades, Poletti & Rucci (2016) propose that a measure called 'resolution' could be better than the more established root-mean-square of the sample-to-sample distances (RMS-S2S). Many open questions exist around the resolution measure, however. Resolution needs to be calculated using data from an artificial eye that can be turned in very small steps. Furthermore, resolution has an unclear and uninvestigated relationship to the RMS-S2S and STD (standard deviation) measures of precision (Holmqvist & Andersson, 2017, p. 159-190), and there is another metric by the same name (Clarke, Ditterich, Drüen, Schönfeld, and Steineke 2002), which instead quantifies the errors of amplitude measurements. In this paper, we present a mechanism, the Stepperbox, for rotating artificial eyes in arbitrary angles from 1' (arcmin) upward. We then use the Stepperbox to find the minimum reliably detectable rotations in 11 video-based eye-trackers (VOGs) and the Dual Purkinje Imaging (DPI) tracker. We find that resolution correlates significantly with RMS-S2S and, to a lesser extent, with STD. In addition, we find that although most eye-trackers can detect some small rotations of an artificial eye, rotations with amplitudes up to 2° are frequently measured erroneously by video-based eye-trackers. We show evidence that the corneal reflection (CR) feature of these eye-trackers is a major cause of erroneous measurements of small rotations of artificial eyes. Our data strengthen the existing body of evidence that video-based eye-trackers produce errors that may require that we reconsider some results from research on reading, microsaccades, and vergence, where the amplitudes of small eye movements have been measured with past or current video-based eye-trackers. In contrast, the DPI reports correct rotation amplitudes down to 1'.
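Because the abstract contrasts resolution with RMS-S2S and STD, the two conventional precision measures are reproduced below in compact form (standard definitions from the eye-tracking literature; the synthetic fixation data are invented). For uncorrelated white positional noise, RMS-S2S comes out roughly √2 times larger than STD, which is one reason the measures are related but not interchangeable.

```python
import numpy as np

def rms_s2s(x_deg, y_deg):
    """Root mean square of sample-to-sample displacements (deg)."""
    dx, dy = np.diff(x_deg), np.diff(y_deg)
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))

def std_precision(x_deg, y_deg):
    """Standard deviation of gaze samples around their mean position (deg)."""
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    return np.sqrt(np.mean((x - x.mean()) ** 2 + (y - y.mean()) ** 2))

# Synthetic fixation data: 0.02 deg Gaussian noise around a fixed point
rng = np.random.default_rng(0)
x = 5.0 + rng.normal(0, 0.02, 1000)
y = 3.0 + rng.normal(0, 0.02, 1000)
print(rms_s2s(x, y), std_precision(x, y))  # roughly 0.04 and 0.03 deg
```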
Affiliation(s)
- Kenneth Holmqvist: Institute of Psychology, Nicolaus Copernicus University in Torun, Toruń, Poland; Department of Psychology, Regensburg University, Regensburg, Germany; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa; Faculty of Arts, Masaryk University, Brno, Czech Republic
- Pieter Blignaut: Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
24. Hessels RS, Benjamins JS, van Doorn AJ, Koenderink JJ, Holleman GA, Hooge ITC. Looking behavior and potential human interactions during locomotion. J Vis 2020; 20:5. PMID: 33007079; PMCID: PMC7545070; DOI: 10.1167/jov.20.10.5.
Abstract
As humans move through parts of their environment, they meet others that may or may not try to interact with them. Where do people look when they meet others? We had participants wearing an eye tracker walk through a university building. On the way, they encountered nine "walkers." Walkers were instructed to, e.g., ignore the participant, greet him or her, or attempt to hand out a flyer. The participant's gaze was mostly directed to the currently relevant body parts of the walker. Thus, the participant's gaze depended on the walker's action. Individual differences in participants' looking behavior were consistent across walkers. Participants who did not respond to the walker seemed to look less at that walker, although this difference was not statistically significant. We suggest that models of gaze allocation should take social motivation into account.
Affiliation(s)
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, and Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Andrea J van Doorn: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jan J Koenderink: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Gijs A Holleman: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
25. Agtzidis I, Meyhöfer I, Dorr M, Lencer R. Following Forrest Gump: Smooth pursuit related brain activation during free movie viewing. Neuroimage 2020; 216:116491. DOI: 10.1016/j.neuroimage.2019.116491.
26.
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
27. Agtzidis I, Startsev M, Dorr M. Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching. J Eye Mov Res 2020; 13. PMID: 33828806; PMCID: PMC8005322; DOI: 10.16910/jemr.13.4.5.
Abstract
In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing either blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluation of 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all have a much larger room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
28. Evaluating three approaches to binary event-level agreement scoring. A reply to Friedman (2020). Behav Res Methods 2020; 53:325-334. PMID: 32705657; PMCID: PMC7880951; DOI: 10.3758/s13428-020-01425-0.
29. Brief communication: Three errors and two problems in a recent paper: gazeNet: End-to-end eye-movement event detection with deep neural networks (Zemblys, Niehorster, and Holmqvist, 2019). Behav Res Methods 2020; 52:1671-1680. PMID: 32291731; DOI: 10.3758/s13428-019-01342-x.
Abstract
Zemblys et al. (Behavior Research Methods, 51(2), 840-864, 2019) reported on a method for the classification of eye-movements ("gazeNet"). I have found three errors and two problems with that paper that are explained herein. Error 1: The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. Problem 1 has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. Problem 2 is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. Error 2: The gazeNet algorithm was trained on the Lund dataset, and then compared to other methods, not trained on this dataset, in terms of performance on this dataset. This is an inherently unfair comparison, and yet nowhere in the gazeNet paper is this unfairness mentioned. Error 3 arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower.
30
Hauperich AK, Young LK, Smithson HE. What makes a microsaccade? A review of 70 years of research prompts a new detection method. J Eye Mov Res 2020; 12. [PMID: 33828754] [PMCID: PMC7962681] [DOI: 10.16910/jemr.12.6.13]
Abstract
A new method for detecting microsaccades in eye-movement data is presented, following a review of reported microsaccade properties from the 1940s to today. The review focuses on the parameter ranges within which certain physical markers of microsaccades are thought to occur, as well as any features of microsaccades that have been stably reported over time. One feature of microsaccades, their binocularity, drives the new detection method. The binocular correlation method for microsaccade detection is validated on two datasets of binocular eye movements recorded using video-based systems: one collected as part of this study, and one from Nyström et al., 2017. Comparisons between detection methods are made using precision-recall statistics. These confirm that the binocular correlation method performs well when compared with manual coders and compares favourably with the commonly used Engbert & Kliegl (2003) method with subsequent modifications (Engbert & Mergenthaler, 2006). The binocular correlation microsaccade detection method is easy to implement, and MATLAB code is made available to download.
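The core idea of exploiting binocularity can be sketched as follows: within short sliding windows, left- and right-eye velocity traces should be strongly correlated during a genuine (micro)saccade, whereas monocular noise should not be. The window length and thresholds below are arbitrary illustrative choices, not the parameters proposed in the paper.

```python
# A minimal sketch of a binocularity-based microsaccade check, not the
# published implementation: flag samples whose surrounding window shows
# correlated, above-threshold velocity in both eyes.
import numpy as np

def binocular_candidates(vel_left, vel_right, fs, win_ms=10,
                         corr_thresh=0.8, vel_thresh=8.0):
    """Return indices of candidate microsaccade samples.
    vel_left, vel_right: signed velocities (deg/s); fs: sampling rate (Hz)."""
    half = max(1, int(fs * win_ms / 1000 / 2))
    candidates = []
    for i in range(half, len(vel_left) - half):
        l = vel_left[i - half:i + half + 1]
        r = vel_right[i - half:i + half + 1]
        if np.std(l) == 0 or np.std(r) == 0:
            continue                          # flat window, correlation undefined
        corr = np.corrcoef(l, r)[0, 1]
        if (corr > corr_thresh and
                abs(vel_left[i]) > vel_thresh and abs(vel_right[i]) > vel_thresh):
            candidates.append(i)
    return np.array(candidates)
```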
31
Mardanbegi D, Wilcockson TDW, Killick R, Xia B, Gellersen H, Sawyer P, Crawford TJ. A comparison of post-saccadic oscillations in European-Born and China-Born British University Undergraduates. PLoS One 2020; 15:e0229177. [PMID: 32097447] [PMCID: PMC7041864] [DOI: 10.1371/journal.pone.0229177]
Abstract
Previous research has revealed that people from different genetic, racial, biological, and/or cultural backgrounds may display fundamental differences in eye-tracking behavior. These differences may have a cognitive origin, they may lie at a lower level within the neurophysiology of the oculomotor network, or they may be related to environmental factors. In this paper we investigated one of the physiological aspects of eye movements, known as post-saccadic oscillations, and show that this type of eye movement differs markedly between two populations. We compared the post-saccadic oscillations recorded by a video-based eye tracker between two groups of participants: European-born and Chinese-born British students. We recorded eye movements during a prosaccade task from a group of 42 Caucasians, defined as White British or White European, and 52 Chinese-born participants, all aged 18 to 36. The post-saccadic oscillations were extracted from the gaze data and compared between the two groups in terms of their first overshoot and undershoot. The results revealed that the shape of the post-saccadic oscillations varied significantly between the two groups, which may indicate a difference in a multitude of genetic, cultural, physiologic, anatomical or environmental factors. We further show that the differences in the post-saccadic oscillations could influence oculomotor characteristics such as saccade duration. We conclude that genetic, racial, biological, and/or cultural differences can affect the morphology of the recorded eye-movement data and should be considered when studying eye movements and oculomotor fixation and saccadic behaviors.
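As a rough illustration of the kind of measurement being compared, the sketch below characterises the peak overshoot and undershoot of a post-saccadic oscillation relative to the position at which the eye settles. It is a simplified stand-in, not the authors' extraction pipeline.

```python
# Hedged sketch: peak overshoot/undershoot of a post-saccadic oscillation,
# measured relative to the position the eye settles at after the saccade.
import numpy as np

def overshoot_undershoot(position, saccade_offset, window=50):
    """position: 1-D gaze trace (deg); saccade_offset: index where the main
    saccade ends. Returns (overshoot, undershoot) amplitudes in deg."""
    post = np.asarray(position[saccade_offset:saccade_offset + window], dtype=float)
    settled = np.median(post[-10:])               # rough estimate of the final position
    dev = post - settled                          # deviation from the settled position
    direction = 1.0 if dev[0] >= 0 else -1.0      # direction of the initial excursion
    overshoot = float(np.max(direction * dev))    # peak excursion beyond the settled position
    undershoot = float(-np.min(direction * dev))  # peak rebound in the opposite direction
    return overshoot, undershoot
```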
Affiliation(s)
- Diako Mardanbegi
- School of Computing and Communications, Lancaster University, Lancaster, United Kingdom
- Thomas D. W. Wilcockson
- School of Sport, Exercise, and Health Science, Loughborough University, Loughborough, United Kingdom
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Rebecca Killick
- Department of Mathematics and Statistics, Lancaster University, Lancaster, United Kingdom
- Baiqiang Xia
- School of Computing and Communications, Lancaster University, Lancaster, United Kingdom
- Hans Gellersen
- School of Computing and Communications, Lancaster University, Lancaster, United Kingdom
- Peter Sawyer
- School of Engineering and Applied Science, Aston University, Birmingham, United Kingdom
- Trevor J. Crawford
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
32
Kothari R, Yang Z, Kanan C, Bailey R, Pelz JB, Diaz GJ. Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities. Sci Rep 2020; 10:2539. [PMID: 32054884] [PMCID: PMC7018838] [DOI: 10.1038/s41598-020-59251-5]
Abstract
The study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in the development of algorithms for the categorization of gaze events (e.g. fixations, pursuits, saccades, gaze shifts) while the head is free and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye + head movements while subjects performed everyday tasks wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 (sample-based Cohen's κ). These labelled data were used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~87% of human performance in detecting fixations and saccades but fall short (50%) in detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
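The feature-importance result, that classification can be driven by the magnitudes of eye and head velocity alone, can be sketched with a generic Random Forest as below. The window-based features, label codes, and variable names are placeholders, not the GW training pipeline.

```python
# Hedged sketch: classifying gaze events from |eye velocity| and |head velocity|
# with a generic Random Forest. Feature windows and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(eye_speed, head_speed, win=11):
    """Stack simple statistics of eye/head speed in a sliding window (win samples)."""
    half = win // 2
    feats = []
    for i in range(half, len(eye_speed) - half):
        e = eye_speed[i - half:i + half + 1]
        h = head_speed[i - half:i + half + 1]
        feats.append([e.mean(), e.max(), e.std(), h.mean(), h.max(), h.std()])
    return np.array(feats)

# eye_speed, head_speed: per-sample speed magnitudes in deg/s; labels: per-sample
# event codes (e.g. 0 = fixation, 1 = saccade, 2 = pursuit), trimmed by half a
# window on each side so lengths match (labels[5:-5] for win=11).
# clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
# clf.fit(window_features(eye_speed, head_speed), labels[5:-5])
# predicted = clf.predict(window_features(new_eye_speed, new_head_speed))
```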
Affiliation(s)
- Rakshit Kothari
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Zhizhuo Yang
- Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Christopher Kanan
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Reynold Bailey
- Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Jeff B Pelz
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Gabriel J Diaz
- Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
33
Wadehn F, Weber T, Mack DJ, Heldt T, Loeliger HA. Model-Based Separation, Detection, and Classification of Eye Movements. IEEE Trans Biomed Eng 2020; 67:588-600. [PMID: 31150326] [DOI: 10.1109/tbme.2019.2918986]
Abstract
OBJECTIVE: We present a physiologically motivated eye movement analysis framework for model-based separation, detection, and classification (MBSDC) of eye movements. By estimating kinematic and neural controller signals for saccades, smooth pursuit, and fixational eye movements in a mechanistic model of the oculomotor system, we are able to separate and analyze these eye movements independently. METHODS: We extended an established oculomotor model for horizontal eye movements with neural controller signals and a blink artifact model. To estimate kinematic signals (position, velocity, acceleration, forces) and neural controller signals from eye position data, we employ Kalman smoothing and sparse input estimation techniques. The estimated signals are used for detecting saccade start and end points, and for classifying the recording into saccades, smooth pursuit, fixations, post-saccadic oscillations, and blinks. RESULTS: On simulated data, the reconstruction error of the velocity profiles is about half the error obtained with the commonly employed approach of filtering and numerical differentiation. In experiments with smooth pursuit data from human subjects, we observe an accurate signal separation. In addition, in neural recordings from non-human primates, the estimated neural controller signals match the real recordings strikingly well. SIGNIFICANCE: The MBSDC framework enables the analysis of multi-type eye movement recordings, provides a physiologically motivated approach to studying motor commands, and might aid the discovery of new digital biomarkers. CONCLUSION: The proposed framework provides a model-based approach for a wide variety of eye movement analysis tasks.
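For readers unfamiliar with the smoothing step, the sketch below shows a generic constant-acceleration Kalman (RTS) smoother that recovers position, velocity, and acceleration from a noisy 1-D position trace. It illustrates the smoothing idea only; the oculomotor model, controller inputs, and noise settings of the MBSDC framework are not reproduced here.

```python
# Generic constant-acceleration RTS smoother for a 1-D position trace.
# A hedged illustration of Kalman smoothing, not the MBSDC estimator.
import numpy as np

def rts_smooth(z, dt, q=1e3, r=0.01):
    """z: noisy positions (deg); dt: sample interval (s).
    Returns smoothed [position, velocity, acceleration] per sample."""
    F = np.array([[1, dt, 0.5 * dt ** 2], [0, 1, dt], [0, 0, 1]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.diag([dt ** 3, dt ** 2, dt])      # crude process-noise scaling
    R = np.array([[r]])
    n = len(z)
    x = np.zeros((n, 3)); P = np.zeros((n, 3, 3))
    xp = np.zeros((n, 3)); Pp = np.zeros((n, 3, 3))
    x_prev, P_prev = np.array([z[0], 0.0, 0.0]), np.eye(3)
    for k in range(n):                           # forward Kalman filter
        xp[k] = F @ x_prev
        Pp[k] = F @ P_prev @ F.T + Q
        S = H @ Pp[k] @ H.T + R
        K = Pp[k] @ H.T @ np.linalg.inv(S)
        x[k] = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()
        P[k] = (np.eye(3) - K @ H) @ Pp[k]
        x_prev, P_prev = x[k], P[k]
    xs, Ps = x.copy(), P.copy()
    for k in range(n - 2, -1, -1):               # backward RTS smoothing pass
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = x[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = P[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs
```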
34
Startsev M, Agtzidis I, Dorr M. Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes. J Vis 2019; 19:10. [DOI: 10.1167/19.14.10]
Affiliation(s)
- Mikhail Startsev
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Ioannis Agtzidis
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Michael Dorr
- Human-Machine Communication, Technical University of Munich, Munich, Germany
35
Hessels RS, Hooge ITC. Eye tracking in developmental cognitive neuroscience - The good, the bad and the ugly. Dev Cogn Neurosci 2019; 40:100710. [PMID: 31593909] [PMCID: PMC6974897] [DOI: 10.1016/j.dcn.2019.100710]
Abstract
Eye tracking is a popular research tool in developmental cognitive neuroscience for studying the development of perceptual and cognitive processes. However, eye tracking in the context of development is also challenging. In this paper, we ask how knowledge on eye-tracking data quality can be used to improve eye-tracking recordings and analyses in longitudinal research so that valid conclusions about child development may be drawn. We answer this question by adopting the data-quality perspective and surveying the eye-tracking setup, training protocols, and data analysis of the YOUth study (investigating neurocognitive development of 6000 children). We first show how our eye-tracking setup has been optimized for recording high-quality eye-tracking data. Second, we show that eye-tracking data quality can be operator-dependent even after a thorough training protocol. Finally, we report distributions of eye-tracking data quality measures for four age groups (5 months, 10 months, 3 years, and 9 years), based on 1531 recordings. We end with advice for (prospective) developmental eye-tracking researchers and generalizations to other methodologies.
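The two data-quality measures at issue are commonly operationalised roughly as follows: accuracy as the mean angular offset between gaze and a known target, and precision as the root-mean-square of sample-to-sample differences during steady fixation. The sketch below uses these standard definitions and is not the YOUth processing pipeline.

```python
# Common operationalisations of eye-tracking data quality (sketch only):
# accuracy = mean offset from a known target position,
# precision = RMS of sample-to-sample differences during steady fixation.
import numpy as np

def accuracy_deg(gaze_x, gaze_y, target_x, target_y):
    """Mean Euclidean offset (deg) between gaze samples and the target."""
    return float(np.mean(np.hypot(gaze_x - target_x, gaze_y - target_y)))

def rms_s2s_precision_deg(gaze_x, gaze_y):
    """RMS of sample-to-sample displacement (deg) within a fixation episode."""
    dx, dy = np.diff(gaze_x), np.diff(gaze_y)
    return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))
```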
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
36
A novel evaluation of two related and two independent algorithms for eye movement classification during reading. Behav Res Methods 2019; 50:1374-1397. [PMID: 29766396] [DOI: 10.3758/s13428-018-1050-7]
Abstract
Nyström and Holmqvist have published a method for the classification of eye movements during reading (the ONH; Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting post-saccadic oscillations (PSOs), and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted independently by three expert raters, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not; however, it fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more recent, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
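As a point of reference for what a fixed-threshold approach looks like, the sketch below marks saccades wherever velocity exceeds a fixed peak threshold and then walks outward to a lower onset/offset threshold. The threshold values are illustrative placeholders and the code is not the MNH itself.

```python
# Illustrative fixed-threshold saccade detector (not the MNH):
# velocity above a fixed peak threshold marks a saccade; start/end are
# found by walking out to where velocity drops below a lower threshold.
import numpy as np

def detect_saccades(velocity, peak_thresh=100.0, onset_thresh=30.0):
    """velocity: absolute gaze velocity in deg/s. Returns (start, end) index pairs."""
    saccades, i, n = [], 0, len(velocity)
    while i < n:
        if velocity[i] > peak_thresh:
            start = i
            while start > 0 and velocity[start - 1] > onset_thresh:
                start -= 1                     # walk back to the saccade onset
            end = i
            while end < n - 1 and velocity[end + 1] > onset_thresh:
                end += 1                       # walk forward to the saccade offset
            saccades.append((start, end))
            i = end + 1
        else:
            i += 1
    return saccades
```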
37
1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behav Res Methods 2018; 51:556-572. [DOI: 10.3758/s13428-018-1144-2]
38
39
Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. R Soc Open Sci 2018; 5:180502. [PMID: 30225041] [PMCID: PMC6124022] [DOI: 10.1098/rsos.180502]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field, by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slow or fast; (ii) the functional component: what purposes does the eye movement (or lack thereof) serve; (iii) the coordinate system used: relative to what does the eye move; (iv) the computational definition: how is the event represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
40
Dalrymple KA, Manner MD, Harmelink KA, Teska EP, Elison JT. An Examination of Recording Accuracy and Precision From Eye Tracking Data From Toddlerhood to Adulthood. Front Psychol 2018; 9:803. [PMID: 29875727] [PMCID: PMC5974590] [DOI: 10.3389/fpsyg.2018.00803]
Abstract
The quantitative assessment of eye tracking data quality is critical for ensuring accuracy and precision of gaze position measurements. However, researchers often report the eye tracker's optimal manufacturer's specifications rather than empirical data about the accuracy and precision of the eye tracking data being presented. Indeed, a recent report indicates that less than half of eye tracking researchers surveyed take the eye tracker's accuracy into account when determining areas of interest for analysis, an oversight that could impact the validity of reported results and conclusions. Accordingly, we designed a calibration verification protocol to augment independent quality assessment of eye tracking data and examined whether accuracy and precision varied between three age groups of participants. We also examined the degree to which our externally quantified quality assurance metrics aligned with those reported by the manufacturer. We collected data in standard laboratory conditions to demonstrate our method, to illustrate how data quality can vary with participant age, and to give a simple example of the degree to which data quality can differ from manufacturer reported values. In the sample data we collected, accuracy for adults was within the range advertised by the manufacturer, but for school-aged children, accuracy and precision measures were outside this range. Data from toddlers were less accurate and less precise than data from adults. Based on an a priori inclusion criterion, we determined that we could exclude approximately 20% of toddler participants for poor calibration quality quantified using our calibration assessment protocol. We recommend implementing and reporting quality assessment protocols for any eye tracking tasks with participants of any age or developmental ability. We conclude with general observations about our data, recommendations for what factors to consider when establishing data inclusion criteria, and suggestions for stimulus design that can help accommodate variability in calibration. The methods outlined here may be particularly useful for developmental psychologists who use eye tracking as a tool, but who are not experts in eye tracking per se. The calibration verification stimuli and data processing scripts that we developed, along with step-by-step instructions, are freely available for other researchers.
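One way to turn a calibration-verification protocol into an a priori inclusion criterion is sketched below: compute a robust gaze-to-target offset for each verification point and exclude a recording if the mean error exceeds a preset limit. The data layout and the 2-degree limit are placeholders, not the values used in the study.

```python
# Hedged sketch of an a priori calibration-quality inclusion criterion:
# exclude a recording if its mean offset across verification targets
# exceeds a preset limit. Threshold and data layout are placeholders.
import numpy as np

def passes_calibration_check(gaze_by_target, target_positions, max_error_deg=2.0):
    """gaze_by_target: list of (n_i, 2) arrays of gaze samples per target (deg);
    target_positions: (k, 2) array of target locations (deg).
    Returns (include_flag, mean_error_deg)."""
    errors = []
    for samples, target in zip(gaze_by_target, target_positions):
        offsets = np.hypot(samples[:, 0] - target[0], samples[:, 1] - target[1])
        errors.append(np.median(offsets))      # robust per-target error
    mean_error = float(np.mean(errors))
    return mean_error <= max_error_deg, mean_error
```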
Affiliation(s)
- Kirsten A. Dalrymple
- Institute of Child Development, College of Education and Human Development, University of Minnesota, Minneapolis, MN, United States
- Marie D. Manner
- Department of Computer Science & Engineering, College of Science & Engineering, University of Minnesota, Minneapolis, MN, United States
- Katherine A. Harmelink
- Department of School Counseling, School of Education, North Dakota State University, Fargo, ND, United States
- Elayne P. Teska
- Department of Comparative Human Development, University of Chicago, Chicago, IL, United States
- Jed T. Elison
- Institute of Child Development, College of Education and Human Development, University of Minnesota, Minneapolis, MN, United States
- Department of Pediatrics, University of Minnesota Medical School, University of Minnesota, Minneapolis, MN, United States