51
Imaoka Y, Flury A, de Bruin ED. Assessing Saccadic Eye Movements With Head-Mounted Display Virtual Reality Technology. Front Psychiatry 2020; 11:572938. [PMID: 33093838] [PMCID: PMC7527608] [DOI: 10.3389/fpsyt.2020.572938]
Abstract
As our society ages globally, neurodegenerative disorders are becoming a relevant issue. Assessment of saccadic eye movement could provide objective values that help to understand the symptoms of such disorders. HTC Corporation launched a new virtual reality (VR) headset, VIVE Pro Eye, implementing an infrared-based eye-tracking technique together with VR technology. The purpose of this study was to evaluate whether the device can be used as an assessment tool for saccadic eye movement and to investigate the technical features of its eye tracking. We developed a measurement system for saccadic eye movement in a simple VR environment on the Unity VR design platform, following an internationally proposed standard saccade measurement protocol. We then measured the saccadic eye movements of seven healthy young adults to analyze the oculometrics of latency, peak velocity, and error rate in pro- and anti-saccade tasks (120 trials per task). We calculated these parameters with a saccade detection algorithm that we developed following previous studies. Our results revealed a latency of 220.40 ± 43.16 ms, a peak velocity of 357.90 ± 111.99°/s, and an error rate of 0.24 ± 0.41% for the pro-saccade task, and a latency of 343.35 ± 76.42 ms, a peak velocity of 318.79 ± 116.69°/s, and an error rate of 0.66 ± 0.76% for the anti-saccade task. In addition, we observed pupil diameters of 4.30 ± 1.15 mm (left eye) and 4.29 ± 1.08 mm (right eye) for the pro-saccade task, and of 4.21 ± 1.04 mm (left eye) and 4.22 ± 0.97 mm (right eye) for the anti-saccade task. Comparison of descriptive statistics from previous studies with our results suggests that the VIVE Pro Eye can function as an assessment tool for saccadic eye movement, since our results are within or close to the ranges reported previously. Nonetheless, we found technical limitations, especially regarding time-related measurement parameters. Further improvements in the software and hardware of the device and in the measurement protocol, as well as more measurements with diverse age groups and people with different health conditions, are warranted to enhance the whole assessment system for saccadic eye movement.
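The latency and peak-velocity oculometrics reported above can be derived from raw gaze samples with a simple velocity criterion. A minimal sketch of the idea in Python (the function name, the 30°/s onset threshold, and the single-axis input are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def prosaccade_metrics(t, gaze_deg, stim_onset, vel_thresh=30.0):
    """Latency (ms) and peak velocity (deg/s) of the first saccade after stimulus onset.

    t: sample timestamps in seconds; gaze_deg: horizontal gaze angle in degrees.
    The 30 deg/s onset threshold is an assumed value, not the paper's.
    """
    vel = np.gradient(gaze_deg, t)                      # sample-wise velocity, deg/s
    candidates = np.flatnonzero((t >= stim_onset) & (np.abs(vel) > vel_thresh))
    if candidates.size == 0:
        return None                                     # no saccade found
    onset = candidates[0]
    latency_ms = (t[onset] - stim_onset) * 1000.0
    peak_velocity = float(np.max(np.abs(vel[onset:])))
    return latency_ms, peak_velocity
```

For the error rate, each trial would additionally be scored by whether the first saccade's direction matches (pro-saccade) or opposes (anti-saccade) the stimulus side.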
Affiliation(s)
- Yu Imaoka
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Andri Flury
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Eling D de Bruin
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Division of Physiotherapy, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Stockholm, Sweden
52
Startsev M, Agtzidis I, Dorr M. Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes. J Vis 2019; 19:10. [DOI: 10.1167/19.14.10]
Affiliation(s)
- Mikhail Startsev
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Ioannis Agtzidis
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Michael Dorr
- Human-Machine Communication, Technical University of Munich, Munich, Germany
53
Abstract
This work presents a visual analytics approach to explore microsaccade distributions in high-frequency eye tracking data. Research studies often apply filter algorithms and parameter values for microsaccade detection. Even when the same algorithms are employed, different parameter values might be adopted across different studies. In this paper, we present a visual analytics system (VisME) to promote reproducibility in the data analysis of microsaccades. It allows users to interactively vary the parametric values for microsaccade filters and evaluate the resulting influence on microsaccade behavior across individuals and on a group level. In particular, we exploit brushing-and-linking techniques that allow the microsaccadic properties of space, time, and movement direction to be extracted, visualized, and compared across multiple views. We demonstrate in a case study the use of our visual analytics system on data sets collected from natural scene viewing and show in a qualitative usability study the usefulness of this approach for eye tracking researchers. We believe that interactive tools such as VisME will promote greater transparency in eye movement research by providing researchers with the ability to easily understand complex eye tracking data sets; such tools can also serve as teaching systems. VisME is provided as open source software.
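The microsaccade filters such a tool varies commonly build on the Engbert–Kliegl velocity-threshold scheme: estimate the velocity noise robustly, then flag samples whose velocity exceeds an elliptic threshold of λ times that estimate. A minimal sketch (λ = 6 and the 3-sample minimum duration are common defaults, used here as assumptions, not values from the paper):

```python
import numpy as np

def microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Engbert-Kliegl style detection: elliptic velocity threshold at lam * noise SD.

    x, y: gaze positions in degrees; fs: sampling rate in Hz.
    Returns (start, end) sample indices of detected events.
    """
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    # Median-based noise estimate, robust to the (micro)saccades themselves
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    events, start = [], None
    for i, over in enumerate(np.append(crit, False)):
        if over and start is None:
            start = i
        elif not over and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    return events
```

Interactively varying `lam` and `min_samples` and inspecting the resulting events is exactly the kind of parameter exploration whose consequences the paper argues should be made transparent.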
54
Abstract
Recent applications of eye tracking for diagnosis, prognosis and follow-up of therapy in age-related neurological or psychological deficits have been reviewed. The review is focused on active aging, neurodegeneration and cognitive impairments. The potential impacts and current limitations of using characterizing features of eye movements and pupillary responses (oculometrics) as objective biomarkers in the context of aging are discussed. A closer look into the findings, especially with respect to cognitive impairments, suggests that eye tracking is an invaluable technique to study hidden aspects of aging that have not been revealed using any other noninvasive tool. Future research should involve a wider variety of oculometrics, in addition to saccadic metrics and pupillary responses, including nonlinear and combinatorial features as well as blink- and fixation-related metrics to develop biomarkers to trace age-related irregularities associated with cognitive and neural deficits.
Affiliation(s)
- Ramtin Z Marandi
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
- Parisa Gazerani
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
55
Ehinger BV, Groß K, Ibs I, König P. A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000. PeerJ 2019; 7:e7086. [PMID: 31328028] [PMCID: PMC6625505] [DOI: 10.7717/peerj.7086]
Abstract
Eye-tracking experiments rely heavily on good eye-tracker data quality. Unfortunately, often only spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to serve as a benchmark for an eye-tracker: eye-tracking quality deteriorates during an experimental session due to head movements, changing illumination, or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements, for instance smooth pursuit movements, blinks, or microsaccades, which cannot readily be evaluated using spatial accuracy or precision alone. To obtain a more comprehensive description of properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking related measures such as the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movement, microsaccade classification, blink classification, and the influence of head motion. For some measures, true theoretical values exist; for others, a relative comparison to a reference eye-tracker is needed. Therefore, we collected our gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 eye-tracker was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we detected fewer fixations and shorter saccade durations for the Pupil Labs glasses, and similarly fewer microsaccades. The accuracy over time decayed only slightly for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, we observed that the measured pupil diameters differed between eye-trackers at the individual-subject level but not at the group level. To conclude, our eye-tracking test battery offers 10 tasks that allow us to benchmark the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs' eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp).
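Spatial accuracy and precision as compared here have simple, standard definitions: the mean angular offset of fixation samples from a known target, and the RMS of sample-to-sample dispersion. A sketch of how values such as the 0.57° vs. 0.82° accuracies are typically computed (function name and interface are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def accuracy_precision(gaze_x, gaze_y, target_x, target_y):
    """Accuracy (mean angular offset from target) and precision (RMS intersample
    distance) of fixation samples, all coordinates in degrees of visual angle."""
    dx = np.asarray(gaze_x, float) - target_x
    dy = np.asarray(gaze_y, float) - target_y
    accuracy = float(np.mean(np.hypot(dx, dy)))          # mean offset from target
    step = np.hypot(np.diff(gaze_x), np.diff(gaze_y))    # intersample distances
    precision_rms = float(np.sqrt(np.mean(step**2)))
    return accuracy, precision_rms
```

Accuracy decay over a session, as measured in the battery, amounts to recomputing this per validation block and tracking the trend.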
Affiliation(s)
- Benedikt V. Ehinger
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Katharina Groß
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Inga Ibs
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
56
A novel evaluation of two related and two independent algorithms for eye movement classification during reading. Behav Res Methods 2019; 50:1374-1397. [PMID: 29766396] [DOI: 10.3758/s13428-018-1050-7]
Abstract
Nyström and Holmqvist published a method for the classification of eye movements during reading (the ONH algorithm; Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade were determined, (5) employing a new algorithm for detecting post-saccadic oscillations (PSOs), and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not; however, the MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more recent, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
57
Stuart S, Parrington L, Martini D, Popa B, Fino PC, King LA. Validation of a velocity-based algorithm to quantify saccades during walking and turning in mild traumatic brain injury and healthy controls. Physiol Meas 2019; 40:044006. [PMID: 30943463] [PMCID: PMC7608620] [DOI: 10.1088/1361-6579/ab159d]
Abstract
OBJECTIVE: Saccadic (fast) eye movements are a routine aspect of neurological examination and are a potential biomarker of mild traumatic brain injury (mTBI). Objective measurement of saccades has become a prominent focus of mTBI research, as eye movements may be a useful assessment tool for deficits in neural structures or processes. However, saccadic measurement within mobile infra-red (IR) eye-tracker raw data requires a valid algorithm. The objective of this study was to validate a velocity-based algorithm for saccade detection in IR eye-tracking raw data during walking (straight ahead and while turning) in people with mTBI and healthy controls.
APPROACH: Eye-tracking via a mobile IR Tobii Pro Glasses 2 eye-tracker (100 Hz) was performed in people with mTBI (n = 10) and healthy controls (n = 10). Participants completed two walking tasks: straight walking (walking back and forth for 1 min over a 10 m distance) and walking and turning (a turns course including 45°, 90° and 135° turns). Five trials per subject, for one hundred total trials, were completed. A previously reported velocity-based saccade detection algorithm was adapted and validated by assessing agreement between algorithm saccade detections and the number of correct saccade detections determined from manual video inspection (ground-truth reference).
MAIN RESULTS: Compared with video inspection, the IR algorithm detected ~97% (n = 4888) and ~95% (n = 3699) of saccades made by people with mTBI and controls, respectively, with excellent agreement to the ground truth (intra-class correlation coefficient ICC(2,1) = .979 to .999).
SIGNIFICANCE: This study provides a simple yet highly robust algorithm for the processing of mobile eye-tracker raw data in mTBI and controls. Future studies may consider validating this algorithm with other IR eye-trackers and populations.
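A velocity-based detector of the kind validated here can be sketched in a few lines: threshold the angular velocity, merge supra-threshold runs separated by brief gaps (tracker dropouts mid-saccade), and discard runs shorter than a minimum duration. The thresholds below (240°/s, 10 ms minimum duration, 20 ms merge gap) are illustrative assumptions for 100 Hz mobile data, not the study's validated parameters:

```python
import numpy as np

def detect_saccades(gaze_deg, fs=100, vel_thresh=240.0, min_dur_ms=10, merge_ms=20):
    """Return (start_idx, end_idx) sample ranges where velocity exceeds vel_thresh."""
    vel = np.abs(np.gradient(gaze_deg)) * fs            # angular velocity, deg/s
    over = vel > vel_thresh
    runs, start = [], None                              # collect supra-threshold runs
    for i, flag in enumerate(np.append(over, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append([start, i - 1])
            start = None
    merged = []                                         # bridge brief gaps between runs
    gap = int(round(merge_ms / 1000 * fs))
    for r in runs:
        if merged and r[0] - merged[-1][1] <= gap:
            merged[-1][1] = r[1]
        else:
            merged.append(r)
    min_len = max(1, int(round(min_dur_ms / 1000 * fs)))
    return [(s, e) for s, e in merged if e - s + 1 >= min_len]
```

Validation as in the paper would then compare these detections, trial by trial, against manually coded events from the scene video.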
Affiliation(s)
- Samuel Stuart
- Department of Neurology, Oregon Health and Science University, Portland, OR, United States of America
- Veterans Affairs Portland Health Care System, Portland, OR, United States of America (author to whom any correspondence should be addressed)
58
Stuart S, Hickey A, Vitorio R, Welman K, Foo S, Keen D, Godfrey A. Eye-tracker algorithms to detect saccades during static and dynamic tasks: a structured review. Physiol Meas 2019; 40:02TR01. [DOI: 10.1088/1361-6579/ab02ab]
59
Abstract
Eye-trackers are a popular tool for studying cognitive, emotional, and attentional processes in different populations (e.g., clinical and typically developing) and participants of all ages, ranging from infants to the elderly. This broad range of processes and populations implies that there are many inter- and intra-individual differences that need to be taken into account when analyzing eye-tracking data. Standard parsing algorithms supplied by the eye-tracker manufacturers are typically optimized for adults and do not account for these individual differences. This paper presents gazepath, an easy-to-use R-package that comes with a graphical user interface (GUI) implemented in Shiny (RStudio Inc 2015). The gazepath R-package combines solutions from the adult and infant literature to provide an eye-tracking parsing method that accounts for individual differences and differences in data quality. We illustrate the usefulness of gazepath with three examples of different data sets. The first example shows how gazepath performs on free-viewing data of infants and adults, compared to standard EyeLink parsing. We show that gazepath controls for spurious correlations between fixation durations and data quality in infant data. The second example shows that gazepath performs well in high-quality reading data of adults. The third and last example shows that gazepath can also be used on noisy infant data collected with a Tobii eye-tracker and low (60 Hz) sampling rate.
60
Costela FM, Woods RL. When Watching Video, Many Saccades Are Curved and Deviate From a Velocity Profile Model. Front Neurosci 2019; 12:960. [PMID: 30666178] [PMCID: PMC6330331] [DOI: 10.3389/fnins.2018.00960]
Abstract
Commonly, saccades are thought to be ballistic eye movements, not modified during flight, with a straight path and a well-described velocity profile. However, they do not always follow a straight path, and studies of saccade curvature have been reported previously. In a prior study, we developed a real-time saccade-trajectory prediction algorithm to improve the updating of gaze-contingent displays and found that saccades with a curved path, or that deviated from the expected velocity profile, were not well fit by our saccade-prediction algorithm (velocity-profile deviation) and thus had larger updating errors than saccades with a straight path and a velocity profile that the model fit well. Further, we noticed that curved saccades and saccades with high velocity-profile deviations were more common than we had expected when participants performed a natural-viewing task. Since those saccades caused larger display-updating errors, we sought a better understanding of them. Here we examine factors that could affect the curvature and velocity profile of saccades using a pool of 218,744 saccades from 71 participants watching "Hollywood" video clips. Those factors included characteristics of the participants (e.g., age), of the videos (importance of faces for following the story, genre), of the saccade (e.g., magnitude, direction), time during the session (e.g., fatigue), and the presence and timing of scene cuts. While viewing the video clips, saccades were more likely to be horizontal or vertical than oblique. Measured curvature and velocity-profile deviation had continuous, skewed frequency distributions. We used mixed-effects regression models that included cubic terms and found a complex relationship between curvature, velocity-profile deviation, and saccade duration (or magnitude). Curvature and velocity-profile deviation were related to some video-dependent features such as lighting, face presence, or nature and human-figure content. Time during the session was a predictor of velocity-profile deviations. Further, in univariable models, saccades that were in flight at the time of a scene cut had higher velocity-profile deviations and lower curvature. Saccade characteristics vary with a variety of factors, which suggests complex interactions between oculomotor control and scene content that could be explored further.
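Curvature in this sense can be quantified as the largest perpendicular deviation of the trajectory from the straight start-to-end chord, normalized by saccade amplitude. A minimal sketch of that geometric definition (a common convention, not necessarily the exact metric used in the study):

```python
import numpy as np

def curvature(x, y):
    """Max perpendicular deviation of a saccade path from its straight chord,
    expressed as a fraction of the saccade amplitude."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    amp = np.hypot(*chord)
    if amp == 0.0:
        return 0.0
    # signed perpendicular distance of every sample from the chord (2-D cross product)
    d = ((x - x[0]) * chord[1] - (y - y[0]) * chord[0]) / amp
    return float(np.max(np.abs(d)) / amp)
```

A perfectly straight saccade scores 0; a path that bows out by half its amplitude scores 0.5.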
Affiliation(s)
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
- Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
61
Bellet ME, Bellet J, Nienborg H, Hafed ZM, Berens P. Human-level saccade detection performance using deep neural networks. J Neurophysiol 2018; 121:646-661. [PMID: 30565968] [DOI: 10.1152/jn.00601.2018]
Abstract
Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, like smooth pursuit, or when the saccade amplitude is close to eye-tracker noise levels, as with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses the state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing.
NEW & NOTEWORTHY: Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
Affiliation(s)
- Marie E Bellet
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Joachim Bellet
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- International Max Planck Research School for Cognitive and Systems Neuroscience, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Hendrikje Nienborg
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Ziad M Hafed
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
62
Wadehn F, Mack DJ, Weber T, Loeliger HA. Estimation of Neural Inputs and Detection of Saccades and Smooth Pursuit Eye Movements by Sparse Bayesian Learning. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:2619-2622. [PMID: 30440945] [DOI: 10.1109/embc.2018.8512758]
Abstract
Eye movements reveal a great wealth of information about the visual system and the brain. Therefore, eye movements can serve as diagnostic markers for various neurological disorders. For an objective analysis, it is crucial to have an automatic and robust procedure to extract relevant eye movement parameters. An essential step towards this goal is to detect and separate different types of eye movements such as fixations, saccades and smooth pursuit. We have developed a model-based approach to perform signal detection and separation on eye movement recordings, using source separation techniques from sparse Bayesian learning. The key idea is to model the oculomotor system with a state space model and to perform signal separation in the neural domain by estimating sparse inputs which trigger saccades. The algorithm was evaluated on synthetic data, neural recordings from rhesus monkeys and on manually annotated human eye movement recordings with different smooth pursuit paradigms. The developed approach shows a high noise-robustness, provides saccade and smooth pursuit parameters, as well as estimates of the position, velocity and acceleration profiles. In addition, by estimating the input to the oculomotor system, we obtain an estimate of the neural inputs to the oculomotor muscles.
63
1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behav Res Methods 2018; 51:556-572. [DOI: 10.3758/s13428-018-1144-2]
64
Topalli D, Cagiltay NE. Eye-Hand Coordination Patterns of Intermediate and Novice Surgeons in a Simulation-Based Endoscopic Surgery Training Environment. J Eye Mov Res 2018; 11. [PMID: 33828711] [PMCID: PMC7906001] [DOI: 10.16910/jemr.11.6.1]
Abstract
Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face problems in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study was conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios were developed in a simulation environment to gather the participants' eye-gaze data with the help of an eye tracker, as well as the related hand-movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills were analyzed. The results indicate higher correlations in the intermediates' eye-hand movements compared to the novices: an increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improved on the specific task targeted in this study. According to these results, the proposed metrics can potentially provide additional insights into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
66
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Is human classification by experienced untrained observers a gold standard in fixation detection? Behav Res Methods 2018; 50:1864-1881. [PMID: 29052166] [PMCID: PMC7875941] [DOI: 10.3758/s13428-017-0955-x]
Abstract
Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the classifications of the humans agreed near perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
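The proposed relative timing offset (RTO) compares when two classifications place event boundaries rather than merely how many samples they agree on. A sketch of the idea, matching each event in one coding to its maximally overlapping counterpart in the other and averaging onset/offset differences (the matching rule here is an assumption, not the authors' exact formulation):

```python
import numpy as np

def relative_timing_offset(events_a, events_b):
    """Mean onset and offset differences (B minus A, in the events' time units)
    between two coders' event lists, each event given as (start, end)."""
    onset_diffs, offset_diffs = [], []
    for sa, ea in events_a:
        overlaps = [min(ea, eb) - max(sa, sb) for sb, eb in events_b]
        if not overlaps or max(overlaps) <= 0:
            continue                         # unmatched event in A: skip
        sb, eb = events_b[int(np.argmax(overlaps))]
        onset_diffs.append(sb - sa)
        offset_diffs.append(eb - ea)
    return float(np.mean(onset_diffs)), float(np.mean(offset_diffs))
```

A systematic positive onset offset would indicate that coder B consistently starts fixations later than coder A, the kind of implicit-threshold difference the paper describes.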
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Richard Andersson
- Eye Information Group, IT University of Copenhagen, Copenhagen, Denmark
- Department of Philosophy and Cognitive Sciences, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
67
Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. R Soc Open Sci 2018; 5:180502. [PMID: 30225041] [PMCID: PMC6124022] [DOI: 10.1098/rsos.180502]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field by surveying 124 eye-movement researchers. These researchers held a variety of definitions of fixations and saccades, whose breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slowly or fast; (ii) the functional component: what purpose the eye movement (or lack thereof) serves; (iii) the coordinate system used: relative to what the eye moves; (iv) the computational definition: how the event is represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
68
Abstract
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
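The train-on-labeled-events idea above can be sketched with a toy stand-in: an ensemble of bootstrap-trained threshold stumps voting on a per-sample velocity feature. This is not the paper's implementation (a real system would use a full random forest over many features, e.g. via scikit-learn); all names, features, and thresholds here are illustrative.

```python
import random

def velocity_features(x, t):
    """Per-sample absolute velocity (deg/s) from positions (deg) and times (s)."""
    v = [0.0]
    for i in range(1, len(x)):
        v.append(abs(x[i] - x[i - 1]) / (t[i] - t[i - 1]))
    return v

def train_forest(features, labels, n_trees=15, seed=1):
    """Toy 'forest' of velocity-threshold stumps, each trained on a bootstrap
    resample of already-labeled samples (True = saccade sample)."""
    rng = random.Random(seed)
    stumps = []
    n = len(features)
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        best = None
        for thr in sorted(set(features[i] for i in idx)):
            errs = sum((features[i] > thr) != labels[i] for i in idx)
            if best is None or errs < best[1]:
                best = (thr, errs)
        stumps.append(best[0])
    return stumps

def predict(stumps, f):
    """Majority vote across stumps: True = classified as a saccade sample."""
    votes = sum(f > thr for thr in stumps)
    return votes * 2 > len(stumps)
```

Once trained on hand-coded events, such a classifier labels new data with no user-set parameters, which is the point the abstract makes.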
69
Nij Bijvank JA, Petzold A, Balk LJ, Tan HS, Uitdehaag BMJ, Theodorou M, van Rijn LJ. A standardized protocol for quantification of saccadic eye movements: DEMoNS. PLoS One 2018; 13:e0200695. [PMID: 30011322 PMCID: PMC6047815 DOI: 10.1371/journal.pone.0200695]
Abstract
OBJECTIVE Quantitative saccadic testing is a non-invasive method of evaluating the neural networks involved in the control of eye movements. The aim of this study is to provide a standardized and reproducible protocol for infrared oculography measurements of eye movements and their analysis, which can be applied to various diseases in a multicenter setting. METHODS Development of a protocol to Demonstrate Eye Movement Networks with Saccades (DEMoNS) using infrared oculography. Automated analysis methods were used to calculate parameters describing the characteristics of the saccadic eye movements. The two measurements of each subject were compared using descriptive and reproducibility statistics. RESULTS Infrared oculography measurements were performed in 28 subjects using the DEMoNS protocol, and various saccadic parameters were calculated automatically. Saccadic parameters such as peak velocity, latency and saccade pair ratios showed excellent reproducibility (intra-class correlation coefficients > 0.9). Parameters describing performance of more complex tasks showed moderate to good reproducibility (intra-class correlation coefficients 0.63-0.78). CONCLUSIONS This study provides a standardized and transparent protocol for measuring and analyzing saccadic eye movements in a multicenter setting. The DEMoNS protocol details outcome measures for treatment trials that have excellent reproducibility. The DEMoNS protocol can be applied to the study of saccadic eye movements in various neurodegenerative and motor diseases.
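Two of the saccadic parameters named above, latency and peak velocity, can be sketched from raw gaze samples with a simple velocity-threshold onset criterion. The threshold value and the detection rule are illustrative assumptions, not the DEMoNS protocol's exact algorithm.

```python
def saccade_metrics(t, x, target_onset, v_thresh=30.0):
    """Latency (s) and peak velocity (deg/s) of the first saccade after
    target onset, using a velocity-threshold onset criterion.
    t: sample times (s); x: horizontal gaze position (deg)."""
    onset, peak = None, 0.0
    for i in range(1, len(t)):
        if t[i] < target_onset:
            continue
        v = abs(x[i] - x[i - 1]) / (t[i] - t[i - 1])  # two-point velocity
        if onset is None and v >= v_thresh:
            onset = t[i - 1]        # movement began by the previous sample
        if onset is not None:
            peak = max(peak, v)
    if onset is None:
        return None, None           # no saccade detected in this trial
    return onset - target_onset, peak
```

On a synthetic 1 kHz trial where the target steps at 100 ms and the eye moves at 300 ms with 500 deg/s, this returns a latency of 199 ms and a peak velocity of 500 deg/s.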
Affiliation(s)
- J. A. Nij Bijvank
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- A. Petzold
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Moorfields Eye Hospital and The National Hospital for Neurology and Neurosurgery, London, United Kingdom
- L. J. Balk
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- H. S. Tan
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- B. M. J. Uitdehaag
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- M. Theodorou
- Moorfields Eye Hospital and The National Hospital for Neurology and Neurosurgery, London, United Kingdom
- L. J. van Rijn
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
70
Abstract
The study of eye movements has become popular in many fields of science. However, using the preprocessed output of an eye tracker without scrutiny can lead to low-quality or even erroneous data. For example, the sampling rate of the eye tracker influences saccadic peak velocity, while inadequate filters fail to suppress noise or introduce artifacts. Despite previously published guiding values, most filter choices still seem motivated by a trial-and-error approach, and a thorough analysis of filter effects is missing. Therefore, we developed a simple and easy-to-use saccade model that incorporates measured amplitude-velocity main sequences and produces saccades with a similar frequency content to real saccades. We also derived a velocity divergence measure to rate deviations between velocity profiles. In total, we simulated 155 saccades ranging from 0.5° to 60° and subjected them to different sampling rates, noise compositions, and various filter settings. The final goal was to compile a list with the best filter settings for each of these conditions. Replicating previous findings, we observed reduced peak velocities at lower sampling rates. However, this effect was highly non-linear over amplitudes and increasingly stronger for smaller saccades. Interpolating the data to a higher sampling rate significantly reduced this effect. We hope that our model and the velocity divergence measure will be used to provide a quickly accessible ground truth without the need for recording and manually labeling saccades. The comprehensive list of filters allows one to choose the correct filter for analyzing saccade data without resorting to trial-and-error methods.
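The two effects the abstract describes, the amplitude-velocity main sequence and the loss of peak velocity at low sampling rates, can be illustrated with a small sketch. The exponential main-sequence form is a common fit in the eye-movement literature; the constants, the Gaussian velocity profile, and the naive decimation scheme are illustrative, not the paper's model.

```python
import math

def main_sequence_peak(amplitude_deg, eta=500.0, c=6.0):
    """Main-sequence fit V_peak = eta * (1 - exp(-A / c)) in deg/s.
    eta and c are illustrative constants, not fitted values from the paper."""
    return eta * (1.0 - math.exp(-amplitude_deg / c))

def sampled_peak(velocity, dt_fine, rate_hz):
    """Peak of a finely sampled velocity profile after naive decimation to
    rate_hz: low rates can straddle the true peak and underestimate it."""
    step = max(1, round(1.0 / (rate_hz * dt_fine)))
    return max(velocity[::step])
```

Decimating a sharp synthetic velocity profile to 60 Hz visibly clips its peak, while resampling at the fine rate recovers it, mirroring the reported non-linear underestimation for small (brief) saccades.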
71
Abstract
We demonstrate the use of different visual aggregation techniques to obtain non-cluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which uses a saccade's direction, onset timestamp, magnitude, or their combination as the edge-compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, cosine similarity between two flow direction maps provides a similarity map to compare two scanpaths. Last, we provide examples of basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights about scanpath exploration and informative illustrations of the eye movement data.
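The fixation-clustering step above can be sketched with a minimal flat-kernel mean shift: each seed moves to the mean of the data points within a bandwidth until nearby seeds share a mode. Bandwidth choice and the convergence/merging tolerances are illustrative; the paper's pipeline additionally bundles saccades, which is not shown.

```python
def mean_shift(points, bandwidth, iters=50):
    """Flat-kernel mean shift over 2-D fixation points: shift each seed to
    the mean of its neighbours within `bandwidth`; merge coincident modes."""
    modes = [p for p in points]
    for _ in range(iters):
        new = []
        for (x, y) in modes:
            nb = [(px, py) for (px, py) in points
                  if (px - x) ** 2 + (py - y) ** 2 <= bandwidth ** 2]
            new.append((sum(p for p, _ in nb) / len(nb),
                        sum(q for _, q in nb) / len(nb)))
        modes = new
    clusters = []                       # keep one representative per mode
    for m in modes:
        for c in clusters:
            if (c[0] - m[0]) ** 2 + (c[1] - m[1]) ** 2 < 1e-6:
                break
        else:
            clusters.append(m)
    return modes, clusters
```

Two tight groups of fixation points collapse onto their two centroids, which then serve as the aggregated nodes for edge bundling.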
72
A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Sci Rep 2017; 7:17726. [PMID: 29255207 PMCID: PMC5735175 DOI: 10.1038/s41598-017-17983-x]
Abstract
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. A full open-source implementation is included.
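The segmentation idea can be illustrated by its simplest case: choosing a single breakpoint that splits the series into two least-squares line segments. The paper's method finds many segments in O(n) while denoising; this brute-force two-segment version only shows the core criterion, and the minimum segment length is an illustrative choice.

```python
def ols(t, y):
    """Least-squares line fit; returns (slope, intercept, sse)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    stt = sum((a - mt) ** 2 for a in t)
    b = sum((a - mt) * (c - my) for a, c in zip(t, y)) / stt if stt else 0.0
    a0 = my - b * mt
    sse = sum((c - (a0 + b * a)) ** 2 for a, c in zip(t, y))
    return b, a0, sse

def best_breakpoint(t, y, min_len=3):
    """Single split of the series into two line segments minimizing total SSE."""
    best = None
    for k in range(min_len, len(t) - min_len + 1):
        sse = ols(t[:k], y[:k])[2] + ols(t[k:], y[k:])[2]
        if best is None or sse < best[1]:
            best = (k, sse)
    return best[0]
```

On a flat trace that starts ramping at t = 4 (a fixation followed by a constant-velocity segment), the minimum-SSE split lands exactly at the elbow; the segment slopes then feed the fixation/saccade/pursuit classification step.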
73
Hessels RS, Niehorster DC, Kemner C, Hooge ITC. Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC). Behav Res Methods 2017; 49:1802-1823. [PMID: 27800582 PMCID: PMC5628191 DOI: 10.3758/s13428-016-0822-1]
Abstract
Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm, identification by two-means clustering (I2MC), built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
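The two-means idea behind I2MC can be sketched in one dimension: within a sliding window, cluster the gaze positions into two means; a large separation between the means signals a transition between fixations. The window size, the max-pooling of weights, and the 1-D simplification are illustrative, not the published algorithm.

```python
def two_means_1d(xs, iters=20):
    """2-means on 1-D positions; returns the two cluster centers."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        if not g1 or not g2:
            break                       # degenerate window (all samples equal)
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

def transition_weights(x, window):
    """Per-sample weight = separation of the two means in a sliding window.
    Peaks mark fixation-to-fixation transitions; troughs lie within fixations."""
    w = [0.0] * len(x)
    for i in range(len(x) - window + 1):
        c1, c2 = two_means_1d(x[i:i + window])
        sep = abs(c1 - c2)
        for j in range(i, i + window):
            w[j] = max(w[j], sep)
    return w
```

Because the weight is driven by cluster separation rather than sample-to-sample velocity, added noise inside a fixation barely raises it, which is the intuition behind the algorithm's noise robustness.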
Affiliation(s)
- Roy S Hessels
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Humanities Laboratory and Department of Psychology, Lund University, Lund, Sweden
- Institute for Psychology, University of Muenster, Muenster, Germany
- Chantal Kemner
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Brain Center Rudolf Magnus, University Medical Centre Utrecht, Utrecht, The Netherlands
- Ignace T C Hooge
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
74
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades. J Vis 2017; 17:10. [PMID: 28813566 PMCID: PMC5852949 DOI: 10.1167/17.9.10]
Abstract
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
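The underestimation in point (a) is easy to reproduce with the standard window-5 quadratic SG coefficients (these are widely tabulated; the step stimulus and unit sampling interval below are illustrative):

```python
SG5_SMOOTH = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]  # quadratic fit, window 5
SG5_DERIV = [-2 / 10, -1 / 10, 0.0, 1 / 10, 2 / 10]         # 1st derivative, window 5

def convolve_valid(x, kernel):
    """Apply an SG kernel at every position where the full window fits."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

def peak_velocity(x, dt, kernel=SG5_DERIV):
    """Peak |velocity| from SG differentiation of position samples x."""
    return max(abs(v) / dt for v in convolve_valid(x, kernel))
```

On a unit position step (the sharpest possible "saccade") the raw sample-to-sample velocity peaks at 1/dt, but the SG derivative peaks at only 0.3/dt: the quadratic fit spreads the abrupt change over the window, which is exactly the small-amplitude bias the generalized filter is designed to remove.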
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
75
Abstract
Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning, pattern recognition, computer vision, computational neurology, molecular biology, information retrieval, etc., many new methods have been developed to cope with the ever-increasing amount and complexity of the data. These new methods offer interesting possibilities for processing, classifying, and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations, with a single, simple and intuitive argument, described as coherence of spacetime, is discussed, and the hierarchical ordering of the fixations into dwells is shown. The method, namely identification by topological characteristics (ITop), is parameter-free and needs no pre-processing or post-processing of the raw data. The general and robust topological argument is easy to expand into complex settings of higher visual tasks, making it possible to identify visual strategies.
Collapse
Affiliation(s)
- Oliver Hein
- Neurological University Clinic Hamburg UKE, Germany