1
Nolte D, Vidal De Palol M, Keshava A, Madrid-Carvajal J, Gert AL, von Butler EM, Kömürlüoğlu P, König P. Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations. Atten Percept Psychophys 2024. [PMID: 38977612] [DOI: 10.3758/s13414-024-02917-3]
Abstract
Extensive research conducted in controlled laboratory settings has prompted the question of how well results generalize to real-world situations shaped by the subjects' own actions. Virtual reality lends itself ideally to investigating such complex situations, but requires accurate classification of eye movements, especially when combined with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, cutting the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNaV algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking data with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and the timing of event onsets. Finally, correlating single trials with the average ERP and ERSP showed that fixation-onset ERSPs are less time sensitive, require fewer repetitions of the same behavior, and are potentially better suited to studying EEG signatures in naturalistic settings. We modified, designed, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
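The velocity-based classification step described in the abstract can be illustrated with a minimal fixed-threshold sketch. This is not the authors' REMoDNaV-derived algorithm (which uses adaptive, segment-wise thresholds and corrects for head translation); the threshold value, function name, and input format are illustrative assumptions.

```python
import math

def classify_velocity(x, y, t, threshold_deg_s=50.0):
    """Label each inter-sample interval as 'saccade' or 'gaze' by
    comparing point-to-point angular velocity (deg/s) to a threshold."""
    labels = []
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dist = math.hypot(x[i] - x[i - 1], y[i] - y[i - 1])  # degrees
        vel = dist / dt if dt > 0 else 0.0
        labels.append("saccade" if vel > threshold_deg_s else "gaze")
    return labels

# A slow drift, a 5-degree jump within 10 ms, then slow drift again:
print(classify_velocity([0.0, 0.1, 5.1, 5.2], [0.0] * 4,
                        [0.00, 0.01, 0.02, 0.03]))  # -> ['gaze', 'saccade', 'gaze']
```

Real pipelines additionally smooth the velocity trace and merge or discard very short events, which this sketch omits.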
Affiliation(s)
- Debora Nolte
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany.
- Marc Vidal De Palol
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Ashima Keshava
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- John Madrid-Carvajal
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Anna L Gert
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Eva-Marie von Butler
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Pelin Kömürlüoğlu
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024; 28:3597-3612. [PMID: 38421842] [PMCID: PMC11262011] [DOI: 10.1109/jbhi.2024.3371893]
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field in which ML can have a significant impact: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how the equipment aligns with study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development, controller commands, and voice recording; 3) the ML methodology utilized, depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens ML applicability in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
3
Pradeep V, Jayachandra AB, Askar SS, Abouhawwash M. Hyperparameter tuning using Lévy flight and interactive crossover-based reptile search algorithm for eye movement event classification. Front Physiol 2024; 15:1366910. [PMID: 38812881] [PMCID: PMC11134024] [DOI: 10.3389/fphys.2024.1366910]
Abstract
Introduction: Eye movement is one of the cues used in human-machine interface technologies for predicting user intention. A growing application of eye movement event detection is assistive technology for paralyzed patients. However, developing an effective classifier is one of the main challenges in eye movement event detection. Methods: In this paper, a bidirectional long short-term memory network (BiLSTM) is proposed, together with hyperparameter tuning, for effective eye movement event classification. The Lévy flight and interactive crossover-based reptile search algorithm (LICRSA) is used to optimize the hyperparameters of the BiLSTM. Overfitting is mitigated using fuzzy data augmentation (FDA), and a deep neural network, VGG-19, is used to extract features from the eye movements. The optimization of hyperparameters using LICRSA thus enhances the classification of eye movement events with the BiLSTM. Results and Discussion: The proposed BiLSTM-LICRSA is evaluated using accuracy, precision, sensitivity, F1-score, area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUPRC) on four datasets: Lund2013, a collected dataset, GazeBaseR, and UTMultiView. gazeNet, human manual classification (HMC), and a multi-source information-embedded approach (MSIEA) are used for comparison with BiLSTM-LICRSA. The F1-score of BiLSTM-LICRSA on the GazeBaseR dataset is 98.99%, higher than that of MSIEA.
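The Lévy flight component named in the abstract refers to heavy-tailed random steps that help a metaheuristic escape local optima. A common way to draw such steps is Mantegna's algorithm, sketched below; this is an illustrative fragment, not the LICRSA implementation, and the parameter defaults are assumptions.

```python
import math
import random

def levy_steps(n, beta=1.5, seed=42):
    """Draw n Levy-flight step sizes via Mantegna's algorithm:
    s = u / |v|**(1/beta), with u ~ N(0, sigma_u**2) and v ~ N(0, 1)."""
    rng = random.Random(seed)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    return [rng.gauss(0, sigma_u) / abs(rng.gauss(0, 1)) ** (1 / beta)
            for _ in range(n)]

steps = levy_steps(1000)  # mostly small moves with occasional large jumps
```

In a hyperparameter search, each candidate configuration would be perturbed by such steps, scaled to each parameter's range.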
Affiliation(s)
- V. Pradeep
- Department of Information Science and Engineering, Alva’s Institute of Engineering and Technology, Mangaluru, India
- Ananda Babu Jayachandra
- Department of Information Science and Engineering, Malnad College of Engineering, Hassan, India
- S. S. Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
4
Johari K, Bhardwaj R, Kim JJ, Yow WQ, Tan UX. Eye movement analysis for real-world settings using segmented linear regression. Comput Biol Med 2024; 174:108364. [PMID: 38599067] [DOI: 10.1016/j.compbiomed.2024.108364]
Abstract
Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, in uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy, making it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slow transient process. This paper proposes an approach, Improved Naive Segmented Linear Regression (INSLR), which approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves the existing NSLR approach with a hypothesis-clustering algorithm that redefines the final hypothesis estimation in two steps: (1) at each time stamp, measure the likelihood of each hypothesis in the candidate list using its least-squares fit score and its distance from the k-means of the hypotheses in the list; (2) filter hypotheses based on a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experimental results show that INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and minimizes the error in gaze prediction from a low-precision device for 71.1% of samples. Furthermore, this improvement in denoising quality is further validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and by enhanced sensitivity in detecting variations in attention induced by distractors during online lectures.
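At its core, segmented linear regression replaces a noisy gaze trace with a few line segments. A minimal single-breakpoint version, choosing the split that minimizes the combined least-squares error, can be sketched as follows; the full NSLR/INSLR machinery (hypothesis lists, k-means-based clustering, thresholding) is substantially richer, and these function names are illustrative.

```python
def linfit_sse(ts, ys):
    """Ordinary least-squares line fit; returns the sum of squared errors."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    sxy = sum((t - mt) * (v - my) for t, v in zip(ts, ys))
    slope = sxy / sxx if sxx else 0.0
    icpt = my - slope * mt
    return sum((v - (icpt + slope * t)) ** 2 for t, v in zip(ts, ys))

def best_breakpoint(ts, ys, min_len=2):
    """Fit a two-segment piecewise linear function: try every split index
    and keep the one with the lowest combined SSE."""
    best_k, best_err = None, float("inf")
    for k in range(min_len, len(ts) - min_len + 1):
        err = linfit_sse(ts[:k], ys[:k]) + linfit_sse(ts[k:], ys[k:])
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err

# A 'V'-shaped trace: descending, then ascending, with the turn at index 5.
k, err = best_breakpoint(list(range(10)), [5, 4, 3, 2, 1, 1, 2, 3, 4, 5])
```

Extending this to many segments, as a full PLF hypothesis requires, is typically done greedily or by dynamic programming rather than by exhaustive search.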
Affiliation(s)
- Kritika Johari
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore.
- Rishabh Bhardwaj
- Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore
- Jung-Jae Kim
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Wei Quin Yow
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore
- U-Xuan Tan
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
5
Melnyk K, Friedman L, Komogortsev OV. What can entropy metrics tell us about the characteristics of ocular fixation trajectories? PLoS One 2024; 19:e0291823. [PMID: 38166054] [PMCID: PMC10760742] [DOI: 10.1371/journal.pone.0291823]
Abstract
In this study, we provide a detailed analysis of entropy measures calculated for fixation eye movement trajectories from three different datasets. We employed six key metrics (Fuzzy, Increment, Sample, Gridded Distribution, Phase, and Spectral Entropies), calculated on three sets of fixations: (1) fixations from the GazeCom dataset, (2) fixations from what we refer to as the "Lund" dataset, and (3) fixations from our own research laboratory (the "OK Lab" dataset). For each entropy measure and each dataset, we closely examined the 36 fixations with the highest entropy and the 36 fixations with the lowest entropy. From this, it was clear that the nature of the information provided by the entropy metrics depended on which dataset was evaluated. The entropy metrics found various types of misclassified fixations in the GazeCom dataset. Two entropy metrics also detected fixations with substantial linear drift. For the Lund dataset, the only finding was that low spectral entropy was associated with what we call "bumpy" fixations, i.e., fixations with low-frequency oscillations. For the OK Lab dataset, three entropies found fixations with high-frequency noise that probably represents ocular microtremor. In this dataset, one entropy found fixations with linear drift. The between-dataset results are discussed in terms of the number of fixations in each dataset, the different eye movement stimuli employed, and the method of eye movement classification.
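Of the six metrics listed, Sample Entropy is perhaps the most widely reimplemented; a brute-force O(n²) sketch for a one-dimensional position trace is shown below. The defaults for the template length m and the tolerance r are illustrative, and the study's exact parameterization is not reproduced here.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of matching
    templates of length m and A of length m+1 (Chebyshev distance <= r)."""
    n = len(series)

    def matches(length):
        total = 0
        for i in range(n - length):
            for j in range(i + 1, n - length + 1):
                if all(abs(series[i + k] - series[j + k]) <= r
                       for k in range(length)):
                    total += 1
        return total

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

print(sample_entropy([0, 1] * 10))  # highly regular trace -> low entropy
```

A perfectly regular oscillation scores near zero, while a trace with no repeating structure at tolerance r yields infinity; noisy-but-structured fixation traces fall in between.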
Affiliation(s)
- Kateryna Melnyk
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
- Lee Friedman
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
- Oleg V. Komogortsev
- Department of Computer Science, Texas State University, San Marcos, TX, United States of America
6
D'Aquino A, Frank C, Hagan JE, Schack T. Eye movements during motor imagery and execution reveal different visuomotor control strategies in manual interception. Psychophysiology 2023; 60:e14401. [PMID: 37515410] [DOI: 10.1111/psyp.14401]
Abstract
Previous research has investigated the degree of congruency in gaze metrics between action execution (AE) and motor imagery (MI) for similar manual tasks. Although evidence on eye-movement dynamics is largely limited to relatively simple actions toward static objects, there is little evidence of how gaze parameters change during imagery as a function of more dynamic spatial and temporal task demands. This study examined the similarities and differences in eye movements during AE and MI for an interception task. Twenty-four students were asked to either mentally simulate or physically intercept a moving target on a computer display. Smooth pursuit, saccades, and response time were compared between the two conditions. The results show that MI was characterized by higher smooth pursuit gain and duration, while no meaningful differences were found in the other parameters. The findings indicate that eye movements during imagery are not simply a duplicate of what happens during actual performance. Instead, eye movements appear to vary as a function of the interaction between visuomotor control strategies and task demands.
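Smooth pursuit gain, one of the parameters compared above, is conventionally the ratio of eye velocity to target velocity during pursuit. A minimal sketch follows; it assumes saccades have already been removed from the eye-velocity trace, and the function name is illustrative.

```python
def pursuit_gain(eye_vel, target_vel):
    """Smooth pursuit gain: mean eye velocity over mean target velocity,
    computed on matched (de-saccaded) samples; 1.0 means perfect tracking."""
    mean_eye = sum(eye_vel) / len(eye_vel)
    mean_target = sum(target_vel) / len(target_vel)
    return mean_eye / mean_target if mean_target else float("nan")

gain = pursuit_gain([9.0, 10.0, 11.0], [10.0, 10.0, 10.0])  # -> 1.0
```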
Affiliation(s)
- Alessio D'Aquino
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
- John Elvis Hagan
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Neurocognition and Action Biomechanics Group, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
7
Mai X, Sheng X, Shu X, Ding Y, Zhu X, Meng J. A Calibration-Free Hybrid Approach Combining SSVEP and EOG for Continuous Control. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3480-3491. [PMID: 37610901] [DOI: 10.1109/tnsre.2023.3307814]
Abstract
While SSVEP-BCIs have been widely developed to control external devices, most rely on a discrete control strategy. A continuous SSVEP-BCI enables users to continuously deliver commands and receive real-time feedback from the devices, but it suffers from the transition-state problem, a period of erroneous recognition when users shift their gaze between targets. To resolve this issue, we proposed a novel calibration-free Bayesian approach hybridizing SSVEP and electrooculography (EOG). First, canonical correlation analysis (CCA) was applied to detect the evoked SSVEPs, and the saccade during the gaze shift was detected from the EOG data using an adaptive threshold method. Then, the new target after the gaze shift was recognized with a Bayesian optimization approach that combined the SSVEP and saccade detections and calculated the optimized probability distribution over the targets. Eighteen healthy subjects participated in the offline and online experiments. The offline experiments showed that the proposed hybrid BCI had significantly higher overall continuous accuracy and shorter gaze-shifting time compared to FBCCA, CCA, MEC, and PSDA. In the online experiments, the proposed hybrid BCI significantly outperformed the CCA-based SSVEP-BCI in terms of continuous accuracy (77.61 ± 1.36% vs. 68.86 ± 1.08%) and gaze-shifting time (0.93 ± 0.06 s vs. 1.94 ± 0.08 s). Additionally, participants perceived a significant improvement over the CCA-based SSVEP-BCI when the newly proposed decoding approach was used. These results validate the efficacy of the proposed hybrid Bayesian approach for continuous BCI control without any calibration. This study provides an effective framework for combining SSVEP and EOG, and promotes potential applications of plug-and-play BCIs in continuous control.
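The CCA step used here for SSVEP detection is standard: the candidate stimulus frequency whose sine/cosine reference set attains the highest canonical correlation with the EEG epoch is selected. A compact numpy sketch follows; the sampling rate, frequency set, and harmonic count are illustrative assumptions, and the paper's saccade detection and Bayesian fusion are not shown.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    via QR decomposition of the centered data followed by an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def detect_ssvep(eeg, fs, freqs, n_harm=2):
    """Return the candidate frequency whose harmonic sin/cos reference
    correlates best with the multichannel EEG epoch (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * f * (h + 1) * t)
             for h in range(n_harm) for fn in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]

# Two synthetic channels flickering at 10 Hz (different phases), 1 s at 250 Hz:
t = np.arange(250) / 250.0
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + 0.3),
                       np.sin(2 * np.pi * 10 * t + 1.0)])
print(detect_ssvep(eeg, 250, [8.0, 10.0, 12.0]))  # -> 10.0
```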
8
Vinuela-Navarro V, Goset J, Aldaba M, Mestre C, Rovira-Gay C, Cano N, Ariza M, Delàs B, Garolera M, Vilaseca M. Eye movements in patients with post-COVID condition. Biomed Opt Express 2023; 14:3936-3949. [PMID: 37799689] [PMCID: PMC10549724] [DOI: 10.1364/boe.489037]
Abstract
Eye movement control is impaired in some neurological conditions, but the impact of COVID-19 on eye movements remains unknown. This study aims to investigate differences in oculomotor function and pupil response in individuals with post-COVID condition (PCC) and cognitive deficits. Saccades, smooth pursuit, fixation, vergence, and pupillary response were recorded using an eye tracker, and eye movement and pupil response parameters were computed. Data from 16 controls, 38 COVID mild (home recovery) and 19 COVID severe (hospital admission) participants were analyzed. Saccadic latencies were shorter in controls (183 ± 54 ms) than in COVID mild (236 ± 83 ms) and COVID severe (227 ± 42 ms) participants (p = 0.017). Fixation stability was poorer in COVID mild participants (Bivariate Contour Ellipse Area of 0.80 ± 1.61°² vs 0.36 ± 0.65°² for controls, p = 0.019), while the percentage of pupil area reduction/enlargement was reduced in COVID severe participants (39.7 ± 12.7%/31.6 ± 12.7% compared to 51.7 ± 22.0%/49.1 ± 20.7% in controls, p < 0.015). The characteristics of the oculomotor alterations found in PCC may be useful for understanding the different pathophysiologic mechanisms involved.
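The Bivariate Contour Ellipse Area (BCEA) used above to quantify fixation stability has a closed form under a bivariate-normal assumption: BCEA = 2πk·σx·σy·√(1−ρ²), where the coverage proportion P satisfies P = 1 − e⁻ᵏ. The sketch below uses the common 68.2% coverage convention; the study's exact parameters are not stated here, so treat the default as an assumption.

```python
import math

def bcea(x, y, p=0.682):
    """Bivariate Contour Ellipse Area (input units squared): area of the
    ellipse covering proportion p of gaze samples, assuming a bivariate
    normal scatter.  BCEA = 2*pi*k*sx*sy*sqrt(1 - rho**2), p = 1 - e**-k."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    rho = (sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)) / (sx * sy)
    k = -math.log(1 - p)
    return 2 * math.pi * k * sx * sy * math.sqrt(1 - rho ** 2)
```

Smaller values mean tighter, more stable fixation, which is why the larger BCEA of the COVID mild group above indicates poorer stability.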
Affiliation(s)
- Valldeflors Vinuela-Navarro
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
- Joan Goset
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
- Mikel Aldaba
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
- Clara Mestre
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
- Cristina Rovira-Gay
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
- Neus Cano
- Clinical Research Group for Brain, Cognition and Behavior, Consorci Sanitari de Terrassa (CST), Terrassa, Spain
- Department de Ciències Bàsiques, Universitat Internacional de Catalunya, Sant Cugat del Vallès, Spain
- Mar Ariza
- Clinical Research Group for Brain, Cognition and Behavior, Consorci Sanitari de Terrassa (CST), Terrassa, Spain
- Bàrbara Delàs
- Servei d’Oftalmologia, Consorci Sanitari de Terrassa (CST), Terrassa, Spain
- Maite Garolera
- Clinical Research Group for Brain, Cognition and Behavior, Consorci Sanitari de Terrassa (CST), Terrassa, Spain
- Neuropsychology Unit, Hospital de Terrassa, Consorci Sanitari de Terrassa (CST), Terrassa, Spain
- Meritxell Vilaseca
- Center for Sensors, Instruments and Systems Development, Universitat Politècnica de Catalunya, Rambla Sant Nebridi 10, Terrassa 08222 (Barcelona), Spain
9
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605] [PMCID: PMC9535040] [DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland.
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa.
- Department of Psychology, Regensburg University, Regensburg, Germany.
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
10
Friedman L, Prokopenko V, Djanian S, Katrychuk D, Komogortsev OV. Factors affecting inter-rater agreement in human classification of eye movements: a comparison of three datasets. Behav Res Methods 2023; 55:417-427. [PMID: 35411475] [DOI: 10.3758/s13428-021-01782-4]
Abstract
Manual classification of eye movements is used in research and as a basis for comparison with automatic algorithms during their development. However, human classification is of little use if it is unreliable and unrepeatable. It is therefore important to know what factors might influence, and enhance, the accuracy and reliability of human classification of eye movements. In this report we compare three datasets of human manual classification: two published earlier and one of our own, which we present here for the first time. For inter-rater reliability, we assess both the event-level F1-score and the sample-level Cohen's κ across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
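Sample-level Cohen's κ, one of the two agreement measures used here, corrects raw agreement for the agreement two raters would reach by chance given their label frequencies. A minimal sketch for a pair of raters follows (the event-level F1 computation, which requires event matching, is not shown):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Sample-level Cohen's kappa between two equal-length label sequences:
    (p_observed - p_chance) / (1 - p_chance)."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance) if p_chance != 1 else 1.0

print(cohens_kappa(["fix", "sac", "fix", "fix"],
                   ["fix", "sac", "fix", "fix"]))  # -> 1.0
```

A κ of 1 indicates perfect agreement and 0 chance-level agreement; averaging pairwise κ across a rater group is one simple way to summarize group reliability.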
Affiliation(s)
- Lee Friedman
- Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas, 78640, USA.
- Vladyslav Prokopenko
- Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas, 78640, USA
- Shagen Djanian
- Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas, 78640, USA
- Department of Computer Science, Aalborg University, Selma Lagerlofs Vej 300, 9220, Aalborg East, Denmark
- Dmytro Katrychuk
- Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas, 78640, USA
- Oleg V Komogortsev
- Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, Texas, 78640, USA
11
Wagner AR, Grove CR, Loyd BJ, Dibble LE, Schubert MC. Compensatory saccades differ between those with vestibular hypofunction and multiple sclerosis pointing to unique roles for peripheral and central vestibular inputs. J Neurophysiol 2022; 128:934-945. [PMID: 36069428] [PMCID: PMC9550564] [DOI: 10.1152/jn.00220.2022]
Abstract
Individuals with peripheral or central vestibular dysfunction recruit compensatory saccades (CSs) in response to high-acceleration yaw head impulses. Although CSs have been shown to be an effective strategy for reducing gaze position error (GPE) in individuals with peripheral hypofunction, the effectiveness of CSs in individuals with central vestibular dysfunction is unknown. The purpose of our study was to compare the effectiveness of CSs, defined as the ability to compensate for head velocity and eye position errors, between persons with central and peripheral vestibular dysfunction. We compared oculomotor responses during video head impulse testing between individuals with unilateral peripheral vestibular deafferentation, a disorder of the peripheral vestibular afferents, and individuals with multiple sclerosis, a condition affecting the central vestibular pathways. We hypothesized that, relative to individuals with peripheral lesions, individuals with central dysfunction would recruit CSs that were delayed and inappropriately scaled to head velocity and GPE. We show that CSs recruited by persons with central vestibular pathology were not uniformly deficient but instead were of sufficient velocity to compensate for reductions in VOR gain. Compared to those with peripheral vestibular lesions, individuals with central pathology also recruited earlier covert CSs with amplitudes that were better corrected for GPE. Conversely, those with central lesions showed greater variability in the amplitude of overt CSs relative to GPE. These data point to a unique role for peripheral and central vestibular inputs in the recruitment of CSs and suggest that covert CSs are an effective oculomotor strategy for individuals with multiple sclerosis.
NEW & NOTEWORTHY Compensatory saccades (CSs) are recruited by individuals with unilateral vestibular deafferentation (UVD) to compensate for an impaired vestibulo-ocular reflex (VOR). The effectiveness of CSs in multiple sclerosis (MS), a central vestibular impairment, is unknown. We show that in UVD and in MS, covert CSs compensate for reduced VOR gain and minimize gaze position error (GPE), yet in >50% of individuals with MS, overt CSs worsened GPE, suggesting unique roles for peripheral and central vestibular inputs.
Affiliation(s)
- Andrew R Wagner
- Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- School of Health and Rehabilitation Sciences, The Ohio State University, Columbus, Ohio
- Colin R Grove
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University, Baltimore, Maryland
- Brian J Loyd
- School of Physical Therapy and Rehabilitation Sciences, University of Montana, Missoula, Montana
- Leland E Dibble
- Department of Physical Therapy and Athletic Training, University of Utah, Salt Lake City, Utah
- Michael C Schubert
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University, Baltimore, Maryland
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, Maryland
12
D'Aquino A, Frank C, Hagan JE, Schack T. Imagining interceptions: Eye movements as an online indicator of covert motor processes during motor imagery. Front Neurosci 2022; 16:940772. PMID: 35968367; PMCID: PMC9372347; DOI: 10.3389/fnins.2022.940772
Abstract
The analysis of eye movements during motor imagery has been used to understand the influence of covert motor processes on visual-perceptual activity. There is evidence that gaze metrics are affected by motor planning, often depending on the spatial and temporal characteristics of a task. However, previous research has focused on simulated actions toward static targets, with limited empirical evidence of how eye movements change in more dynamic environments. This study examined the characteristics of eye movements during motor imagery in an interception task. Twenty-four participants were asked to track a moving target on a computer display and either mentally simulate an interception or rest. The results showed that smooth pursuit variables, such as duration and gain, were lower during motor imagery than during passive observation. These findings indicate that motor plans integrate visual-perceptual information based on task demands and that eye movements during imagery reflect such constraints.
Affiliation(s)
- Alessio D'Aquino
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Cornelia Frank
- Institute for Sport and Movement Science, Osnabrück University, Osnabrück, Germany
- John Elvis Hagan
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Thomas Schack
- Faculty of Psychology and Sports Science, Neurocognition and Action Biomechanics Group, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
13
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022. PMID: 35715615; DOI: 10.3758/s13428-021-01763-7
Abstract
Detecting eye movements in raw eye tracking data is a well-established research area by itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved provided a shared understanding of the pursued goal. This is often formalised via defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable, impeding a common understanding. In this review, we systematically describe and analyse evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from the quantification of the discovered errors. In our analysis we separate these two steps where possible, enabling their almost arbitrary combinations in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation .
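The two-step evaluation this review advocates, first establishing correspondences between true and predicted events and only then quantifying the discovered errors, can be sketched as follows. This is an illustrative reconstruction, not code from the authors' library; the function names and the intersection-over-union matching criterion are assumptions for the sketch.

```python
# Hypothetical sketch: event-level matching separated from error quantification.
# Events are (label, start, end) tuples in sample indices.

def iou(a, b):
    """Temporal intersection-over-union of two (start, end) intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def match_events(true_events, pred_events, iou_threshold=0.5):
    """Greedily match each ground-truth event to the best-overlapping
    prediction of the same label. Returns (hits, misses, false_alarms)."""
    unmatched_pred = list(pred_events)
    hits, misses = [], []
    for t in true_events:
        candidates = [p for p in unmatched_pred if p[0] == t[0]]
        best = max(candidates, key=lambda p: iou(t[1:], p[1:]), default=None)
        if best is not None and iou(t[1:], best[1:]) >= iou_threshold:
            hits.append((t, best))
            unmatched_pred.remove(best)
        else:
            misses.append(t)
    # predictions left unmatched count as false alarms
    return hits, misses, unmatched_pred
```

With matching factored out in this way, different error measures (event-level precision and recall, timing offsets of matched pairs) can be computed from the same correspondences, which is the separation of steps the review argues for.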
14
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. Sensors 2021; 21:4686. PMID: 34300425; PMCID: PMC8309511; DOI: 10.3390/s21144686
Abstract
Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data over the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, fixation identification algorithms are often employed for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to determine how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting parameters in the fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and machine-learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
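As a point of reference for the identification algorithms compared above, the velocity-based I-VT scheme reduces to a single threshold on sample-to-sample velocity. The sketch below is a minimal illustration, not this paper's implementation; the function name, the 30°/s default threshold, and the use of `np.gradient` are assumptions.

```python
import numpy as np

def ivt_classify(x, y, fs, velocity_threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' by thresholding
    point-to-point velocity (deg/s). x, y are gaze positions in degrees,
    fs is the sampling rate in Hz."""
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)  # 2D angular speed per sample
    return np.where(speed > velocity_threshold, "saccade", "fixation")
```

I-DT instead thresholds the spatial dispersion of a sliding window, and I-VDT combines both criteria, which is why the abstract visualizations downstream can differ so strongly between algorithms.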
15
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21(6):8. PMID: 34125160; PMCID: PMC8212426; DOI: 10.1167/jov.21.6.8
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
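The overall shape of the proposed pipeline, denoising followed by velocity thresholding, can be sketched as below. Note that a plain moving average stands in for the paper's optimization-based piecewise-quadratic smoothing, so this is only a schematic approximation of the idea, with hypothetical names and defaults.

```python
import numpy as np

def detect_saccades(position, fs, threshold=30.0, win=5):
    """Denoise a 1D eye position trace (degrees), then detect saccades by
    velocity thresholding. The moving average is a simplified stand-in for
    the paper's piecewise-quadratic optimization. Returns a list of
    (start, end) sample-index intervals."""
    kernel = np.ones(win) / win
    # edge-replicating pad keeps the smoother from ramping to zero at the ends
    smoothed = np.convolve(np.pad(position, win // 2, mode="edge"),
                           kernel, mode="valid")
    velocity = np.abs(np.gradient(smoothed) * fs)
    above = velocity > threshold
    # collapse runs of supra-threshold samples into events
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(above)))
    return events
```

In the paper itself, the denoising step minimizes an objective that favors piecewise-quadratic position signals, which preserves the shallow velocity profiles of slow saccades better than linear filtering can.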
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
16
Agtzidis I, Meyhöfer I, Dorr M, Lencer R. Following Forrest Gump: Smooth pursuit related brain activation during free movie viewing. Neuroimage 2020; 216:116491. DOI: 10.1016/j.neuroimage.2019.116491
17
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
18
Agtzidis I, Startsev M, Dorr M. Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching. J Eye Mov Res 2020; 13(4). PMID: 33828806; PMCID: PMC8005322; DOI: 10.16910/jemr.13.4.5
Abstract
In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing either blinks, loss of tracking, or physically implausible signals). In order to achieve more consistent annotations, the gaze samples were labelled by a novice rater based on rudimentary algorithmic suggestions, and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluation of 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all have much larger room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
19
Evaluating three approaches to binary event-level agreement scoring. A reply to Friedman (2020). Behav Res Methods 2020; 53:325-334. PMID: 32705657; PMCID: PMC7880951; DOI: 10.3758/s13428-020-01425-0
20
Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020; 12(8). PMID: 33828776; PMCID: PMC7881893; DOI: 10.16910/jemr.12.8.3
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to use a simple threshold on velocity or acceleration values, which can be estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues, and quantify saccade detection performance in both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection, with both more true positives and fewer false negatives, especially under higher noise levels. We conclude that robust estimators can be widely adopted in other common automatic gaze classification algorithms due to their ease of implementation.
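The core idea, estimating the velocity threshold from the median and the median absolute deviation rather than the mean and standard deviation, can be sketched as follows; the function name and the default multiplier are illustrative assumptions rather than the authors' exact parameterization.

```python
import numpy as np

def mad_velocity_threshold(speed, k=3.0):
    """Robust saccade velocity threshold: median + k * scaled MAD.
    Unlike mean + k * SD, this is not inflated by the saccadic outliers
    the threshold is meant to detect."""
    med = np.median(speed)
    # 1.4826 makes MAD consistent with the SD for normally distributed data
    mad = 1.4826 * np.median(np.abs(speed - med))
    return med + k * mad
```

Because the median and MAD ignore the tails of the velocity distribution, the estimated threshold stays near the fixational noise level even when saccadic outliers inflate the mean and standard deviation, which is where the classical estimate breaks down at higher noise levels.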
21
Brief communication: Three errors and two problems in a recent paper: gazeNet: End-to-end eye-movement event detection with deep neural networks (Zemblys, Niehorster, and Holmqvist, 2019). Behav Res Methods 2020; 52:1671-1680. PMID: 32291731; DOI: 10.3758/s13428-019-01342-x
Abstract
Zemblys et al. (Behavior Research Methods, 51(2), 840-864, 2019) reported on a method for the classification of eye-movements ("gazeNet"). I have found three errors and two problems with that paper that are explained herein. Error 1: The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. Problem 1 has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. Problem 2 is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. Error 2: The gazeNet algorithm was trained on the Lund dataset, and then compared to other methods, not trained on this dataset, in terms of performance on this dataset. This is an inherently unfair comparison, and yet nowhere in the gazeNet paper is this unfairness mentioned. Error 3 arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower.
22
Wadehn F, Weber T, Mack DJ, Heldt T, Loeliger HA. Model-Based Separation, Detection, and Classification of Eye Movements. IEEE Trans Biomed Eng 2020; 67:588-600. PMID: 31150326; DOI: 10.1109/tbme.2019.2918986
Abstract
OBJECTIVE: We present a physiologically motivated framework for model-based separation, detection, and classification (MBSDC) of eye movements. By estimating kinematic and neural controller signals for saccades, smooth pursuit, and fixational eye movements in a mechanistic model of the oculomotor system, we are able to separate and analyze these eye movements independently.
METHODS: We extended an established oculomotor model for horizontal eye movements with neural controller signals and a blink artifact model. To estimate kinematic signals (position, velocity, acceleration, forces) and neural controller signals from eye position data, we employ Kalman smoothing and sparse input estimation techniques. The estimated signals are used for detecting saccade start and end points, and for classifying the recording into saccades, smooth pursuit, fixations, post-saccadic oscillations, and blinks.
RESULTS: On simulated data, the reconstruction error of the velocity profiles is about half the error value obtained by the commonly employed approach of filtering and numerical differentiation. In experiments with smooth pursuit data from human subjects, we observe an accurate signal separation. In addition, in neural recordings from non-human primates, the estimated neural controller signals match the real recordings strikingly well.
SIGNIFICANCE: The MBSDC framework enables the analysis of multi-type eye movement recordings, provides a physiologically motivated approach to study motor commands, and might aid the discovery of new digital biomarkers.
CONCLUSION: The proposed framework provides a model-based approach for a wide variety of eye movement analysis tasks.
23
Imaoka Y, Flury A, de Bruin ED. Assessing Saccadic Eye Movements With Head-Mounted Display Virtual Reality Technology. Front Psychiatry 2020; 11:572938. PMID: 33093838; PMCID: PMC7527608; DOI: 10.3389/fpsyt.2020.572938
Abstract
As our society ages globally, neurodegenerative disorders are becoming a relevant issue. Assessment of saccadic eye movement could provide objective values that help to understand the symptoms of these disorders. HTC Corporation launched a new virtual reality (VR) headset, the VIVE Pro Eye, implementing an infrared-based eye tracking technique together with VR technology. The purpose of this study is to evaluate whether the device can be used as an assessment tool for saccadic eye movement and to investigate the technical features of its eye tracking. We developed a measurement system for saccadic eye movement with a simple VR environment on the Unity VR design platform, following an internationally proposed standard saccade measurement protocol. We then measured the saccadic eye movements of seven healthy young adults to analyze the oculometrics of latency, peak velocity, and error rate in pro- and anti-saccade tasks (120 trials in each task). We calculated these parameters using a saccade detection algorithm that we developed following previous studies. Our results revealed a latency of 220.40 ± 43.16 ms, peak velocity of 357.90 ± 111.99°/s, and error rate of 0.24 ± 0.41% for the pro-saccade task, and a latency of 343.35 ± 76.42 ms, peak velocity of 318.79 ± 116.69°/s, and error rate of 0.66 ± 0.76% for the anti-saccade task. In addition, we observed pupil diameters of 4.30 ± 1.15 mm (left eye) and 4.29 ± 1.08 mm (right eye) for the pro-saccade task, and of 4.21 ± 1.04 mm (left eye) and 4.22 ± 0.97 mm (right eye) for the anti-saccade task. Comparing the descriptive statistics of previous studies with our results suggests that the VIVE Pro Eye can function as an assessment tool for saccadic eye movement, since our results are in the range of, or close to, those of previous studies. Nonetheless, we found technical limitations, especially regarding time-related measurement parameters. Further improvements in the software and hardware of the device and in the measurement protocol, as well as more measurements with diverse age groups and people with different health conditions, are warranted to enhance the whole assessment system of saccadic eye movement.
Affiliation(s)
- Yu Imaoka
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Andri Flury
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Eling D de Bruin
- Motor Control & Learning Laboratory, Institute of Human Movement Sciences and Sport, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Division of Physiotherapy, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Stockholm, Sweden
24
Startsev M, Agtzidis I, Dorr M. Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes. J Vis 2019; 19(14):10. DOI: 10.1167/19.14.10
Affiliation(s)
- Mikhail Startsev
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Ioannis Agtzidis
- Human-Machine Communication, Technical University of Munich, Munich, Germany
- Michael Dorr
- Human-Machine Communication, Technical University of Munich, Munich, Germany
25
Peng H, Li B, He D, Wang J. Identification of fixations, saccades and smooth pursuits based on segmentation and clustering. Intell Data Anal 2019. DOI: 10.3233/ida-184184
26
Watson MR, Voloh B, Thomas C, Hasan A, Womelsdorf T. USE: An integrative suite for temporally-precise psychophysical experiments in virtual environments for human, nonhuman, and artificially intelligent agents. J Neurosci Methods 2019; 326:108374. PMID: 31351974; DOI: 10.1016/j.jneumeth.2019.108374
Abstract
BACKGROUND: There is a growing interest in complex, active, and immersive behavioral neuroscience tasks. However, the development and control of such tasks present unique challenges.
NEW METHOD: The Unified Suite for Experiments (USE) is an integrated set of hardware and software tools for the design and control of behavioral neuroscience experiments. The software, developed using the Unity video game engine, supports both active tasks in immersive 3D environments and static 2D tasks used in more traditional visual experiments. The custom USE SyncBox hardware, based around an Arduino Mega2560 board, integrates and synchronizes multiple data streams from different pieces of experimental hardware. The suite addresses three key issues with developing cognitive neuroscience experiments in Unity: tight experimental control, accurate sub-millisecond timing, and accurate gaze target identification.
RESULTS: USE is a flexible framework for realizing experiments, enabling (i) nested control over complex tasks, (ii) flexible use of 3D or 2D scenes and objects, (iii) touchscreen-, button-, joystick-, and gaze-based interaction, and (iv) complete offline reconstruction of experiments for post-processing and temporal alignment of data streams.
COMPARISON WITH EXISTING METHODS: Most existing experiment-creation tools are not designed to support the development of video-game-like tasks. Those that do use older or less popular video game engines as their base, and are not as feature-rich or do not enable as precise control over timing as USE.
CONCLUSIONS: USE provides an integrated, open-source framework for a wide variety of active behavioral neuroscience experiments using human and nonhuman participants, as well as artificially intelligent agents.
Affiliation(s)
- Marcus R Watson
- Department of Biology, Centre for Vision Research, York University, Toronto, ON, M6J1P3, Canada
- Benjamin Voloh
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, United States
- Christopher Thomas
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, United States
- Asif Hasan
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37240, United States
- Thilo Womelsdorf
- Department of Biology, Centre for Vision Research, York University, Toronto, ON, M6J1P3, Canada
- Department of Psychology, Vanderbilt University, Nashville, TN, 37240, United States
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37240, United States
27
Doucet G, Gulli RA, Corrigan BW, Duong LR, Martinez-Trujillo JC. Modulation of local field potentials and neuronal activity in primate hippocampus during saccades. Hippocampus 2019; 30:192-209. PMID: 31339193; DOI: 10.1002/hipo.23140
Abstract
Primates use saccades to gather information about objects and their relative spatial arrangement, a process essential for visual perception and memory. It has been proposed that signals linked to saccades reset the phase of local field potential (LFP) oscillations in the hippocampus, providing a temporal window for visual signals to activate neurons in this region and influence memory formation. We investigated this issue by measuring hippocampal LFPs and spikes in two macaques performing different tasks with unconstrained eye movements. We found that LFP phase clustering (PC) in the alpha/beta (8-16 Hz) frequencies followed foveation onsets, while PC in frequencies lower than 8 Hz followed spontaneous saccades, even on a homogeneous background. Saccades to a solid grey background were not followed by increases in local neuronal firing, whereas saccades toward appearing visual stimuli were. Finally, saccade parameters correlated with LFP phase and amplitude: saccade direction correlated with delta (≤4 Hz) phase, and saccade amplitude with theta (4-8 Hz) power. Our results suggest that signals linked to saccades reach the hippocampus, producing synchronization of delta/theta LFPs without a general activation of local neurons. Moreover, some visual inputs co-occurring with saccades produce LFP synchronization in the alpha/beta bands and elevated neuronal firing. Our findings support the hypothesis that saccade-related signals enact sensory input-dependent plasticity and therefore memory formation in the primate hippocampus.
Affiliation(s)
- Guillaume Doucet
- The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Department of Physiology, McGill University, Montreal, Quebec, Canada
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Robarts Research Institute, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Roberto A Gulli
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Robarts Research Institute, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- Department of Neuroscience, Columbia University, New York, New York
- Benjamin W Corrigan
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Robarts Research Institute, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Lyndon R Duong
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Robarts Research Institute, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Center for Neural Science, New York University, New York, New York
- Julio C Martinez-Trujillo
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Robarts Research Institute, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Brain and Mind Institute, Western University, London, Ontario, Canada
28.
Integration of past and current visual information during eye movements in amblyopia. Prog Brain Res 2019. [PMID: 31239145] [DOI: 10.1016/bs.pbr.2019.04.034] [Citation(s) in RCA: 0]
Abstract
Combination of signals based on their reliability is an increasingly popular model for sensorimotor processing. However, how reliability is estimated, or how such estimation is affected by prolonged exposure to noisy inputs, is still unknown. In this study, we compare patients with unilateral functional amblyopia with control subjects tracking either a reliable target or a blurry, unreliable target in a task of repeated, sustained smooth pursuit. We provide evidence for a lower weight of visual information during smooth pursuit in amblyopic and control subjects tracking a blurry target, with no significant difference in the weight of prior information. In contrast, we found no evidence of lower visual information weight in the catch-up saccades of amblyopic subjects. We conclude that oculomotor performance in unilateral amblyopia mostly lies within the continuum between our control groups, without significant differences in the relative weights of prior and visual information. However, smooth pursuit exhibits additional deficits that might result from abnormal visual development.
29.
Stuart S, Hickey A, Vitorio R, Welman K, Foo S, Keen D, Godfrey A. Eye-tracker algorithms to detect saccades during static and dynamic tasks: a structured review. Physiol Meas 2019; 40:02TR01. [DOI: 10.1088/1361-6579/ab02ab] [Citation(s) in RCA: 20]
30.
Harezlak K, Augustyn DR, Kasprowski P. An Analysis of Entropy-Based Eye Movement Events Detection. Entropy (Basel) 2019;21:107. [PMID: 33266823] [PMCID: PMC7514590] [DOI: 10.3390/e21020107] [Citation(s) in RCA: 2]
Abstract
Analysis of eye movement has attracted a lot of attention recently in terms of exploring areas of people’s interest, cognitive ability, and skills. The basis for eye movement usage in these applications is the detection of its main components—namely, fixations and saccades, which facilitate understanding of the spatiotemporal processing of a visual scene. In the presented research, a novel approach for the detection of eye movement events is proposed, based on the concept of approximate entropy. By using the multiresolution time-domain scheme, a structure entitled the Multilevel Entropy Map was developed for this purpose. The dataset was collected during an experiment utilizing the “jumping point” paradigm. Eye positions were registered with a 1000 Hz sampling rate. For event detection, the knn classifier was applied. The best classification efficiency in recognizing the saccadic period ranged from 83% to 94%, depending on the sample size used. These promising outcomes suggest that the proposed solution may be used as a potential method for describing eye movement dynamics.
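The approximate entropy at the core of this approach can be sketched as follows. This is a generic Pincus-style ApEn in NumPy, not the paper's Multilevel Entropy Map implementation; the defaults for the embedding dimension `m` and tolerance `r` are our illustrative choices:

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy of a 1-D signal; m is the embedding
    dimension, r the tolerance as a fraction of the signal's SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])       # overlapping windows
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)  # Chebyshev
        c = np.mean(dist <= tol, axis=1)                     # self-match included
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)  # low for regular signals, high for irregular ones
```

A periodic trace (e.g., steady fixation drift) yields a low value, whereas an unpredictable trace yields a high one, which is the property the event classifier exploits.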
31.
Bellet ME, Bellet J, Nienborg H, Hafed ZM, Berens P. Human-level saccade detection performance using deep neural networks. J Neurophysiol 2018; 121:646-661. [PMID: 30565968] [DOI: 10.1152/jn.00601.2018] [Citation(s) in RCA: 26]
Abstract
Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, like smooth pursuits, or when the saccade amplitude is close to eye tracker noise levels, like with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing. NEW & NOTEWORTHY Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms, by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
Affiliation(s)
- Marie E Bellet
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Joachim Bellet
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, Tübingen, Germany; Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Hendrikje Nienborg
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Ziad M Hafed
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany
32.
Wadehn F, Mack DJ, Weber T, Loeliger HA. Estimation of Neural Inputs and Detection of Saccades and Smooth Pursuit Eye Movements by Sparse Bayesian Learning. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:2619-2622. [PMID: 30440945] [DOI: 10.1109/embc.2018.8512758] [Citation(s) in RCA: 3]
Abstract
Eye movements reveal a great wealth of information about the visual system and the brain. Therefore, eye movements can serve as diagnostic markers for various neurological disorders. For an objective analysis, it is crucial to have an automatic and robust procedure to extract relevant eye movement parameters. An essential step towards this goal is to detect and separate different types of eye movements such as fixations, saccades and smooth pursuit. We have developed a model-based approach to perform signal detection and separation on eye movement recordings, using source separation techniques from sparse Bayesian learning. The key idea is to model the oculomotor system with a state space model and to perform signal separation in the neural domain by estimating sparse inputs which trigger saccades. The algorithm was evaluated on synthetic data, neural recordings from rhesus monkeys and on manually annotated human eye movement recordings with different smooth pursuit paradigms. The developed approach shows a high noise-robustness, provides saccade and smooth pursuit parameters, as well as estimates of the position, velocity and acceleration profiles. In addition, by estimating the input to the oculomotor system, we obtain an estimate of the neural inputs to the oculomotor muscles.
33.
1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behav Res Methods 2018; 51:556-572. [DOI: 10.3758/s13428-018-1144-2] [Citation(s) in RCA: 50]
34.

35.
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Is human classification by experienced untrained observers a gold standard in fixation detection? Behav Res Methods 2018; 50:1864-1881. [PMID: 29052166] [PMCID: PMC7875941] [DOI: 10.3758/s13428-017-0955-x] [Citation(s) in RCA: 37]
Abstract
Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the classifications of the humans agreed near perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
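The sample-based Cohen's kappa referred to here compares two coders' per-sample labels. A minimal sketch for binary fixation/non-fixation streams (our own illustration):

```python
import numpy as np

def sample_kappa(a, b):
    """Sample-based Cohen's kappa between two binary coder streams
    (1 = sample belongs to a fixation, 0 = not)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                      # observed agreement
    pe = (np.mean(a == 1) * np.mean(b == 1)   # chance agreement
          + np.mean(a == 0) * np.mean(b == 0))
    return (po - pe) / (1 - pe)
```

The paper's point is precisely that this sample-level statistic can be near-perfect while coders still disagree substantially on the number and duration of fixation events, motivating event-based measures such as the F1 score, RTO, and RTD.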
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362, Lund, Sweden
- Richard Andersson
- Eye Information Group, IT University of Copenhagen, Copenhagen, Denmark
- Department of Philosophy and Cognitive Sciences, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
36.
Zargari Marandi R, Madeleine P, Omland Ø, Vuillerme N, Samani A. Eye movement characteristics reflected fatigue development in both young and elderly individuals. Sci Rep 2018; 8:13148. [PMID: 30177693] [PMCID: PMC6120880] [DOI: 10.1038/s41598-018-31577-1] [Citation(s) in RCA: 25]
Abstract
Fatigue can develop during prolonged computer work, particularly in elderly individuals. This study investigated eye movement characteristics in relation to fatigue development. Twenty young and 18 elderly healthy adults were recruited to perform a prolonged functional computer task while their eye movements were recorded. The task lasted 40 minutes involving 240 cycles divided into 12 segments. Each cycle consisted of a sequence involving memorization of a pattern, a washout period, and replication of the pattern using a computer mouse. The participants rated their perceived fatigue after each segment. The mean values of blink duration (BD) and frequency (BF), saccade duration (SCD) and peak velocity (SPV), pupil dilation range (PDR), and fixation duration (FD) along with the task performance based on clicking speed and accuracy, were computed for each task segment. An increased subjective evaluation of fatigue suggested the development of fatigue. BD, BF, and PDR increased whereas SPV and SCD decreased over time in the young and elderly groups. Longer FD, shorter SCD, and lower task performance were observed in the elderly compared with the young group. The present findings provide a viable approach to develop a computational model based on oculometrics to track fatigue development during computer work.
Affiliation(s)
- Ramtin Zargari Marandi
- Sport Sciences, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, Aalborg, Denmark; Univ. Grenoble Alpes, AGEIS, Grenoble, France
- Pascal Madeleine
- Sport Sciences, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, Aalborg, Denmark
- Øyvind Omland
- Sport Sciences, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, Aalborg, Denmark; Department of Occupational and Environmental Medicine, Danish Ramazzini Center, Aalborg University Hospital, Aalborg, Denmark
- Nicolas Vuillerme
- Sport Sciences, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, Aalborg, Denmark; Univ. Grenoble Alpes, AGEIS, Grenoble, France; Institut Universitaire de France, Paris, France
- Afshin Samani
- Sport Sciences, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, Aalborg, Denmark
37.
Abstract
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). In an effort to show practical utility of the proposed method to the applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
38.
Nij Bijvank JA, Petzold A, Balk LJ, Tan HS, Uitdehaag BMJ, Theodorou M, van Rijn LJ. A standardized protocol for quantification of saccadic eye movements: DEMoNS. PLoS One 2018; 13:e0200695. [PMID: 30011322] [PMCID: PMC6047815] [DOI: 10.1371/journal.pone.0200695] [Citation(s) in RCA: 29]
Abstract
OBJECTIVE Quantitative saccadic testing is a non-invasive method of evaluating the neural networks involved in the control of eye movements. The aim of this study is to provide a standardized and reproducible protocol for infrared oculography measurement and analysis of eye movements, which can be applied to various diseases in a multicenter setting. METHODS Development of a protocol to Demonstrate Eye Movement Networks with Saccades (DEMoNS) using infrared oculography. Automated analysis methods were used to calculate parameters describing the characteristics of the saccadic eye movements. The two measurements of the subjects were compared with descriptive and reproducibility statistics. RESULTS Infrared oculography measurements were performed in all 28 subjects using the DEMoNS protocol, and various saccadic parameters were calculated automatically. Saccadic parameters such as peak velocity, latency and saccade pair ratios showed excellent reproducibility (intra-class correlation coefficients > 0.9). Parameters describing performance of more complex tasks showed moderate to good reproducibility (intra-class correlation coefficients 0.63-0.78). CONCLUSIONS This study provides a standardized and transparent protocol for measuring and analyzing saccadic eye movements in a multicenter setting. The DEMoNS protocol details outcome measures for treatment trials that show excellent reproducibility. The DEMoNS protocol can be applied to the study of saccadic eye movements in various neurodegenerative and motor diseases.
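Test-retest reproducibility of this kind is typically an intra-class correlation. A minimal ICC(2,1) (two-way random effects, absolute agreement, in the Shrout-Fleiss convention) can be sketched in NumPy; this generic version is our illustration, not the DEMoNS analysis code:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1) for an (n targets x k raters/sessions) data matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    rows = X.mean(axis=1)                                   # per-target means
    cols = X.mean(axis=0)                                   # per-session means
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)         # between targets
    msc = n * np.sum((cols - grand) ** 2) / (k - 1)         # between sessions
    resid = X - rows[:, None] - cols[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between the two measurement sessions lowers the coefficient even when the rank ordering of subjects is preserved.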
Affiliation(s)
- J. A. Nij Bijvank
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- A. Petzold
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- Moorfields Eye Hospital and The National Hospital for Neurology and Neurosurgery, London, United Kingdom
- L. J. Balk
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- H. S. Tan
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- B. M. J. Uitdehaag
- Department of Neurology, MS Center and Neuro-ophthalmology Expertise Center, Neuroscience Amsterdam, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
- M. Theodorou
- Moorfields Eye Hospital and The National Hospital for Neurology and Neurosurgery, London, United Kingdom
- L. J. van Rijn
- Department of Ophthalmology, Neuro-ophthalmology Expertise Center, Amsterdam UMC - VUmc, Amsterdam, The Netherlands
39.
Korda AI, Asvestas PA, Matsopoulos GK, Ventouras EM, Smyrnis N. Automatic identification of eye movements using the largest Lyapunov exponent. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.11.004] [Citation(s) in RCA: 6]
40.
One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms. Behav Res Methods 2017; 49:616-637. [PMID: 27193160] [DOI: 10.3758/s13428-016-0738-9] [Citation(s) in RCA: 113]
Abstract
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors that classify only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
41.
A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Sci Rep 2017; 7:17726. [PMID: 29255207] [PMCID: PMC5735175] [DOI: 10.1038/s41598-017-17983-x] [Citation(s) in RCA: 25]
Abstract
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method useable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. Full open source implementation is included.
42.
Unsupervised parsing of gaze data with a beta-process vector auto-regressive hidden Markov model. Behav Res Methods 2017; 50:2074-2096. [DOI: 10.3758/s13428-017-0974-7] [Citation(s) in RCA: 6]
43.
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades. J Vis 2017; 17:10. [PMID: 28813566] [PMCID: PMC5852949] [DOI: 10.1167/17.9.10] [Citation(s) in RCA: 13]
Abstract
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
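The conventional SG estimator that the paper generalizes can be sketched in a few lines: position is smoothed and differentiated in one step, and peak velocity is read off the result. The window length and polynomial order below are our illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

def peak_velocity(position_deg, fs, window=21, polyorder=3):
    """Estimate peak saccadic velocity (deg/s) by Savitzky-Golay
    differentiation of an eye-position trace sampled at fs Hz."""
    vel = savgol_filter(position_deg, window, polyorder,
                        deriv=1, delta=1.0 / fs)   # smoothed first derivative
    return np.max(np.abs(vel))
```

On a simulated small, fast saccade this estimator tends to come in at or slightly below the true peak velocity; that smoothing-induced bias is exactly what the proposed generalization is designed to reduce.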
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
44.
Naicker P, Anoopkumar-Dukie S, Grant GD, Kavanagh JJ. Anticholinergic activity in the nervous system: Consequences for visuomotor function. Physiol Behav 2016; 170:6-11. [PMID: 27965143] [DOI: 10.1016/j.physbeh.2016.12.010] [Citation(s) in RCA: 6]
Abstract
Acetylcholine is present in the peripheral and central nervous system, where it is involved in a number of fundamental physiological and biochemical processes. Medications that block muscarinic acetylcholine receptors can cause adverse effects such as dry mouth, drowsiness, mydriasis and cognitive dysfunction. Despite the knowledge that exists regarding these common side-effects, little is known about how anticholinergic medications influence central motor processes and fine motor control in healthy individuals. This paper reviews critical visuomotor processes that operate in healthy individuals, and how these motor processes are influenced by medications that interfere with central cholinergic neurotransmission. An overview of receptor function and neurotransmitter interaction following the ingestion or administration of anticholinergics is provided, before exploring how visuomotor performance is affected by anticholinergic medications. In particular, this review focuses on the effects that anticholinergic medications have on fixation stability, saccadic eye movements, smooth pursuit eye movements, and general pupil dynamics.
Affiliation(s)
- Preshanta Naicker
- Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia; School of Pharmacy, Griffith University, Gold Coast, Queensland, Australia
- Shailendra Anoopkumar-Dukie
- Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia; School of Pharmacy, Griffith University, Gold Coast, Queensland, Australia
- Gary D Grant
- Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia; School of Pharmacy, Griffith University, Gold Coast, Queensland, Australia
- Justin J Kavanagh
- Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia; School of Allied Health Sciences, Griffith University, Gold Coast, Queensland, Australia
45.
Singh T, Perry CM, Herter TM. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment. J Neuroeng Rehabil 2016; 13:10. [PMID: 26812907] [PMCID: PMC4728792] [DOI: 10.1186/s12984-015-0107-4] [Citation(s) in RCA: 8]
Abstract
BACKGROUND Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
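A velocity-threshold pass of the kind this pipeline builds its onset/offset detection on can be sketched as follows. The fixed 30 deg/s threshold and the function name are our illustration; the paper derives its thresholds from the computed ocular kinematics rather than using a constant:

```python
import numpy as np

def classify_ivt(gaze_deg, fs, saccade_thresh=30.0):
    """Velocity-threshold (I-VT style) labelling of gaze samples.

    gaze_deg: (n, 2) azimuth/elevation in degrees; returns 'sac'/'fix' per sample.
    """
    # central-difference velocity in deg/s from per-sample displacement
    vel = np.linalg.norm(np.gradient(gaze_deg, axis=0), axis=1) * fs
    return np.where(vel > saccade_thresh, "sac", "fix")
```

Runs of consecutive "sac" samples then give candidate saccade onsets and offsets, with everything else treated as fixation (or pursuit, if a second, slower threshold is added).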
Affiliation(s)
- Tarkeshwar Singh
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC-29208, USA.
- Christopher M Perry
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC-29208, USA
- Troy M Herter
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC-29208, USA
46.
Ranjbaran M, Smith HLH, Galiana HL. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification. IEEE Trans Biomed Eng 2015; 63:850-8. [PMID: 26357393] [DOI: 10.1109/tbme.2015.2477038] [Citation(s) in RCA: 5]
Abstract
The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improve our insight into the system's behavior and the diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment the data into its subsystem responses (here, slow and fast segment intervals); misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction-error statistics. The performance of the algorithm is evaluated on simulated and experimental data, and it is shown to improve substantially on previous methods, particularly in terms of specificity.
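The K-means initialization described above can be sketched for the one-dimensional case of eye-speed samples; this covers only the clustering step, with the paper's system-identification refinement omitted, and the cluster count of two (slow vs. fast phase) plus all sample values are illustrative assumptions.

```python
def two_means_1d(values, iters=50):
    """Crude 1-D k-means (k=2) on eye-speed samples.

    Returns per-sample labels: 1 = fast-phase cluster (higher mean speed),
    0 = slow-phase cluster. Centroids start at the data extremes.
    """
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        # Recompute centroids; keep the old one if a group emptied out.
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    fast = 0 if c[0] > c[1] else 1  # fast phase = larger centroid
    return [1 if abs(v - c[fast]) < abs(v - c[1 - fast]) else 0 for v in values]

# Illustrative nystagmus-like speed trace: slow drifts with a burst of fast resets.
speeds = [5.0] * 30 + [200.0] * 5 + [6.0] * 30
phase = two_means_1d(speeds)
```

In the paper's pipeline this rough labeling would then seed the system-identification stage, which reassigns samples near segment boundaries using prediction-error statistics.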
|
47
|
Abstract
Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of fixation-detection methods and thus affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., a high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time-consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data are initially parsed using velocity-based algorithms whose input parameters are adapted by the user, and then manipulated using the graphical interface, allowing accurate and rapid adjustments of the algorithms' outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can easily be manually adapted to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
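The interpolation step (2) above can be sketched as a simple linear fill over short runs of missing samples; `max_gap` stands in for an interpolation-latency parameter like GraFIX's, and its default here is an arbitrary assumption, not the tool's value.

```python
def interpolate_gaps(samples, max_gap=5):
    """Linearly fill runs of missing samples (None) no longer than max_gap.

    Longer gaps, and gaps touching either end of the trace, are left missing,
    since there is no reliable signal to bridge them.
    """
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                      # find the end of the missing run
            gap = j - i
            if 0 < i and j < len(out) and gap <= max_gap:
                a, b = out[i - 1], out[j]   # bracketing valid samples
                for k in range(gap):
                    out[i + k] = a + (b - a) * (k + 1) / (gap + 1)
            i = j
        else:
            i += 1
    return out

filled = interpolate_gaps([0.0, None, None, 3.0])          # short gap: filled
untouched = interpolate_gaps([0.0] + [None] * 6 + [7.0])   # long gap: left as-is
```

A full pipeline would run this per gaze coordinate after smoothing, then apply the fixation-validity criteria from step (3).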
|
48
|
The art of braking: Post saccadic oscillations in the eye tracker signal decrease with increasing saccade size. Vision Res 2015; 112:55-67. [PMID: 25982715 DOI: 10.1016/j.visres.2015.03.015] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 03/18/2015] [Accepted: 03/23/2015] [Indexed: 11/21/2022]
Abstract
Recent research has shown that the pupil signal from video-based eye trackers contains post-saccadic oscillations (PSOs), which reflect pupil motion relative to the limbus (Nyström, Hooge, & Holmqvist, 2013). More knowledge about video-based eye-tracker signals is essential to allow comparison between findings obtained from modern systems and those of older eye-tracking technologies (e.g., scleral search coils and the dual Purkinje image, DPI). We investigated PSOs in horizontal and vertical saccades of different sizes with two high-quality video eye trackers. PSOs were very similar within observers, but not between observers. PSO amplitude decreased with increasing saccade size, and this effect was even stronger in vertical saccades; PSOs were almost absent in large vertical saccades. Based on this observation, we conclude that the occurrence of PSOs is related to deceleration at the end of a saccade. That PSOs are saccade-size dependent and idiosyncratic is a problem for algorithmic determination of saccade endings. Careful description of the eye tracker, its signal, and the procedure used to extract saccades is required to enable researchers to compare data from different eye trackers.
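One common way to keep PSOs from corrupting detected saccade endings, in the spirit of the problem described above, is to accept an offset only once the speed signal has stayed below threshold for a sustained window, so brief oscillatory re-crossings are skipped. This is an illustrative heuristic, not the authors' procedure; the threshold, window length, and speed values are assumptions.

```python
def saccade_offset(speeds, thresh=30.0, stable=5):
    """Return the index of the saccade offset in a speed trace (deg/s).

    Starting at the speed peak, the offset is the first sample that begins a
    run of `stable` consecutive sub-threshold samples; short PSO re-crossings
    reset the run counter instead of being mistaken for new saccades.
    """
    peak = max(range(len(speeds)), key=speeds.__getitem__)
    run = 0
    for i in range(peak, len(speeds)):
        run = run + 1 if speeds[i] < thresh else 0
        if run == stable:
            return i - stable + 1
    return len(speeds) - 1  # never settled: fall back to the last sample

# Speed trace with a PSO bump (40 deg/s at index 6) after the main saccade.
speeds = [5, 5, 400, 300, 100, 20, 40, 20, 10, 5, 5, 5, 5, 5]
offset = saccade_offset(speeds)
```

A naive first-crossing rule would end this saccade at index 5, before the oscillation; the sustained-window rule places the offset after it, at index 7.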
|
49
|
Korda AI, Asvestas PA, Matsopoulos GK, Ventouras EM, Smyrnis NP. Automatic identification of oculomotor behavior using pattern recognition techniques. Comput Biol Med 2015; 60:151-62. [PMID: 25836568 DOI: 10.1016/j.compbiomed.2015.03.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2014] [Revised: 02/09/2015] [Accepted: 03/03/2015] [Indexed: 10/23/2022]
|
50
|
Larsson L, Nyström M, Andersson R, Stridh M. Detection of fixations and smooth pursuit movements in high-speed eye-tracking data. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.12.008] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|