1
Dunn MJ, Alexander RG, Amiebenomo OM, Arblaster G, Atan D, Erichsen JT, Ettinger U, Giardini ME, Gilchrist ID, Hamilton R, Hessels RS, Hodgins S, Hooge ITC, Jackson BS, Lee H, Macknik SL, Martinez-Conde S, Mcilreavy L, Muratori LM, Niehorster DC, Nyström M, Otero-Millan J, Schlüssel MM, Self JE, Singh T, Smyrnis N, Sprenger A. Minimal reporting guideline for research involving eye tracking (2023 edition). Behav Res Methods 2024; 56:4351-4357. PMID: 37507649; PMCID: PMC11225961; DOI: 10.3758/s13428-023-02187-1.
Abstract
A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.
Affiliation(s)
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Robert G Alexander
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Gemma Arblaster
- Health Sciences School, University of Sheffield, Sheffield, UK
- Orthoptic Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Denize Atan
- Bristol Medical School, University of Bristol, Bristol, UK
- Mario E Giardini
- Department of Biomedical Engineering, University of Strathclyde, Glasgow, UK
- Iain D Gilchrist
- School of Psychological Science, University of Bristol, Bristol, UK
- Ruth Hamilton
- Department of Clinical Physics & Bioengineering, Royal Hospital for Children, NHS Greater Glasgow & Clyde, Glasgow, UK
- College of Medical, Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Brooke S Jackson
- Department of Psychology, University of Georgia, Athens, GA, USA
- Helena Lee
- Clinical and Experimental Sciences, University of Southampton, Southampton, UK
- Stephen L Macknik
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Susana Martinez-Conde
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Lee Mcilreavy
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Lisa M Muratori
- Department of Physical Therapy, School of Health Professions, Stony Brook University, Stony Brook, NY, USA
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Department of Neurology, Johns Hopkins University, Baltimore, MD, USA
- Michael M Schlüssel
- UK EQUATOR Centre, Centre for Statistics in Medicine (CSM), Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, Oxford, UK
- Jay E Self
- Clinical and Experimental Sciences, University of Southampton, Southampton, UK
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, PA, USA
- Nikolaos Smyrnis
- 2nd Department of Psychiatry, National and Kapodistrian University of Athens, Medical School, General University Hospital Attikon, Athens, Greece
- Andreas Sprenger
- Department of Neurology and Institute of Psychology II, Center of Brain, Behavior and Metabolism (CBBM), University of Luebeck, Luebeck, Germany
2
Cade A, Turnbull PRK. Classification of short and long term mild traumatic brain injury using computerized eye tracking. Sci Rep 2024; 14:12686. PMID: 38830966; PMCID: PMC11148176; DOI: 10.1038/s41598-024-63540-8.
Abstract
Accurate and objective diagnosis of brain injury remains challenging. This study evaluated the usability and reliability of computerized eye-tracker assessments (CEAs) designed to assess oculomotor function, visual attention/processing, and selective attention in recent mild traumatic brain injury (mTBI), persistent post-concussion syndrome (PPCS), and controls. Tests included egocentric localisation, fixation stability, smooth pursuit, saccades, Stroop, and the vestibulo-ocular reflex (VOR). Thirty-five healthy adults performed the CEA battery twice to assess usability and test-retest reliability. In separate experiments, CEA data from 55 healthy, 20 mTBI, and 40 PPCS adults were used to train a machine learning model to categorize participants into control, mTBI, or PPCS classes. Intraclass correlation coefficients demonstrated moderate (ICC > .50) to excellent (ICC > .98) reliability (p < .05) and satisfactory CEA compliance. The machine learning model categorizing participants into control, mTBI, and PPCS groups performed reasonably well (balanced accuracy control: 0.83, mTBI: 0.66, and PPCS: 0.76; AUC-ROC: 0.82). Key outcomes were the VOR (gaze stability), fixation (vertical error), and pursuit (total error, vertical gain, and number of saccades). The CEA battery was reliable and differentiated healthy, mTBI, and PPCS participants reasonably well. While promising, the diagnostic model's accuracy should be improved with a larger training dataset before use in clinical environments.
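The test-retest analysis above rests on intraclass correlation coefficients. As a generic illustration (the abstract does not state which ICC form the authors used, and the function below is a sketch, not their code), a two-way random-effects, absolute-agreement, single-measure ICC(2,1) can be computed from an n-subjects × k-sessions score matrix:

```python
import numpy as np

def icc_2_1(scores):
    """Single-measure, absolute-agreement ICC(2,1) from a two-way layout.

    scores: (n_subjects, k_sessions) array, e.g. one CEA metric
    measured in two test-retest sessions.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()

    # two-way ANOVA sums of squares
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # sessions
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Perfectly repeated measurements, e.g. `icc_2_1([[1, 1], [2, 2], [3, 3]])`, yield 1.0; on the abstract's scale, values above .50 count as moderate and above .98 as excellent.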
Affiliation(s)
- Alice Cade
- School of Optometry and Vision Science, The University of Auckland, Private Bag 92019, Auckland, 1023, New Zealand
- New Zealand College of Chiropractic, Auckland, New Zealand
- Philip R K Turnbull
- School of Optometry and Vision Science, The University of Auckland, Private Bag 92019, Auckland, 1023, New Zealand
3
Johari K, Bhardwaj R, Kim JJ, Yow WQ, Tan UX. Eye movement analysis for real-world settings using segmented linear regression. Comput Biol Med 2024; 174:108364. PMID: 38599067; DOI: 10.1016/j.compbiomed.2024.108364.
Abstract
Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, under uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy and make it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes an approach, Improved Naive Segmented Linear Regression (INSLR), which approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves the existing NSLR approach by employing a hypothesis-clustering algorithm, which redefines the final hypothesis estimation in two steps: (1) at each timestamp, measure the likelihood of each hypothesis in the candidate list using its least-squares fit score and its distance from the k-means of the hypotheses in the list; (2) filter hypotheses using a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experimental results show that INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and reduces the error in gaze predictions from a low-precision device for 71.1% of samples. This improvement in denoising quality is further validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and by enhanced sensitivity in detecting variations in attention induced by distractors during online lectures.
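The INSLR algorithm itself is not reproduced in the abstract. As a rough sketch of the underlying idea of approximating a noisy gaze trace with a piecewise linear function, a greedy segmented least-squares fit might look like this (the function names, greedy splitting rule, and residual threshold are assumptions for illustration, not the authors' method):

```python
import numpy as np

def _line_fit(t, y):
    # ordinary least-squares line; returns fitted values
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

def piecewise_linear(t, y, max_resid=0.5):
    """Greedy segmented-linear approximation of a 1-D gaze trace.

    A segment is split at its worst-fitting sample until every
    segment's maximum absolute residual falls below `max_resid`.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    fit = _line_fit(t, y)
    resid = np.abs(y - fit)
    if len(t) < 4 or resid.max() <= max_resid:
        return fit
    k = min(max(int(np.argmax(resid)), 2), len(t) - 2)  # keep both halves non-trivial
    return np.concatenate([piecewise_linear(t[:k], y[:k], max_resid),
                           piecewise_linear(t[k:], y[k:], max_resid)])
```

On a trace that is a pursuit-like ramp followed by a fixation-like plateau, the recursion isolates the breakpoint and each pure segment is fit exactly.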
Affiliation(s)
- Kritika Johari
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
- Rishabh Bhardwaj
- Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore
- Jung-Jae Kim
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Wei Quin Yow
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore
- U-Xuan Tan
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore
4
Maniarasu P, Shasane PH, Pai VH, Kuzhuppilly NIR, Ve RS, Ballae Ganeshrao S. Does the sampling frequency of an eye tracker affect the detection of glaucomatous visual field loss? Ophthalmic Physiol Opt 2024; 44:378-387. PMID: 38149468; DOI: 10.1111/opo.13267.
Abstract
PURPOSE Evidence suggests that eye movements have potential as a tool for detecting glaucomatous visual field defects. This study evaluated the influence of sampling frequency on eye movement parameters in detecting glaucomatous visual field defects during a free-viewing task. METHODS We investigated eye movements in two sets of experiments: (a) young adults with and without simulated visual field defects and (b) glaucoma patients and age-matched controls. In Experiment 1, we recruited 30 healthy volunteers. Among these, 10 performed the task with a gaze-contingent superior arcuate (SARC) scotoma, 10 with a gaze-contingent biarcuate (BARC) scotoma and 10 without a simulated scotoma (NoSim). The experimental task involved participants freely exploring 100 images, each for 4 s. Eye movements were recorded using the LiveTrack Lightning eye tracker (500 Hz). In Experiment 2, we recruited 20 glaucoma patients and 16 age-matched controls. All participants underwent the same experimental task as in Experiment 1, except that only 37 images were shown for exploration. To analyse the effect of sampling frequency, data were downsampled to 250, 120 and 60 Hz. Eye movement parameters, such as the number of fixations, fixation duration, saccadic amplitude and bivariate contour ellipse area (BCEA), were computed at each sampling frequency. RESULTS Two-way ANOVA revealed no significant effects of sampling frequency on fixation duration (simulation, p = 0.37; glaucoma patients, p = 0.95) or BCEA (simulation, p = 0.84; glaucoma patients, p = 0.91). BCEA differentiated the groups well across sampling frequencies, whereas fixation duration failed to distinguish glaucoma patients from controls. The number of fixations and saccade amplitude varied with sampling frequency in both simulations and glaucoma patients.
CONCLUSION In both simulations and glaucoma patients, BCEA consistently differentiated them from controls across various sampling frequencies.
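The two central computations here, BCEA and downsampling, can be sketched as follows. This is a generic formulation under stated assumptions: the abstract does not give the authors' exact formulas, the `k` parameter is one common choice, and plain decimation stands in for whatever resampling scheme the paper used:

```python
import numpy as np

def bcea(x, y, k=1.0):
    """Bivariate contour ellipse area of fixational gaze samples.

    One common parameterization: BCEA = 2*pi*k*sx*sy*sqrt(1 - rho^2),
    where k = 1 covers roughly 63% of samples; x and y in degrees
    give BCEA in deg^2.
    """
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho ** 2)

def downsample(samples, src_hz, dst_hz):
    """Plain decimation from src_hz to dst_hz (src_hz must be a multiple)."""
    return samples[:: src_hz // dst_hz]
```

For isotropic, uncorrelated gaze scatter with a 0.5-degree standard deviation per axis, `bcea` returns approximately 2π·0.25 ≈ 1.57 deg².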
Affiliation(s)
- Priyanka Maniarasu
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Prathamesh Harshad Shasane
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Vijaya H Pai
- Department of Ophthalmology, Kasturba Medical College Manipal, Manipal Academy of Higher Education, Manipal, India
- Neetha I R Kuzhuppilly
- Department of Ophthalmology, Kasturba Medical College Manipal, Manipal Academy of Higher Education, Manipal, India
- Ramesh S Ve
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Shonraj Ballae Ganeshrao
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
5
Guo Y, Pannasch S, Helmert JR, Kaszowska A. Ambient and focal attention during complex problem-solving: preliminary evidence from real-world eye movement data. Front Psychol 2024; 15:1217106. PMID: 38425554; PMCID: PMC10902451; DOI: 10.3389/fpsyg.2024.1217106.
Abstract
Time course analyses of eye movements during free exploration of real-world scenes often reveal an increase in fixation durations together with a decrease in saccade amplitudes, which has been explained within the two-visual-systems approach as a transition from ambient to focal processing. Short fixations and long saccades during early viewing periods are classified as the ambient mode of vision, which is concerned with spatial orientation and is related to simple visual properties such as motion, contrast, and location. Longer fixations and shorter saccades during later viewing periods are classified as the focal mode of vision, which is concentrated in the foveal projection and is capable of object identification and semantic categorization. While these findings were mainly obtained in the context of image exploration, the present study investigates whether the same interplay between ambient and focal visual attention is deployed when people work on complex real-world tasks, and if so, when. Based on a re-analysis of existing data that integrates concurrent think-aloud and eye-tracking protocols, the present study correlated participants' internal thinking models with the parameters of their eye movements as they planned solutions to an open-ended design problem in a real-world setting. We hypothesize that switching between ambient and focal attentional processing is useful when solvers encounter difficulty that compels them to shift their conceptual direction to adjust the solution path. Individuals may prefer different attentional strategies for information-seeking behavior, such as ambient-to-focal or focal-to-ambient. The observed increase in fixation durations and decrease in saccade amplitudes during the periods around shifts in conceptual direction supports ambient-to-focal processing; focal-to-ambient processing, however, was not evident.
Furthermore, our data demonstrate that the beginning of a shift in conceptual direction is observable in eye movement behavior as a significant prolongation of fixations. Our findings add to conclusions drawn from laboratory settings by providing preliminary evidence for ambient and focal processing characteristics in real-world problem-solving.
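The time-course analysis described above reduces to averaging an eye-movement parameter within successive time bins and comparing early against late bins. A minimal sketch (the bin count and function name are illustrative, not taken from the study):

```python
import numpy as np

def time_binned_means(event_times, values, t_start, t_end, n_bins):
    """Mean of an eye-movement parameter in equal-width time bins.

    Comparing early and late bins of fixation duration (expected to
    rise) and saccade amplitude (expected to fall) is the standard
    ambient-vs-focal time-course analysis.
    """
    edges = np.linspace(t_start, t_end, n_bins + 1)
    idx = np.clip(np.digitize(event_times, edges) - 1, 0, n_bins - 1)
    return np.array([values[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])
```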
Affiliation(s)
- Yuxuan Guo
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
- Sebastian Pannasch
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
- Jens R. Helmert
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
6
Huang Z, Duan X, Zhu G, Zhang S, Wang R, Wang Z. Assessing the data quality of AdHawk MindLink eye-tracking glasses. Behav Res Methods 2024. PMID: 38168041; DOI: 10.3758/s13428-023-02310-2.
Abstract
Most commercially available eye-tracking devices rely on video cameras and image processing algorithms to track gaze. Nevertheless, emerging technologies are entering the field, making high-speed, cameraless eye tracking more accessible. In this study, a series of tests were conducted to compare the data quality of MEMS-based eye-tracking glasses (AdHawk MindLink) with that of three widely used camera-based eye-tracking devices (EyeLink Portable Duo, Tobii Pro Glasses 2, and SMI Eye Tracking Glasses 2). The data quality measures assessed in these tests included accuracy, precision, data loss, and system latency. The results suggest that, overall, the data quality of the eye-tracking glasses was lower than that of the desktop EyeLink Portable Duo eye tracker. Among the eye-tracking glasses, the accuracy and precision of the MindLink were either higher than or on par with those of the Tobii Pro Glasses 2 and SMI Eye Tracking Glasses 2. The system latency of the MindLink was approximately 9 ms, significantly lower than that of camera-based eye-tracking devices found in VR goggles. These results suggest that the MindLink eye-tracking glasses show promise for research applications where high sampling rates and low latency are preferred.
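Two of the data-quality measures assessed, accuracy and sample-to-sample (RMS-S2S) precision, are commonly computed as below. This is a generic small-angle sketch in which degree offsets are treated Euclideanly, not the authors' pipeline:

```python
import numpy as np

def accuracy_deg(gaze_x, gaze_y, target_x, target_y):
    """Accuracy: mean Euclidean offset of gaze from the fixation target."""
    return np.mean(np.hypot(gaze_x - target_x, gaze_y - target_y))

def rms_s2s(gaze_x, gaze_y):
    """Precision: root-mean-square of sample-to-sample distances."""
    d = np.hypot(np.diff(gaze_x), np.diff(gaze_y))
    return np.sqrt(np.mean(d ** 2))
```

A constant 1-degree horizontal offset yields an accuracy of 1.0 with zero RMS-S2S; jitter alternating ±0.1 degree yields an RMS-S2S of 0.2.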
Affiliation(s)
- Zehao Huang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Xiaoting Duan
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Gancheng Zhu
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Shuai Zhang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Rong Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Zhiguo Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
7
Le Cunff AL, Dommett E, Giampietro V. Neurophysiological measures and correlates of cognitive load in attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia: A scoping review and research recommendations. Eur J Neurosci 2024; 59:256-282. PMID: 38109476; DOI: 10.1111/ejn.16201.
Abstract
Working memory is integral to a range of critical cognitive functions such as reasoning and decision-making. Although alterations in working memory have been observed in neurodivergent populations, there has been no review mapping how cognitive load is measured in common neurodevelopmental conditions such as attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia. This scoping review explores the neurophysiological measures used to study cognitive load in these specific populations. Our findings highlight that electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are the most frequently used methods, with a limited number of studies employing functional near-infrared spectroscopy (fNIRS), magnetoencephalography (MEG) or eye tracking. Notably, eye-related measures are less commonly used, despite their prominence in cognitive load research among neurotypical individuals. The review also highlights potential correlates of cognitive load, such as neural oscillations in the theta and alpha ranges for EEG studies, blood oxygenation level-dependent (BOLD) responses in lateral and medial frontal brain regions for fMRI and fNIRS studies and eye-related measures such as pupil dilation and blink rate. Finally, critical issues for future studies are discussed, including the technical challenges associated with multimodal approaches, the possible impact of atypical features on cognitive load measures and balancing data richness with participant well-being. These insights contribute to a more nuanced understanding of cognitive load measurement in neurodivergent populations and point to important methodological considerations for future neuroscientific research in this area.
Affiliation(s)
- Anne-Laure Le Cunff
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Eleanor Dommett
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Vincent Giampietro
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
8
Guadron L, Titchener SA, Abbott CJ, Ayton LN, van Opstal J, Petoe MA, Goossens J. The Saccade Main Sequence in Patients With Retinitis Pigmentosa and Advanced Age-Related Macular Degeneration. Invest Ophthalmol Vis Sci 2023; 64:1. PMID: 36857076; PMCID: PMC9983702; DOI: 10.1167/iovs.64.3.1.
Abstract
Purpose Most eye-movement studies in patients with visual field defects have examined the strategies that patients use while exploring a visual scene, but they have not investigated saccade kinematics. In healthy vision, saccade trajectories follow the remarkably stereotyped "main sequence": saccade duration increases linearly with saccade amplitude; peak velocity also increases linearly for small amplitudes, but approaches a saturation limit for large amplitudes. Recent theories propose that these relationships reflect the brain's attempt to optimize vision when planning eye movements. Therefore, in patients with bilateral retinal damage, saccadic behavior might differ to optimize vision under the constraints imposed by the visual field defects. Methods We compared the saccadic behavior of patients with central vision loss due to age-related macular degeneration (AMD), and of patients with peripheral vision loss due to retinitis pigmentosa (RP), to that of controls with normal vision (NV) using a horizontal saccade task. Results Both patient groups demonstrated deficits in saccade reaction times and target localization behavior, as well as altered saccade kinematics. Saccades were generally slower, and the shapes of the velocity profiles were often atypical, especially in the patients with RP. In the patients with AMD, the changes were far less dramatic. For both groups, saccade kinematics were affected most when the target was in the subjects' blind field. Conclusions We conclude that defects of the central and peripheral retina have distinct effects on the saccade main sequence, and that visual inputs play an important role in planning the kinematics of a saccade.
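The saturating peak-velocity relationship of the main sequence is often modeled as V = Vmax(1 − e^(−A/C)). A self-contained sketch that fits this form by grid search over C, with a closed-form Vmax at each candidate (the fitting procedure is illustrative; the paper's own fitting method may differ):

```python
import numpy as np

def fit_main_sequence(amplitudes, peak_velocities, c_grid=None):
    """Fit the saturating main-sequence model V = Vmax * (1 - exp(-A / C)).

    For each candidate C, the optimal Vmax is the closed-form
    least-squares slope through the origin in the regressor
    x = 1 - exp(-A / C); the (Vmax, C) pair with the smallest
    squared error wins.
    """
    A = np.asarray(amplitudes, float)
    V = np.asarray(peak_velocities, float)
    if c_grid is None:
        c_grid = np.linspace(1.0, 30.0, 300)
    best_err, best_vmax, best_c = np.inf, None, None
    for c in c_grid:
        x = 1.0 - np.exp(-A / c)
        vmax = np.sum(x * V) / np.sum(x * x)
        err = np.sum((V - vmax * x) ** 2)
        if err < best_err:
            best_err, best_vmax, best_c = err, vmax, c
    return best_vmax, best_c
```

On noiseless data generated with Vmax = 500 deg/s and C = 10 deg, the fit recovers both parameters to within the grid resolution.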
Affiliation(s)
- Leslie Guadron
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Nijmegen, The Netherlands
- Samuel A. Titchener
- Bionics Institute, East Melbourne, Victoria, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Carla J. Abbott
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Victoria, Australia
- Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, Victoria, Australia
- Lauren N. Ayton
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Victoria, Australia
- Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, Victoria, Australia
- Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, Victoria, Australia
- John van Opstal
- Department of Biophysics, Donders Institute for Brain Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Matthew A. Petoe
- Bionics Institute, East Melbourne, Victoria, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Jeroen Goossens
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Nijmegen, The Netherlands
9
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. PMID: 35384605; PMCID: PMC9535040; DOI: 10.3758/s13428-021-01762-8.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
10
Eye movement behavior in a real-world virtual reality task reveals ADHD in children. Sci Rep 2022; 12:20308. [PMID: 36434040] [PMCID: PMC9700686] [DOI: 10.1038/s41598-022-24552-4]
Abstract
Eye movements and other rich data obtained in virtual reality (VR) environments resembling situations where symptoms are manifested could help in the objective detection of various symptoms in clinical conditions. In the present study, 37 children with attention deficit hyperactivity disorder and 36 typically developing controls (9-13 years old) played a lifelike prospective memory game using a head-mounted display with a built-in 90 Hz eye tracker. Eye movement patterns showed prominent group differences, but these were dispersed across the full performance time rather than associated with specific events or stimulus features. A support vector machine classifier trained on the eye movement data showed excellent discrimination ability, with an area under the curve of 0.92, significantly higher than that achieved by task performance measures or by eye movements recorded in a visual search task. We demonstrated that a naturalistic VR task combined with eye tracking allows accurate prediction of attention deficits, paving the way for precision diagnostics.
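The abstract reports discrimination as an area under the ROC curve (AUC). As a hedged illustration (not the study's code), AUROC can be computed directly from classifier scores as the probability that a randomly chosen positive case outranks a randomly chosen negative one:

```python
import numpy as np

def auroc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]  # scores of, e.g., the clinical group
    neg = scores[y_true == 0]  # scores of the control group
    # Count pairs where the positive outranks the negative; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

For example, `auroc([1, 1, 0, 0], [0.9, 0.3, 0.7, 0.1])` returns 0.75, since three of the four positive-negative pairs are ranked correctly; 0.5 corresponds to chance and 1.0 to perfect separation.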
11
Deng CL, Tian CY, Kuai SG. A combination of eye-gaze and head-gaze interactions improves efficiency and user experience in an object positioning task in virtual environments. Appl Ergon 2022; 103:103785. [PMID: 35490546] [DOI: 10.1016/j.apergo.2022.103785]
Abstract
Eye-gaze and head-gaze are two hands-free interaction modes in virtual reality, each with different strengths. Selecting a suitable interaction mode for each scenario is therefore important for efficient interaction in virtual scenes. This study compared movement times in an object positioning task under eye-gaze and head-gaze interaction across various conditions, and identified the zones in which each mode is superior. Based on this information, we designed a combination mode - eye-gaze interaction during the acceleration and deceleration phases and head-gaze interaction during the correction phase - which yielded higher efficiency and subjective satisfaction than either mode alone. This study provides a comprehensive analysis of the characteristics of the eye-gaze and head-gaze interaction modes and offers valuable guidance for selecting appropriate interaction modes in virtual reality applications.
Affiliation(s)
- Cheng-Long Deng
  - Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China
- Chen-Yu Tian
  - Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China
- Shu-Guang Kuai
  - Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China
  - Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, 200031, China
  - NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China
12
Wong WW, Rangaprakash D, Diaz-Fong JP, Rotstein NM, Hellemann GS, Feusner JD. Neural and behavioral effects of modification of visual attention in body dysmorphic disorder. Transl Psychiatry 2022; 12:325. [PMID: 35948537] [PMCID: PMC9365821] [DOI: 10.1038/s41398-022-02099-2]
Abstract
In individuals with body dysmorphic disorder (BDD), perceptual appearance distortions may be related to selective attention biases and aberrant visual scanning, contributing to imbalances in global vs. detailed visual processing. Treatments for the core symptom of perceptual distortions are underexplored in BDD; yet understanding their mechanistic effects on brain function is critical for rational treatment development. This study tested the effects of a behavioral strategy of visual-attention modification on visual system brain connectivity and eye behaviors. We acquired fMRI data in 37 unmedicated adults with BDD and 30 healthy controls. Participants viewed their faces naturalistically (naturalistic viewing) and while holding their gaze on the center of the image (modulated viewing), monitored with an eye-tracking camera. We analyzed dynamic effective connectivity and visual fixation duration. Modulated viewing resulted in longer mean visual fixation duration than naturalistic viewing, across groups. Further, in BDD, modulated viewing resulted in stronger connectivity from occipital to parietal dorsal visual stream regions than the initial naturalistic viewing, an effect also evident during the subsequent naturalistic viewing. Longer fixation duration was associated with a trend toward stronger connectivity during modulated viewing. Those with more severe BDD symptoms had weaker dorsal visual stream connectivity during naturalistic viewing, and those with more negative appearance evaluations had weaker connectivity during modulated viewing. In sum, holding a constant gaze on a non-concerning area of one's face may confer increased communication in the occipital/parietal dorsal visual stream, facilitating global/holistic visual processing. This effect persists during subsequent naturalistic viewing. The results have implications for the design of perceptual retraining treatments.
Affiliation(s)
- Wan-Wa Wong
  - Centre for Addiction and Mental Health, Toronto, ON, Canada
- D Rangaprakash
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Joel P Diaz-Fong
  - Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Natalie M Rotstein
  - Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Gerhard S Hellemann
  - Department of Biostatistics, School of Public Health, University of Alabama at Birmingham, Birmingham, AL, USA
- Jamie D Feusner
  - Centre for Addiction and Mental Health, Toronto, ON, Canada
  - Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
  - Department of Psychiatry, Division of Neurosciences & Clinical Translation, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
  - Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
13
Williams syndrome: reduced orienting to other's eyes in a hypersocial phenotype. J Autism Dev Disord 2022:10.1007/s10803-022-05563-6. [PMID: 35445369] [PMCID: PMC9020553] [DOI: 10.1007/s10803-022-05563-6]
Abstract
Williams syndrome (WS) is a rare genetic condition associated with high sociability, intellectual disability, and social cognitive challenges. Attention to others' eyes is crucial for social understanding. Orienting to, and away from, others' eyes was studied in WS (n = 37, mean age = 23, age range 9-53). The WS group was compared to typically developing comparison participants (n = 167) in stratified age groups from infancy to adulthood. Typically developing children and adults were quicker and more likely to orient to the eyes than to the mouth. This bias was absent in WS. The WS group also had reduced peak saccadic velocities, indicating hypo-arousal. The current study thus indicates reduced orienting to others' eyes in WS, which may affect social interaction skills.
14
Ryerson MS, Long CS, Fichman M, Davidson JH, Scudder KN, Kim M, Katti R, Poon G, Harris MD. Evaluating cyclist biometrics to develop urban transportation safety metrics. Accid Anal Prev 2021; 159:106287. [PMID: 34256314] [DOI: 10.1016/j.aap.2021.106287]
Abstract
The transportation safety paradigm for urban transportation - particularly safety for those walking and cycling - relies on counting crashes to parameterize safety. These objective measures of safety are spatially static and reflect past events; they can be enriched by including the human response to risk across diverse infrastructure designs. This perceived risk has been well captured qualitatively in the transportation safety literature; in the following study, we seek to develop a quantitative methodology that captures perceived risk as a continuous measure of human biometrics. Building on diverse safety-critical fields, we hypothesize that the perception of safety can be measured proactively from traveler biometrics, including eye and head movements, such that high readings of biometric indicators correlate with less safe areas. We collected biometric data from cyclists traversing an urban corridor with a protected, yet not continuous, cycle lane. By isolating and correlating peaks in cyclist biometric measures with infrastructure design, we develop a set of continuous variables - lateral head movements, gaze velocity, and off-mean gaze distance, both independently and as a vector - that allow for the evaluation of urban infrastructure based on perceived risk. The results show that higher biometric readings correspond to less safe (i.e., unprotected) areas, indicating that perceived risk can be measured proactively with biometric data.
Affiliation(s)
- Megan S Ryerson
  - Department of City and Regional Planning, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Electrical and Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Carrie S Long
  - Department of City and Regional Planning, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
- Michael Fichman
  - PennPraxis, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
- Joshua H Davidson
  - Department of City and Regional Planning, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
- Kristen N Scudder
  - Department of City and Regional Planning, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
- Radhika Katti
  - Department of Civil and Environmental Engineering, College of Engineering, Carnegie Mellon University, USA
- George Poon
  - Department of Electrical and Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Matthew D Harris
  - Department of City and Regional Planning, Weitzman School of Design, University of Pennsylvania, Philadelphia, PA, USA
15
Koohi N, Bancroft MJ, Patel J, Castro P, Akram H, Warner TT, Kaski D. Saccadic Bradykinesia in Parkinson's Disease: Preliminary Observations. Mov Disord 2021; 36:1729-1731. [PMID: 33822392] [DOI: 10.1002/mds.28609]
Affiliation(s)
- Nehzat Koohi
  - The Ear Institute, University College London, London, United Kingdom
  - Department of Clinical and Movement Neurosciences, Centre for Vestibular and Behavioural Neuroscience, Institute of Neurology, University College London, London, United Kingdom
  - Department of Neuro-otology, Royal National ENT and Eastman Dental Hospitals UCLH, London, United Kingdom
- Matthew J Bancroft
  - Department of Clinical and Movement Neurosciences, Centre for Vestibular and Behavioural Neuroscience, Institute of Neurology, University College London, London, United Kingdom
- Jay Patel
  - Department of Neuro-otology, Royal National ENT and Eastman Dental Hospitals UCLH, London, United Kingdom
- Patricia Castro
  - Department of Neuro-otology, Royal National ENT and Eastman Dental Hospitals UCLH, London, United Kingdom
- Harry Akram
  - Department of Neuro-otology, Royal National ENT and Eastman Dental Hospitals UCLH, London, United Kingdom
- Thomas T Warner
  - Reta Lila Weston Institute, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, United Kingdom
  - Queen Square Brain Bank for Neurological Disorders, UCL Queen Square Institute of Neurology, London, United Kingdom
- Diego Kaski
  - Department of Clinical and Movement Neurosciences, Centre for Vestibular and Behavioural Neuroscience, Institute of Neurology, University College London, London, United Kingdom
  - Department of Neuro-otology, Royal National ENT and Eastman Dental Hospitals UCLH, London, United Kingdom
16
Swan G, Goldstein RB, Savage SW, Zhang L, Ahmadi A, Bowers AR. Automatic processing of gaze movements to quantify gaze scanning behaviors in a driving simulator. Behav Res Methods 2021; 53:487-506. [PMID: 32748237] [PMCID: PMC7854873] [DOI: 10.3758/s13428-020-01427-y]
Abstract
Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
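The core merging step described above - combining detected lateral saccades into larger gaze scans - can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 0.3-second merge gap is a hypothetical parameter:

```python
def merge_saccades_into_scans(saccades, max_gap_s=0.3):
    """Merge temporally adjacent lateral saccades into gaze scans.

    saccades: list of (start_s, end_s) tuples, sorted by start time.
    Returns a list of (start_s, end_s) gaze scans.
    """
    scans = []
    for start, end in saccades:
        # Extend the current scan if the gap since the previous saccade
        # is small; otherwise open a new scan.
        if scans and start - scans[-1][1] <= max_gap_s:
            scans[-1] = (scans[-1][0], end)
        else:
            scans.append((start, end))
    return scans
```

For example, merging `[(0.0, 0.1), (0.2, 0.3), (1.0, 1.1)]` yields `[(0.0, 0.3), (1.0, 1.1)]`: the first two saccades are close enough in time to form one scan, while the third starts a new one. A full implementation would additionally track eccentricity and direction, as the paper describes.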
Affiliation(s)
- Garrett Swan
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Robert B Goldstein
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Steven W Savage
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Lily Zhang
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
- Aliakbar Ahmadi
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
  - Department of Mechanical Engineering, Technical University of Munich, Munich, Germany
- Alex R Bowers
  - Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA, 02114, USA
17
Feller CN, Goldenberg M, Asselin PD, Merchant-Borna K, Abar B, Jones CMC, Mannix R, Kawata K, Bazarian JJ. Classification of Comprehensive Neuro-Ophthalmologic Measures of Postacute Concussion. JAMA Netw Open 2021; 4:e210599. [PMID: 33656530] [PMCID: PMC7930925] [DOI: 10.1001/jamanetworkopen.2021.0599]
Abstract
IMPORTANCE Symptom-based methods of concussion diagnosis in contact sports result in underdiagnosis and repeated head injury exposure, increasing the risk of long-term disability. Measures of neuro-ophthalmologic (NO) function have the potential to serve as objective aids, but their diagnostic utility is unknown. OBJECTIVE To identify NO measures that accurately differentiate athletes with and without concussion. DESIGN, SETTING, AND PARTICIPANTS This cohort study was conducted among athletes with and without concussion who were aged 17 to 22 years between 2016 and 2017. Eye movements and cognitive function were measured a median of 19 days after injury among patients who had an injury meeting the study definition of concussion while playing a sport (retrospectively selected from a concussion clinic), then compared with a control group of participants without concussion (enrolled from 104 noncontact collegiate athlete volunteers without prior head injury). Data analysis was conducted from November 2019 through May 2020. EXPOSURE Concussion. MAIN OUTCOMES AND MEASURES Classification accuracy of clinically important discriminator eye-tracking (ET) metrics. Participants' eye movements were evaluated with a 12-minute ET procedure, yielding 42 metrics related to smooth pursuit eye movement (SPEM), saccades, dynamic visual acuity, and reaction time. Clinically important discriminator metrics were defined as those with significantly different group differences and area under the receiver operator characteristic curves (AUROCs) of at least 0.70. RESULTS A total of 34 participants with concussions (mean [SD] age, 19.7 [2.4] years; 20 [63%] men) and 54 participants without concussions (mean [SD] age, 20.8 [2.2] years; 31 [57%] men) completed the study. 
Six ET metrics (ie, simple reaction time, discriminate reaction time, discriminate visual reaction speed, choice visual reaction speed, and reaction time on 2 measures of dynamic visual acuity) were found to be clinically important; all were measures of reaction time, and none were related to SPEM. Combined, these 6 metrics had an AUROC of 0.90 (95% CI, 0.80-0.99), a sensitivity of 77.8%, and a specificity of 92.6%. The 6 metrics remained significant on sensitivity testing. CONCLUSIONS AND RELEVANCE In this study, ET measures of slowed visual reaction time had high classification accuracy for concussion. Accurate, objective measures of NO function have the potential to improve concussion recognition and reduce the disability associated with underdiagnosis.
Affiliation(s)
- Christina N. Feller
  - University of Rochester School of Medicine and Dentistry, Rochester, New York
  - Medical College of Wisconsin, Milwaukee
- Patrick D. Asselin
  - University of Rochester School of Medicine and Dentistry, Rochester, New York
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts
- Kian Merchant-Borna
  - Department of Emergency Medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York
- Beau Abar
  - Department of Emergency Medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York
- Courtney Marie Cora Jones
  - Department of Emergency Medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York
- Rebekah Mannix
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts
- Keisuke Kawata
  - Department of Kinesiology, Indiana University, Bloomington
- Jeffrey J. Bazarian
  - Department of Emergency Medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York
18
Abstract
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
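The two established precision measures the paper builds on, RMS-S2S and STD, have standard definitions that can be sketched as follows (an illustrative sketch; the paper's new signal-type and magnitude measures are not reproduced here):

```python
import numpy as np

def rms_s2s(x, y):
    """Root mean square of sample-to-sample displacement.

    Sensitive to high-frequency, sample-to-sample noise.
    """
    step_sq = np.diff(x) ** 2 + np.diff(y) ** 2
    return np.sqrt(np.mean(step_sq))

def std_precision(x, y):
    """Dispersion of gaze samples around their centroid."""
    return np.sqrt(np.var(x) + np.var(y))
```

For an alternating signal such as `x = [0, 1, 0, 1]` (with constant `y`), RMS-S2S is 1.0 while STD is 0.5; a slow drift covering the same positions produces a much lower RMS-S2S. The relation between the two measures is one way to characterize the type of noise in a gaze position signal, which is the question the paper investigates.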
19
Stein N. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. i-Perception 2021; 12:2041669520983338. [PMID: 33628410] [PMCID: PMC7883159] [DOI: 10.1177/2041669520983338]
Abstract
A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and the latencies ranged from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
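One common way to estimate such a delay between two simultaneously recorded signals (e.g., an electrooculography trace and the eye tracker's position stream) is the lag of maximal cross-correlation. The following is a generic sketch under that assumption, not the measurement procedure used in the study:

```python
import numpy as np

def delay_by_xcorr(reference, delayed, fs_hz):
    """Estimate how many seconds `delayed` lags behind `reference`.

    Both signals must be sampled at the same rate fs_hz and
    cover the same recording interval.
    """
    ref = np.asarray(reference, dtype=float)
    sig = np.asarray(delayed, dtype=float)
    ref = ref - ref.mean()  # remove DC offset before correlating
    sig = sig - sig.mean()
    corr = np.correlate(sig, ref, mode="full")
    # In "full" mode, zero lag sits at index len(ref) - 1.
    lag_samples = int(np.argmax(corr)) - (len(ref) - 1)
    return lag_samples / fs_hz
```

For instance, an event at sample 15 in `delayed` versus sample 10 in `reference`, sampled at 1000 Hz, gives an estimated delay of 0.005 s. In practice, per-saccade onset comparison (as used with the electrooculography reference in the study) resolves individual-event latencies rather than a single average lag.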
Affiliation(s)
- Niklas Stein
  - Institute for Psychology, University of Muenster, Muenster, Germany
20
Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020; 12(8):3. [PMID: 33828776] [PMCID: PMC7881893] [DOI: 10.16910/jemr.12.8.3]
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to apply a simple threshold to velocity or acceleration values, estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues and quantify saccade detection performance in both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection - both more true positives and fewer false negatives - especially at higher noise levels. We conclude that robust estimators can be widely adopted in other common automatic gaze classification algorithms due to their ease of implementation.
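The robust threshold at the heart of the method can be sketched like this. It is an illustration of the MAD idea with a hypothetical multiplier; the published algorithm iterates and has additional steps:

```python
import numpy as np

def mad_saccade_threshold(velocity, n_mad=6.0):
    """Velocity threshold from median + scaled MAD instead of mean + SD.

    The 1.4826 factor makes the MAD a consistent estimator of the
    standard deviation for normally distributed noise.
    """
    v = np.asarray(velocity, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    return med + n_mad * 1.4826 * mad
```

Unlike a mean + 6 SD threshold, this estimate barely moves when large saccadic velocities are added to the sample, because the median and MAD ignore the tail: for `[1, 2, 3, 4, 100]` the threshold stays near 12, whereas the mean and standard deviation are dominated by the outlier.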
21
Wadehn F, Weber T, Mack DJ, Heldt T, Loeliger HA. Model-Based Separation, Detection, and Classification of Eye Movements. IEEE Trans Biomed Eng 2020; 67:588-600. [PMID: 31150326] [DOI: 10.1109/tbme.2019.2918986]
Abstract
OBJECTIVE We present a physiologically motivated eye movement analysis framework for model-based separation, detection, and classification (MBSDC) of eye movements. By estimating kinematic and neural controller signals for saccades, smooth pursuit, and fixational eye movements in a mechanistic model of the oculomotor system we are able to separate and analyze these eye movements independently. METHODS We extended an established oculomotor model for horizontal eye movements by neural controller signals and by a blink artifact model. To estimate kinematic (position, velocity, acceleration, forces) and neural controller signals from eye position data, we employ Kalman smoothing and sparse input estimation techniques. The estimated signals are used for detecting saccade start and end points, and for classifying the recording into saccades, smooth pursuit, fixations, post-saccadic oscillations, and blinks. RESULTS On simulated data, the reconstruction error of the velocity profiles is about half the error value obtained by the commonly employed approach of filtering and numerical differentiation. In experiments with smooth pursuit data from human subjects, we observe an accurate signal separation. In addition, in neural recordings from non-human primates, the estimated neural controller signals match the real recordings strikingly well. SIGNIFICANCE The MBSDC framework enables the analysis of multi-type eye movement recordings and provides a physiologically motivated approach to study motor commands and might aid the discovery of new digital biomarkers. CONCLUSION The proposed framework provides a model-based approach for a wide variety of eye movement analysis tasks.
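For context, the baseline the authors compare against - filtering followed by numerical differentiation - can be sketched as follows. This is a generic illustration with an arbitrary smoothing window, not the paper's model-based estimator:

```python
import numpy as np

def velocity_by_differentiation(position_deg, fs_hz, window=5):
    """Estimate eye velocity (deg/s) by moving-average smoothing
    followed by central differences."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(np.asarray(position_deg, dtype=float),
                           kernel, mode="same")
    # np.gradient gives per-sample differences; scale by sampling rate.
    return np.gradient(smoothed) * fs_hz
```

For a noise-free constant-velocity ramp of 10 deg/s sampled at 100 Hz, interior samples come out at 10 deg/s. The estimate degrades near the edges and under measurement noise, which is the weakness that motivates model-based velocity estimation of the kind described in the abstract.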
22
Hypomania and saccadic changes in Parkinson's disease: influence of D2 and D3 dopaminergic signalling. NPJ Parkinsons Dis 2020; 6:5. [PMID: 31970287] [PMCID: PMC6969176] [DOI: 10.1038/s41531-019-0107-3]
Abstract
To understand the influence of two dopaminergic signalling pathways, TaqIA rs1800497 (influencing striatal D2 receptor density) and Ser9Gly rs6280 (influencing striatal D3 dopamine-binding affinity), on saccade generation and psychiatric comorbidities in Parkinson's disease, this study investigated the association of saccadic performance with hypomanic or impulsive behaviour in parkinsonian patients. We also asked whether variants of the D2 (A1+/A1−) and D3 (B1+/B1−) receptor polymorphisms influence saccadic parameters differently, and whether clinical parameters or brain connectivity changes in the nigro-caudatal and nigro-collicular tracts modulate this association. Patients and controls were first compared with regard to saccadic performance; they differed in the parameter duration in memory-guided saccade (MGS) and visually guided saccade (VGS) trials (p < 0.0001) and in the MGS trial (p < 0.03). We found associations between hypomanic behaviour (HPS) and saccade parameters (duration, latency, gain and amplitude) for both conditions [MGS (p = 0.036); VGS (p = 0.033)], but not for impulsive behaviour. For the A1 variant, duration was significantly associated with HPS [VGS (p = 0.024); MGS (p = 0.033)]. In patients with the B1 variant, HPS scores were more consistently associated with duration [VGS (p = 0.005); MGS (p = 0.015)], latency [VGS (p = 0.022)] and amplitude [MGS (p = 0.006); VGS (p = 0.005)]. The mediation analysis revealed a significant indirect effect only for amplitude in the MGS modality for the variable UPDRS-ON (p < 0.05). All other clinical scales and brain connectivity parameters were not associated with behavioural traits. Collectively, our findings stress the role of striatal D2 and D3 signalling mechanisms in saccade generation and suggest that saccadic performance is associated with the clinical psychiatric state in Parkinson's disease.
23
Ratnam K, Konrad R, Lanman D, Zannoli M. Retinal image quality in near-eye pupil-steered systems. Opt Express 2019; 27:38289-38311. [PMID: 31878599] [DOI: 10.1364/oe.27.038289]
Abstract
State-of-the-art near-eye displays often compromise on eye box size to maintain a wide field of view, necessitating a means for steering the eye box to maintain alignment with a moving eye. The design space of such pupil-steered systems is not well defined and the implications of imperfect steering on the perceived image are not well understood. To better characterize the pupil steering design space, we introduce a generalized taxonomy of pupil-steered architectures that considers both system and ocular factors that affect steering performance. We also develop an optical model of a generalized pupil-steered system with a wide-field schematic eye to simulate the retinal image. Using this framework, we systematically characterize retinal image quality for different combinations of design parameters. The results of these simulations provide an overview of the pupil steering design space and help determine relevant psychophysical experiments for further evaluation.
24
Abstract
Recent applications of eye tracking for diagnosis, prognosis and follow-up of therapy in age-related neurological or psychological deficits have been reviewed. The review is focused on active aging, neurodegeneration and cognitive impairments. The potential impacts and current limitations of using characterizing features of eye movements and pupillary responses (oculometrics) as objective biomarkers in the context of aging are discussed. A closer look into the findings, especially with respect to cognitive impairments, suggests that eye tracking is an invaluable technique to study hidden aspects of aging that have not been revealed using any other noninvasive tool. Future research should involve a wider variety of oculometrics, in addition to saccadic metrics and pupillary responses, including nonlinear and combinatorial features as well as blink- and fixation-related metrics to develop biomarkers to trace age-related irregularities associated with cognitive and neural deficits.
Affiliation(s)
- Ramtin Z Marandi
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
- Parisa Gazerani
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
25
Lai HY, Saavedra-Pena G, Sodini CG, Sze V, Heldt T. Measuring Saccade Latency Using Smartphone Cameras. IEEE J Biomed Health Inform 2019; 24:885-897. [PMID: 31056528 DOI: 10.1109/jbhi.2019.2913846] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Accurate quantification of neurodegenerative disease progression is an ongoing challenge that complicates efforts to understand and treat these conditions. Clinical studies have shown that eye movement features may serve as objective biomarkers to support diagnosis and tracking of disease progression. Here, we demonstrate that saccade latency, an eye-movement measure of reaction time, can be measured robustly outside of the clinical environment with a smartphone camera. METHODS To enable tracking of saccade latency in large cohorts of patients and control subjects, we combined a deep convolutional neural network for gaze estimation with a model-based approach for saccade onset determination that provides automated signal-quality quantification and artifact rejection. RESULTS Simultaneous recordings with a smartphone and a high-speed camera resulted in negligible differences in saccade latency distributions. Furthermore, we demonstrated that the constraint of chinrest support can be removed when recording healthy subjects. Repeat smartphone-based measurements of saccade latency in 11 self-reported healthy subjects resulted in an intraclass correlation coefficient of 0.76, showing that our approach has good to excellent test-retest reliability. Additionally, we conducted more than 19 000 saccade latency measurements in 29 self-reported healthy subjects and observed significant intra- and inter-subject variability, which highlights the importance of individualized tracking. Lastly, we showed that with around 65 measurements we can estimate mean saccade latency to a precision of better than 10 ms, which takes under 4 min with our setup. CONCLUSION AND SIGNIFICANCE By enabling repeat measurements of saccade latency and its distribution in individual subjects, our framework opens the possibility of quantifying patient state on a finer timescale in a broader population than previously possible.
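The "around 65 measurements for better-than-10-ms precision" figure is the behavior of the standard error of the mean (SEM = SD/√n); a minimal sketch, assuming an illustrative per-saccade latency SD that is not reported in the abstract:

```python
import math

def measurements_needed(sd_ms: float, target_sem_ms: float) -> int:
    """Smallest n of i.i.d. measurements such that SEM = sd / sqrt(n)
    is at most the target precision."""
    return math.ceil((sd_ms / target_sem_ms) ** 2)

# Assuming (hypothetically) a per-saccade latency SD of ~80 ms, estimating
# the mean latency to better than 10 ms needs:
print(measurements_needed(80.0, 10.0))  # → 64
```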
26
Multi-modal indicators for estimating perceived cognitive load in post-editing of machine translation. Mach Transl 2019. [DOI: 10.1007/s10590-019-09227-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 10/27/2022]
27
Abstract
Traditional video-based eyetrackers require participants to perform an individual calibration procedure, which involves the fixation of multiple points on a screen. However, certain participants (e.g., people with oculomotor and/or visual problems or infants) are unable to perform this task reliably. Previous work has shown that with two cameras one can estimate the orientation of the eyes' optical axis directly. Consequently, only one calibration point is needed to determine the deviation between an eye's optical and visual axes. We developed a stereo eyetracker with two USB 3.0 cameras and two infrared light sources that can track both eyes at ~ 350 Hz for eccentricities of up to 20°. A user interface allows for online monitoring and threshold adjustments of the pupil and corneal reflections. We validated this tracker by collecting eye movement data from nine healthy participants and compared these data to eye movement records obtained simultaneously with an established eyetracking system (EyeLink 1000 Plus). The results demonstrated that the two-dimensional accuracy of our portable system is better than 1°, allowing for at least ± 5-cm head motion. Its resolution is better than 0.2° (SD), and its sample-to-sample noise is less than 0.05° (RMS). We concluded that our stereo eyetracker is a valid instrument, especially in settings in which individual calibration is challenging.
Affiliation(s)
- Annemiek D Barsingerhorn
- Department of Cognitive Neuroscience, Biophysics Section, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre Nijmegen, Nijmegen, The Netherlands.
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- F Nienke Boonstra
- Department of Cognitive Neuroscience, Biophysics Section, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre Nijmegen, Nijmegen, The Netherlands
- Royal Dutch Visio-National Foundation for the Visually Impaired and Blind, Huizen, The Netherlands
- Bartiméus Institute for the Visually Impaired, Zeist, The Netherlands
- Jeroen Goossens
- Department of Cognitive Neuroscience, Biophysics Section, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre Nijmegen, Nijmegen, The Netherlands
28
Wadehn F, Mack DJ, Weber T, Loeliger HA. Estimation of Neural Inputs and Detection of Saccades and Smooth Pursuit Eye Movements by Sparse Bayesian Learning. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:2619-2622. [PMID: 30440945 DOI: 10.1109/embc.2018.8512758] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 11/06/2022]
Abstract
Eye movements reveal a great wealth of information about the visual system and the brain. Therefore, eye movements can serve as diagnostic markers for various neurological disorders. For an objective analysis, it is crucial to have an automatic and robust procedure to extract relevant eye movement parameters. An essential step towards this goal is to detect and separate different types of eye movements such as fixations, saccades and smooth pursuit. We have developed a model-based approach to perform signal detection and separation on eye movement recordings, using source separation techniques from sparse Bayesian learning. The key idea is to model the oculomotor system with a state space model and to perform signal separation in the neural domain by estimating sparse inputs which trigger saccades. The algorithm was evaluated on synthetic data, neural recordings from rhesus monkeys and on manually annotated human eye movement recordings with different smooth pursuit paradigms. The developed approach shows a high noise-robustness, provides saccade and smooth pursuit parameters, as well as estimates of the position, velocity and acceleration profiles. In addition, by estimating the input to the oculomotor system, we obtain an estimate of the neural inputs to the oculomotor muscles.
29
Parametric Covariability in the Standard Model of the Saccadic Main Sequence. Optom Vis Sci 2018; 95:986-1003. [PMID: 30339645 DOI: 10.1097/opx.0000000000001291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/25/2022] Open
Abstract
SIGNIFICANCE Saccades present a direct relationship between the size of the movement (SACSIZE) and its peak velocity (SACPEAK), the main sequence, which is traditionally quantified using the model SACPEAK = Vmax × (1 − e^(−SACSIZE/SAT)). This study shows that Vmax and SAT are not veridical indicators of saccadic dynamics. PURPOSE Alterations in saccadic dynamics are used as a diagnostic tool. Do the 95% reference ranges (RRs) of Vmax and SAT correctly quantify the variability in saccadic dynamics of a population? METHODS Visually driven horizontal and vertical saccades were acquired from 116 normal subjects using the Neuro Kinetics Inc. Concussion Protocol with a 100-Hz I-Portal NOTC Vestibular System, and the main sequence models were computed. RESULTS The 95% RRs of Vmax, the asymptotic peak velocity, and SAT, which sets the speed of the exponential rise toward Vmax, were quite large. The strong correlation found between Vmax and SAT suggests that their variability might be, in part, a computational interaction. In fact, the interplay between the two parameters greatly reduced the actual peak-velocity variability for saccades smaller than 15°. This correlation was not strong enough, however, to support the adoption of a one-parameter model in which Vmax is estimated from SAT using the regression parameters. We also evaluated the effect of interpolating the position data to a simulated acquisition rate of 1 kHz: interpolation had no effect on the population average of Vmax and decreased the average SAT by roughly 8%. CONCLUSIONS The 95% RRs of Vmax and SAT, treated as independent entities, are not a veridical representation of the variability in saccadic dynamics within a population, especially for small saccades. We introduce a novel three-step method, which takes the correlation between Vmax and SAT into account, to determine whether a data set lies inside or outside a reference population.
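The saturating main-sequence model can be sketched directly; the parameter values below are illustrative, not the study's estimates:

```python
import math

def main_sequence_peak_velocity(sacsize_deg: float, vmax: float, sat: float) -> float:
    """Standard main-sequence model: SACPEAK = Vmax * (1 - exp(-SACSIZE / SAT)).
    vmax is the asymptotic peak velocity (deg/s); sat (deg) sets how quickly
    peak velocity saturates with saccade size."""
    return vmax * (1.0 - math.exp(-sacsize_deg / sat))

# Illustrative values: Vmax = 500 deg/s, SAT = 5 deg, for a 10-deg saccade.
print(round(main_sequence_peak_velocity(10.0, 500.0, 5.0), 1))  # → 432.3
```

The abstract's point about covariability follows from this form: a larger Vmax paired with a larger SAT produces nearly the same predicted peak velocities for small saccades, so the two parameters' individual reference ranges overstate the variability of the curve itself.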
30
Halder S, Takano K, Kansaku K. Comparison of Four Control Methods for a Five-Choice Assistive Technology. Front Hum Neurosci 2018; 12:228. [PMID: 29928196 PMCID: PMC5997833 DOI: 10.3389/fnhum.2018.00228] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 02/22/2018] [Accepted: 05/16/2018] [Indexed: 12/13/2022] Open
Abstract
Severe motor impairments can affect the ability to communicate. The ability to see has a decisive influence on the augmentative and alternative communication (AAC) systems available to the user. To better understand the initial impressions users have of AAC systems, we asked naïve healthy participants to compare two visual systems (a visual P300 brain-computer interface (BCI) and an eye-tracker) and two non-visual systems (an auditory and a tactile P300 BCI). Eleven healthy participants performed 20 selections in a five-choice task with each system. The visual P300 BCI used face stimuli, the auditory P300 BCI used Japanese Hiragana syllables, and the tactile P300 BCI used stimulators on the small left finger, middle left finger, right thumb, middle right finger and small right finger. The eye-tracker required a dwell time of 3 s on the target for selection. We calculated accuracies and information-transfer rates (ITRs) for each control method using the selection time that yielded the highest ITR and an accuracy above 70% for each system. Accuracies of 88% were achieved with the visual P300 BCI (4.8 s selection time, 20.9 bits/min), 70% with the auditory BCI (19.9 s, 3.3 bits/min), 71% with the tactile BCI (18 s, 3.4 bits/min) and 100% with the eye-tracker (5.1 s, 28.2 bits/min). Performance on the eye-tracker and the visual BCI correlated strongly; the correlation between tactile and auditory BCI performance was lower. Our data showed no advantage for either non-visual system in terms of ITR, but the lower correlation of performance suggests that matching the system to a particular user is more important for non-visual systems than for visual systems.
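The bits/min figures are consistent with the widely used Wolpaw definition of ITR. This is an assumption: the abstract does not name the formula, and its numbers were presumably computed from unrounded accuracies and times, so the sketch below only approximates them:

```python
import math

def itr_bits_per_selection(n_choices: int, accuracy: float) -> float:
    """Wolpaw ITR per selection (bits), assuming equiprobable targets and
    errors distributed uniformly over the remaining choices."""
    if accuracy >= 1.0:
        return math.log2(n_choices)
    p = accuracy
    return (math.log2(n_choices)
            + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n_choices - 1)))

def itr_bits_per_minute(n_choices: int, accuracy: float, selection_time_s: float) -> float:
    return itr_bits_per_selection(n_choices, accuracy) * 60.0 / selection_time_s

# Eye-tracker condition from the abstract: 5 choices, 100% accuracy, 5.1 s/selection.
print(round(itr_bits_per_minute(5, 1.0, 5.1), 1))  # → 27.3 (abstract reports 28.2)
```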
Affiliation(s)
- Sebastian Halder
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Department of Molecular Medicine, University of Oslo, Oslo, Norway
- Kouji Takano
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Kenji Kansaku
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Brain Science Inspired Life Support Research Center, The University of Electro-Communications, Tokyo, Japan
- Department of Physiology and Biological Information, Dokkyo Medical University School of Medicine, Tochigi, Japan
31
A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Sci Rep 2017; 7:17726. [PMID: 29255207 PMCID: PMC5735175 DOI: 10.1038/s41598-017-17983-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Received: 09/18/2017] [Accepted: 12/04/2017] [Indexed: 11/15/2022] Open
Abstract
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time-series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. A full open source implementation is included.
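The article's O(n) approximately optimal segmentation is not reproduced here; as a rough stand-in under stated assumptions (greedy top-down splitting, a hypothetical error threshold), the sketch below shows how piecewise-linear fitting can simultaneously denoise a gaze trace and propose event boundaries:

```python
import numpy as np

def segment_piecewise_linear(t, y, max_err):
    """Greedy top-down split: recursively bisect each span at its largest
    residual from a straight-line least-squares fit until every span fits
    within max_err. Returns contiguous (start, end, slope) index triples;
    slopes can then be thresholded to label fixations vs. movements."""
    segments = []

    def split(i, j):
        coef = np.polyfit(t[i:j], y[i:j], 1)
        resid = np.abs(y[i:j] - np.polyval(coef, t[i:j]))
        if resid.max() <= max_err or j - i <= 3:
            segments.append((i, j, coef[0]))
            return
        k = i + int(np.argmax(resid))
        k = min(max(k, i + 2), j - 2)  # keep both halves non-degenerate
        split(i, k)
        split(k, j)

    split(0, len(t))
    return segments

# Synthetic trace: a fixation followed by a linear ramp (pursuit-like).
t = np.linspace(0.0, 1.0, 101)
y = np.where(t < 0.5, 0.0, (t - 0.5) * 10.0)
print(len(segment_piecewise_linear(t, y, 0.05)))  # several contiguous segments
```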
32
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades. J Vis 2017; 17:10. [PMID: 28813566 PMCID: PMC5852949 DOI: 10.1167/17.9.10] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Indexed: 11/24/2022] Open
Abstract
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
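For reference, the conventional SG filter that the article generalizes amounts to a sliding least-squares polynomial fit; a minimal NumPy sketch (the generalized two-component filter itself is not reproduced here):

```python
import numpy as np

def savgol(y, window, polyorder, deriv=0, dt=1.0):
    """Conventional Savitzky-Golay filter: fit a polynomial of the given order
    to each centered window by least squares, then take the fitted value
    (deriv=0) or first derivative (deriv=1) at the window center.
    Edge samples, where the window does not fit, are returned as NaN."""
    assert deriv in (0, 1) and window % 2 == 1
    half = window // 2
    t = (np.arange(window) - half) * dt
    A = np.vander(t, polyorder + 1, increasing=True)  # columns: 1, t, t^2, ...
    coeffs = np.linalg.pinv(A)  # least-squares polynomial coefficients
    out = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        c = coeffs @ y[i - half:i + half + 1]
        out[i] = c[deriv]  # value (c0) or slope (c1) at the center, t = 0
    return out

# Differentiating a position trace gives a velocity estimate; abrupt saccadic
# peaks are attenuated by this smoothing, which is the bias the article's
# generalized filter is designed to correct.
pos = np.concatenate([np.zeros(20), np.linspace(0.0, 10.0, 10), np.full(20, 10.0)])
vel = savgol(pos, 9, 3, deriv=1)
```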
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA