1
Ouchi T, Scholl LR, Rajeswaran P, Canfield RA, Smith LI, Orsborn AL. Mapping eye, arm, and reward information in frontal motor cortices using electrocorticography in non-human primates. bioRxiv [Preprint] 2024:2024.08.13.607846. PMID: 39185198; PMCID: PMC11343120; DOI: 10.1101/2024.08.13.607846.
Abstract
Goal-directed reaches give rise to dynamic neural activity across the brain as we move our eyes and arms, and process outcomes. High spatiotemporal resolution mapping of multiple cortical areas will improve our understanding of how these neural computations are spatially and temporally distributed across the brain. In this study, we used micro-electrocorticography (μECoG) recordings in two male monkeys performing visually guided reaches to map information related to eye movements, arm movements, and receiving rewards over a 1.37 cm² area of frontal motor cortices (primary motor cortex, premotor cortex, frontal eye field, and dorsolateral prefrontal cortex). Time-frequency and decoding analyses revealed that eye and arm movement information shifts across brain regions during a reach, likely reflecting shifts from planning to execution. We then used phase-based analyses to reveal potential overlaps of eye and arm information. We found that arm movement decoding performance was impacted by task-irrelevant eye movements, consistent with the presence of intermixed eye and arm information across much of the motor cortices. Phase-based analyses also identified reward-related activity primarily around the principal sulcus in the prefrontal cortex as well as near the arcuate sulcus in the premotor cortex. Our results demonstrate μECoG's strengths for functional mapping and provide further detail on the spatial distribution of eye, arm, and reward information processing distributed across frontal cortices during reaching. These insights advance our understanding of the overlapping neural computations underlying coordinated movements and reveal opportunities to leverage these signals to enhance future brain-computer interfaces.
Affiliation(s)
- Tomohiro Ouchi
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Leo R Scholl
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Ryan A Canfield
- University of Washington, Bioengineering, Seattle, 98115, USA
- Lydia I Smith
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Amy L Orsborn
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- University of Washington, Bioengineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
2
Taore A, Tiang M, Dakin SC. (The limits of) eye-tracking with iPads. J Vis 2024; 24:1. PMID: 38953861; PMCID: PMC11223623; DOI: 10.1167/jov.24.7.1.
Abstract
Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4c infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than those from the Tobii (mean absolute error of 3.2° ± 2.0° compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts were moderately correlated across devices (r = 0.40-0.73, all p < 0.05). For tasks eliciting saccades >8°, we observed moderate correlations in estimated saccade speed and amplitude (r = 0.40-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of smooth pursuit speed estimated by the iPad, and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to examine eye-tracking data carefully to remove artifacts and outliers.
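As a rough illustration of how angular errors like the 3.2° figure above are computed, the sketch below converts on-screen gaze offsets to degrees of visual angle and averages them. All function names, the pixel size, and the viewing distance are our own illustrative assumptions, not values from the study.

```python
import numpy as np

def px_to_deg(offset_px, px_size_cm, viewing_dist_cm):
    """Convert an on-screen offset in pixels to visual angle in degrees."""
    return np.degrees(np.arctan2(offset_px * px_size_cm, viewing_dist_cm))

def gaze_mae_deg(est_xy_px, true_xy_px, px_size_cm=0.0124, viewing_dist_cm=50.0):
    """Mean absolute gaze error in degrees of visual angle.

    est_xy_px, true_xy_px: (n, 2) arrays of estimated and true gaze
    positions in pixels.  Pixel size and viewing distance are example
    defaults, not the study's setup.
    """
    err_px = np.linalg.norm(np.asarray(est_xy_px, float) - np.asarray(true_xy_px, float), axis=1)
    return float(np.mean(px_to_deg(err_px, px_size_cm, viewing_dist_cm)))
```

Given the screen geometry, the same conversion also applies to fixation-stability and saccade-amplitude measures.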
Affiliation(s)
- Aryaman Taore
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Michelle Tiang
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Steven C Dakin
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
3
Mark JA, Curtin A, Kraft AE, Ziegler MD, Ayaz H. Mental workload assessment by monitoring brain, heart, and eye with six biomedical modalities during six cognitive tasks. Frontiers in Neuroergonomics 2024; 5:1345507. PMID: 38533517; PMCID: PMC10963413; DOI: 10.3389/fnrgo.2024.1345507.
Abstract
Introduction
The efficiency and safety of complex, high-precision human-machine systems, such as those in aerospace and robotic surgery, are closely related to the cognitive readiness, workload-management ability, and situational awareness of their operators. Accurate assessment of mental workload could help prevent operator error and allow for pertinent intervention by predicting performance declines that can arise from either work overload or understimulation. Neuroergonomic approaches based on measures of human body and brain activity can collectively provide sensitive and reliable assessment of human mental workload in complex training and work environments.
Methods
In this study, we developed a new six-cognitive-domain task protocol, coupling it with six biomedical monitoring modalities to concurrently capture performance and cognitive workload correlates across a longitudinal multi-day investigation. Utilizing two distinct modalities for each aspect of cardiac activity (ECG and PPG), ocular activity (EOG and eye-tracking), and brain activity (EEG and fNIRS), 23 participants engaged in four sessions over 4 weeks, performing tasks associated with working memory, vigilance, risk assessment, shifting attention, situation awareness, and inhibitory control.
Results
The results revealed varying levels of sensitivity to workload within each modality. While certain measures exhibited consistency across tasks, the neuroimaging modalities in particular unveiled meaningful differences between task conditions and cognitive domains.
Discussion
This is the first comprehensive comparison of these six brain-body measures across multiple days and cognitive domains. The findings underscore the potential of wearable brain and body sensing methods for evaluating mental workload. Such comprehensive neuroergonomic assessment can inform the development of next-generation neuroadaptive interfaces and training approaches for more efficient human-machine interaction and operator skill acquisition.
Affiliation(s)
- Jesse A. Mark
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States
- Adrian Curtin
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States
- Amanda E. Kraft
- Advanced Technology Laboratories, Lockheed Martin, Arlington, VA, United States
- Matthias D. Ziegler
- Advanced Technology Laboratories, Lockheed Martin, Arlington, VA, United States
- Hasan Ayaz
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, United States
- Department of Psychological and Brain Sciences, College of Arts and Sciences, Drexel University, Philadelphia, PA, United States
- Drexel Solutions Institute, Drexel University, Philadelphia, PA, United States
- A. J. Drexel Autism Institute, Drexel University, Philadelphia, PA, United States
- Department of Family and Community Health, University of Pennsylvania, Philadelphia, PA, United States
- Center for Injury Research and Prevention, Children's Hospital of Philadelphia, Philadelphia, PA, United States
4
Mai X, Sheng X, Shu X, Ding Y, Zhu X, Meng J. A Calibration-Free Hybrid Approach Combining SSVEP and EOG for Continuous Control. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3480-3491. PMID: 37610901; DOI: 10.1109/tnsre.2023.3307814.
Abstract
While SSVEP-based BCIs have been widely developed to control external devices, most rely on a discrete control strategy. A continuous SSVEP-BCI enables users to continuously deliver commands and receive real-time feedback from the devices, but it suffers from the transition-state problem, a period of erroneous recognition when users shift their gaze between targets. To resolve this issue, we proposed a novel calibration-free Bayesian approach that hybridizes SSVEP and electrooculography (EOG). First, canonical correlation analysis (CCA) was applied to detect the evoked SSVEPs, and saccades during gaze shifts were detected from EOG data using an adaptive threshold method. Then, the new target after the gaze shift was recognized using a Bayesian optimization approach, which combined the SSVEP and saccade detections and calculated an optimized probability distribution over the targets. Eighteen healthy subjects participated in the offline and online experiments. The offline experiments showed that the proposed hybrid BCI had significantly higher overall continuous accuracy and shorter gaze-shifting time than FBCCA, CCA, MEC, and PSDA. In online experiments, the proposed hybrid BCI significantly outperformed the CCA-based SSVEP-BCI in terms of continuous accuracy (77.61 ± 1.36% vs. 68.86 ± 1.08%) and gaze-shifting time (0.93 ± 0.06 s vs. 1.94 ± 0.08 s). Additionally, participants perceived a significant improvement over the CCA-based SSVEP-BCI when the newly proposed decoding approach was used. These results validated the efficacy of the proposed hybrid Bayesian approach for continuous BCI control without any calibration. This study provides an effective framework for combining SSVEP and EOG, and promotes the potential applications of plug-and-play BCIs in continuous control.
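The CCA step described above can be sketched roughly as follows: score the EEG against sin/cos references at each candidate stimulation frequency and pick the frequency with the largest canonical correlation. This is a minimal illustration of standard CCA-based SSVEP scoring, not the authors' code; all names and parameters are our assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))   # orthonormal basis of centered X
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    # canonical correlations are the singular values of Qx^T Qy
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_reference(freq, fs, n_samples, n_harmonics=2):
    """Sin/cos reference set at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_ssvep(eeg, candidate_freqs, fs):
    """Return the candidate frequency whose references best match the EEG.

    eeg: (n_samples, n_channels) array.
    """
    scores = [max_canonical_corr(eeg, cca_reference(f, fs, eeg.shape[0]))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

In the paper's hybrid scheme, a score like this would then be combined with the EOG saccade detector inside the Bayesian target-probability update.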
5
Dupre AE, Cronin MFM, Schmugge S, Tate S, Wack A, Prescott BR, Li C, Auerbach S, Suchdev K, Al-Faraj A, He W, Cervantes-Arslanian AM, Abdennadher M, Saxena A, Lehan W, Russo M, Pugsley B, Greer D, Shin M, Ong CJ. A machine learning eye movement detection algorithm using electrooculography. Sleep 2022; 46:zsac254. PMID: 36255119; DOI: 10.1093/sleep/zsac254.
Abstract
Study Objectives
Eye movement quantification in polysomnograms (PSG) is difficult and resource intensive. Automated eye movement detection would enable further study of eye movement patterns in normal and abnormal sleep, which could be clinically diagnostic of neurologic disorders, or used to monitor potential treatments. We trained a long short-term memory (LSTM) algorithm that can identify eye movement occurrence with high sensitivity and specificity.
Methods
We conducted a retrospective, single-center study using one-hour PSG samples from 47 patients 18-90 years of age. Team members manually labeled eye movements and trained an LSTM algorithm to detect eye movement presence, direction, and speed. We performed a 5-fold cross-validation and implemented a "fuzzy" evaluation method to account for misclassification in the 1 second preceding and following gold-standard, manually labeled eye movements. We assessed G-means, discrimination, sensitivity, and specificity.
Results
Overall, eye movements occurred in 9.4% of the analyzed EOG recording time from 47 patients. Eye movements were present 3.2% of N2 (lighter stages of sleep) time, 2.9% of N3 (deep sleep), and 19.8% of REM sleep. Our LSTM model had average sensitivity of 0.88 and specificity of 0.89 in 5-fold cross validation, which improved to 0.93 and 0.92 respectively using the fuzzy evaluation scheme.
Conclusion
An automated algorithm can detect eye movements from EOG with excellent sensitivity and specificity. Noninvasive, automated eye movement detection has several potential clinical implications in improving sleep study stage classification and establishing normal eye movement distributions in healthy and unhealthy sleep, and in patients with and without brain injury.
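The "fuzzy" evaluation idea, counting a per-sample detection as correct if it falls within a tolerance window of a labeled event, can be sketched as below. This is illustrative only; the study's exact window handling and metrics may differ, and all names are ours.

```python
import numpy as np

def fuzzy_sensitivity_specificity(y_true, y_pred, tol):
    """Sensitivity/specificity for per-sample binary labels, where a
    prediction within `tol` samples of a true positive (and vice versa)
    counts as a match."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)

    def dilate(y, k):
        # mark samples within k samples of any positive sample
        out = y.copy()
        for shift in range(1, k + 1):
            out[:-shift] |= y[shift:]
            out[shift:] |= y[:-shift]
        return out

    true_d = dilate(y_true, tol)
    pred_d = dilate(y_pred, tol)
    tp = np.sum(y_true & pred_d)      # true positives matched within tolerance
    fn = np.sum(y_true & ~pred_d)     # labeled positives with no nearby prediction
    tn = np.sum(~true_d & ~y_pred)    # negatives correctly left unmarked
    fp = np.sum(~true_d & y_pred)     # predictions far from any labeled positive
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return float(sens), float(spec)
```

With a 1-second tolerance at a given sampling rate, `tol` would be set to that many samples, which is how a strict per-sample score can rise (as here, from 0.88/0.89 to 0.93/0.92) under fuzzy matching.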
Affiliation(s)
- Alicia E Dupre
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Michael F M Cronin
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Stephen Schmugge
- Department of Computer Science, University of North Carolina, Charlotte, NC, 28223, USA
- Samuel Tate
- Department of Computer Science, University of North Carolina, Charlotte, NC, 28223, USA
- Audrey Wack
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Brenton R Prescott
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Cheyi Li
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Sanford Auerbach
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Kushak Suchdev
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Abrar Al-Faraj
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Wei He
- Department of Pulmonology and Critical Care Medicine, Tufts Medical Center, Boston, MA, 02111, USA
- Anna M Cervantes-Arslanian
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Myriam Abdennadher
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Aneeta Saxena
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Walter Lehan
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Mary Russo
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Brian Pugsley
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- David Greer
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
- Min Shin
- Department of Computer Science, University of North Carolina, Charlotte, NC, 28223, USA
- Charlene J Ong
- Department of Neurology, Boston Medical Center, Boston, MA, 02118, USA
- Department of Neurology, Boston University School of Medicine, Boston, MA, 02118, USA
6
Hecimovich M, King D, Murphy M, Koyama K. An investigation into the measurement properties of the King-Devick Eye Tracking system. Journal of Concussion 2022. DOI: 10.1177/20597002221082865.
Abstract
Objectives
Eye tracking has gained increasing attention as a possible assessment and monitoring tool for concussion. The King-Devick test (K-DT) was expanded to include an infrared video-oculography-based eye tracker (K-D ET). The aim was therefore to provide evidence on the reliability of the K-D ET system under an exercise condition.
Methods
Participants (N = 61; 26 male, 35 female; age range 19-25) were allocated to an exercise or a sedentary group. Both groups completed a baseline K-D ET measurement and then either two 10-min exercise or sedentary interventions, with repeated K-D ET measurements between interventions.
Results
Test-retest reliability of the K-D ET ranged from good to excellent across the variables measured. The mean ± SD of the differences in the total number of saccades was 1.04 ± 4.01, with an observable difference by trial number (p = 0.005). There were no observable differences in average saccade velocity by intervention (p = 0.768), gender (p = 0.121), or trial (p = 0.777). The mean ± SD of the difference in total fixations before and after intervention across both trials was 1.04 ± 3.63, with an observable difference by trial number (p = 0.025). The mean ± SD of the differences in the inter-saccadic interval and the fixation polyarea before and after intervention across both trials were 1.86 ± 22.99 ms and 0.51 ± 59.11 mm², with no observable differences by intervention, gender, or trial.
Conclusion
The results provide evidence on the reliability of the K-D ET and its eye-tracking components, and demonstrate the relationship between completion time and the other variables the system measures. This is vital because use of the K-DT may be increasing, and packaging the K-DT and eye tracking as a single unit highlights the need to measure the reliability of the combined system specifically.
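The "mean ± SD of the differences" reported for repeated measurements is essentially a Bland-Altman style computation, which can be sketched as below. This is illustrative, not the authors' analysis code; the 1.96 multiplier for 95% limits of agreement is a conventional choice, not stated in the abstract.

```python
import numpy as np

def retest_difference_stats(trial1, trial2):
    """Mean and SD of trial2 - trial1, plus 95% limits of agreement
    (Bland-Altman style) for a repeated measurement."""
    d = np.asarray(trial2, float) - np.asarray(trial1, float)
    mean, sd = float(d.mean()), float(d.std(ddof=1))
    return mean, sd, (mean - 1.96 * sd, mean + 1.96 * sd)
```

Applied to, say, total saccade counts per trial, this yields numbers directly comparable to the 1.04 ± 4.01 figure above.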
Affiliation(s)
- M. Hecimovich
- Division of Athletic Training, University of Northern Iowa, Cedar Falls, Iowa, USA
- D. King
- Sports Performance Research Institute New Zealand (SPRINZ) at AUT Millennium, Faculty of Health and Environmental Science, Auckland University of Technology, Auckland, New Zealand
- Traumatic Brain Injury Network (TBIN), Auckland University of Technology, Auckland, New Zealand
- Department of Science and Technology, University of New England, Sydney, Australia
- M. Murphy
- School of Medical and Health Sciences, Edith Cowan University, Joondalup, Western Australia, Australia
- SportsMed Subiaco, St John of God Health Care, Subiaco, Western Australia, Australia
- K. Koyama
- Department of Rehabilitation Medicine, Gunma University Graduate School of Medicine
7
Parisot K, Zozor S, Guérin-Dugué A, Phlypo R, Chauvin A. Micro-pursuit: A class of fixational eye movements correlating with smooth, predictable, small-scale target trajectories. J Vis 2021; 21:9. PMID: 33444434; PMCID: PMC7838552; DOI: 10.1167/jov.21.1.9.
Abstract
Humans generate ocular pursuit movements when tracking a moving target throughout the visual field. In this article, we show that pursuit can be generated and measured at small amplitudes, at the scale of fixational eye movements, and we tag these eye movements as micro-pursuits. During micro-pursuits, gaze direction correlates with the target's smooth, predictable trajectory. We measure similarity between gaze and target trajectories using a so-called maximally projected correlation and provide results in three experimental data sets. A first observation of micro-pursuits is provided in an implicit pursuit task, where observers were asked to keep their gaze fixed on a static cross at the center of the screen while reporting changes in the perception of an ambiguous, moving (Necker) cube. We then provide two further experimental paradigms and their corresponding data sets: one replicating micro-pursuits in an explicit pursuit task, where observers had to follow a moving fixation cross (Cross), and one with an unambiguous square (Square). Individual and group analyses provide evidence that micro-pursuits exist in both the Necker and Cross experiments but not in the Square experiment. The interexperiment analysis suggests that the manipulation of target motion, task, and/or the nature of the stimulus may play a role in the generation of micro-pursuits.
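One hedged reading of a "maximally projected correlation" is the largest Pearson correlation obtained between gaze and target traces projected onto the same direction, scanned over directions in the plane. The sketch below implements that reading; the published definition may differ in detail, and all names and parameters are our assumptions.

```python
import numpy as np

def maximally_projected_correlation(gaze_xy, target_xy, n_dirs=180):
    """Scan projection directions and return the largest Pearson
    correlation between the projected gaze and target trajectories.

    gaze_xy, target_xy: (n, 2) arrays of 2-D positions over time.
    """
    gaze = np.asarray(gaze_xy, float)
    target = np.asarray(target_xy, float)
    best = -1.0
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        u = np.array([np.cos(theta), np.sin(theta)])  # unit projection direction
        g, t = gaze @ u, target @ u
        if g.std() == 0 or t.std() == 0:
            continue  # degenerate projection, correlation undefined
        best = max(best, float(np.corrcoef(g, t)[0, 1]))
    return best
```

For a gaze trace that smoothly follows a scaled, offset copy of the target, this statistic approaches 1, while uncorrelated fixational noise yields much lower values.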
Affiliation(s)
- Kevin Parisot
- CNRS, Institute of Engineering, GIPSA-lab & LPNC, University of Grenoble Alpes, Grenoble, France, https://scholar.google.fr/citations?user=WjGkMmIAAAAJ&hl=fr&oi=ao
- Steeve Zozor
- CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France, http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=86
- Anne Guérin-Dugué
- CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France, http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=71
- Ronald Phlypo
- CNRS, Institute of Engineering, GIPSA-lab, University of Grenoble Alpes, Grenoble, France, http://www.gipsa-lab.grenoble-inp.fr/page_pro.php?vid=2173
- Alan Chauvin
- CNRS, LPNC, University of Grenoble Alpes, Grenoble, France, https://lpnc.univ-grenoble-alpes.fr/Alan-Chauvin
8
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. PMID: 34125160; PMCID: PMC8212426; DOI: 10.1167/jov.21.6.8.
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
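A generic velocity-thresholding stage of the kind described, run after denoising, can be sketched as below. This is illustrative only: it uses simple moving-average smoothing rather than the paper's optimization-based, piecewise-quadratic denoising, and the threshold and duration defaults are our own assumptions.

```python
import numpy as np

def detect_saccades(position_deg, fs, vel_threshold=30.0, min_dur_s=0.01):
    """Velocity-threshold saccade detector for a 1-D gaze trace.

    position_deg: gaze position in degrees of visual angle.
    fs: sampling rate in Hz.
    Returns (onset, offset) sample-index pairs (offset exclusive) where
    smoothed eye speed exceeds vel_threshold (deg/s) for at least min_dur_s.
    """
    pos = np.asarray(position_deg, float)
    vel = np.abs(np.gradient(pos)) * fs                    # speed in deg/s
    vel = np.convolve(vel, np.ones(3) / 3.0, mode="same")  # light smoothing
    mask = (vel > vel_threshold).astype(int)
    # locate rising/falling edges of the supra-threshold runs
    d = np.diff(np.r_[0, mask, 0])
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)
    min_len = max(1, int(round(min_dur_s * fs)))
    return [(int(s), int(e)) for s, e in zip(starts, ends) if e - s >= min_len]
```

A fixed threshold like this is exactly what fails for very slow saccades, which is the motivation for the paper's model-based denoising before thresholding.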
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
9
Sipatchin A, Wahl S, Rifai K. Eye-Tracking for Clinical Ophthalmology with Virtual Reality (VR): A Case Study of the HTC Vive Pro Eye's Usability. Healthcare (Basel) 2021; 9:180. PMID: 33572072; PMCID: PMC7914806; DOI: 10.3390/healthcare9020180.
Abstract
BACKGROUND
A case study is proposed to empirically test and discuss the status-quo eye-tracking hardware capabilities and limitations of an off-the-shelf virtual reality (VR) headset with embedded eye-tracking for at-home, ready-to-go online usability in ophthalmology applications.
METHODS
The status-quo eye-tracking data quality of the HTC Vive Pro Eye is investigated with novel testing specific to objective online VR perimetry. Testing was done across a wide visual field of the head-mounted display's (HMD) screen and in two different movement conditions. A new automatic, low-cost Raspberry Pi system is introduced for VR temporal precision testing, assessing the usability of the HTC Vive Pro Eye as an online assistance tool for visual loss.
RESULTS
Target position on the screen and head movement revealed limitations of the eye-tracker's capabilities as a perimetry assessment tool. Temporal precision testing showed a system latency of 58.1 milliseconds (ms), evidencing good potential for use as a ready-to-go online assistance tool for visual loss.
CONCLUSIONS
The test of eye-tracking data quality provides a novel analysis useful for testing upcoming VR headsets with embedded eye-tracking, and opens discussion on expanding the future introduction of these HMDs into patients' homes for low-vision clinical usability.
Affiliation(s)
- Alexandra Sipatchin
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
10
Belkhiria C, Peysakhovich V. Electro-Encephalography and Electro-Oculography in Aeronautics: A Review Over the Last Decade (2010-2020). Frontiers in Neuroergonomics 2020; 1:606719. PMID: 38234309; PMCID: PMC10790927; DOI: 10.3389/fnrgo.2020.606719.
Abstract
Electro-encephalography (EEG) and electro-oculography (EOG) are methods of electrophysiological monitoring with potentially fruitful applications in neuroscience, clinical exploration, the aeronautical industry, and other sectors. These methods are often the most straightforward way of evaluating brain oscillations and eye movements, as they use standard laboratory or mobile techniques. This review describes the potential of EEG and EOG systems and the application of these methods in aeronautics. For example, EEG and EOG signals can be used to design brain-computer interfaces (BCI) and to interpret brain activity, such as monitoring the mental state of a pilot to determine their workload. The main objectives of this review are to (i) offer an in-depth review of the literature on the basics of EEG and EOG and their application in aeronautics; (ii) explore the methodology and trends of research in combined EEG-EOG studies over the last decade; and (iii) provide methodological guidelines for beginners and experts when applying these methods in environments outside the laboratory, with a particular focus on human factors and aeronautics. The study used databases from scientific, clinical, and neural engineering fields. The review first introduces the characteristics and the application of both EEG and EOG in aeronautics, undertaking a large review of relevant literature, from early to more recent studies. We then built a novel taxonomy model that includes 150 combined EEG-EOG papers published in peer-reviewed scientific journals and conferences from January 2010 to March 2020. Several data elements were reviewed for each study (e.g., pre-processing, extracted features, and performance metrics), which were then examined to uncover trends in aeronautics and summarize interesting methods from this important body of literature. Finally, the review considers the advantages and limitations of these methods as well as future challenges.
11
Grillini A, Renken RJ, Vrijling ACL, Heutink J, Cornelissen FW. Eye Movement Evaluation in Multiple Sclerosis and Parkinson's Disease Using a Standardized Oculomotor and Neuro-Ophthalmic Disorder Assessment (SONDA). Front Neurol 2020; 11:971. PMID: 33013643; PMCID: PMC7506055; DOI: 10.3389/fneur.2020.00971.
Abstract
Evaluating the state of a patient's oculomotor system is one of the fundamental tests in neuro-ophthalmology. However, to date, very few quantitative standardized tests of eye-movement quality exist, limiting this assessment to confrontational tests reliant on subjective interpretation. Furthermore, quantitative tests relying on eye movement properties, such as pursuit gain and saccade dynamics, are often insufficient to capture the complexity of the underlying disorders and are often (too) long and tiring. In this study, we present SONDA (Standardized Oculomotor and Neurological Disorder Assessment): this test is based on analyzing eye-tracking data recorded during a short and intuitive continuous tracking task. We tested patients affected by Multiple Sclerosis (MS) and Parkinson's Disease (PD) and find that: (1) the saccadic dynamics of the main sequence alone are not sufficient to separate patients from healthy controls; (2) the combination of spatio-temporal and statistical properties of saccades and saccadic dynamics enables identification of oculomotor abnormalities in both MS and PD patients. We conclude that SONDA constitutes a powerful screening tool that allows an in-depth evaluation of (deviant) oculomotor behavior in a few minutes of non-invasive testing.
Affiliation(s)
- Alessandro Grillini
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Remco J Renken
- Department of Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Anne C L Vrijling
- Royal Dutch Visio, Center of Expertise for Blind and Partially Sighted People, Huizen, Netherlands
- Joost Heutink
- Royal Dutch Visio, Center of Expertise for Blind and Partially Sighted People, Huizen, Netherlands
- Department of Clinical and Developmental Neuropsychology, University of Groningen, Groningen, Netherlands
- Frans W Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
12
A Multimodal Analysis Combining Behavioral Experiments and Survey-Based Methods to Assess the Cognitive Effect of Video Game Playing: Good or Evil? SENSORS 2020; 20:s20113219. [PMID: 32517096 PMCID: PMC7308934 DOI: 10.3390/s20113219] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Revised: 06/01/2020] [Accepted: 06/03/2020] [Indexed: 11/16/2022]
Abstract
This study aims to bridge the gap between the discrepant views of existing studies in different modalities on the cognitive effect of video game play. To this end, we conducted a set of tests in different modalities with each participant: (1) Self-Report Analyses (SRA) consisting of five popular self-report surveys, and (2) a standard Behavioral Experiment (BE) using pro- and antisaccade paradigms, and analyzed how their results vary between Video Game Player (VGP) and Non-Video Game Player (NVGP) groups. Our results showed that (1) VGPs scored significantly lower on the Behavioral Inhibition System (BIS) scale than NVGPs (p = 0.023), and (2) VGPs showed a significantly higher antisaccade error rate than NVGPs (p = 0.005), suggesting that the results of both SRA and BE support the existing view that video game play has a detrimental impact on cognition by increasing impulsivity. However, a follow-up correlation analysis of the results across individual participants found no significant correlation between SRA and BE, indicating the complex nature of the cognitive effect of video game play.
13
Abstract
Eye movements are an important index of the neural functions of visual information processing, decision making, visuomotor coordination, sports performance, and so forth. However, the available optical tracking methods are impractical in many situations, such as the wearing of eyeglasses or the presence of ophthalmic disorders, and this can be overcome by accurate recording of eye movements by electrooculography (EOG). In this study we recorded eye movements by EOG simultaneously with high-density electroencephalogram (EEG) recording using a 128-channel EGI electrode net at a 500-Hz sampling rate, including appropriate facial electrodes. The participants made eye movements over a calibration target consisting of a 5×5 grid of stimulus targets. The results showed that the EOG methodology allowed accurate analysis of the amplitude and direction of the fixation locations and saccadic dynamics with a temporal resolution of 500 Hz, under both cued and uncued analysis regimes. Blink responses could be identified separately and were shown to have a more complex source derivation than has previously been recognized. The results also showed that the EOG signals recorded through the EEG net can achieve results as accurate as typical optical eye-tracking devices, and also allow for simultaneous assessment of neural activity during all types of eye movements. Moreover, the EOG method effectively avoids the technical difficulties related to eye-tracker positioning and the synchronization between EEG and eye movements. We showed that simultaneous EOG/EEG recording is a convenient means of measuring eye movements, with an accuracy comparable to that of many specialized eye-tracking systems.
14
Vinuela-Navarro V, Erichsen JT, Williams C, Woodhouse JM. Quantitative Characterization of Smooth Pursuit Eye Movements in School-Age Children Using a Child-Friendly Setup. Transl Vis Sci Technol 2019; 8:8. [PMID: 31588373 PMCID: PMC6753964 DOI: 10.1167/tvst.8.5.8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 07/05/2019] [Indexed: 11/24/2022] Open
Abstract
PURPOSE It could be argued that current studies investigating smooth pursuit development in children do not provide an optimal measure of smooth pursuit characteristics, given that a significant number have failed to adjust their setup and procedures to the child population. This study aimed to characterize smooth pursuit in children using child-friendly stimuli and procedures. METHODS Eye movements were recorded in 169 children (4-11 years) and 10 adults, while a customized, animated stimulus was presented moving horizontally and vertically at 6°/s and 12°/s. Eye movement recordings from 43 children with delayed reading, two with nystagmus, two with strabismus, and two with unsuccessful calibration were excluded from the analysis. Velocity gain, proportion of smooth pursuit, and the number and amplitude of saccades during smooth pursuit were calculated for the remaining participants. Median and quartiles were calculated for each age group and pursuit condition. ANOVA was used to investigate the effect of age on smooth pursuit parameters. RESULTS Differences across ages were found in velocity gain (6°/s P < 0.01; 12°/s P < 0.05), as well as the number (12°/s P < 0.05) and amplitude of saccades (12°/s P < 0.05), for horizontal smooth pursuit. Post hoc tests showed that these parameters were different between children aged 7 or younger and adults. No significant differences were found across ages in any smooth pursuit parameter for the vertical direction (P > 0.05). CONCLUSIONS Using child-friendly methods, children over the age of 7 to 8 years demonstrated adultlike smooth pursuit. TRANSLATIONAL RELEVANCE Child-friendly procedures are critical for appropriately characterizing smooth pursuit eye movements in children.
Affiliation(s)
- Cathy Williams
- Population Health Sciences, Bristol Medical School, Bristol University, Bristol, UK
15
Abstract
Eye tracking is a useful tool when studying the oscillatory eye movements associated with nystagmus. However, this oscillatory nature of nystagmus is problematic during calibration since it introduces uncertainty about where the person is actually looking. This renders comparisons between separate recordings unreliable. Still, the influence of the calibration protocol on eye movement data from people with nystagmus has not been thoroughly investigated. In this work, we propose a calibration method using Procrustes analysis in combination with an outlier correction algorithm, which is based on a model of the calibration data and on the geometry of the experimental setup. The proposed method is compared to previously used calibration polynomials in terms of accuracy, calibration plane distortion and waveform robustness. Six recordings of calibration data, validation data and optokinetic nystagmus data from people with nystagmus and seven recordings from a control group were included in the study. Fixation errors during the recording of calibration data from the healthy participants were introduced, simulating fixation errors caused by the oscillatory movements found in nystagmus data. The outlier correction algorithm improved the accuracy for all tested calibration methods. The accuracy and calibration plane distortion performance of the Procrustes analysis calibration method were similar to the top performing mapping functions for the simulated fixation errors. The performance in terms of waveform robustness was superior for the Procrustes analysis calibration compared to the other calibration methods. The overall performance of the Procrustes calibration methods was best for the datasets containing errors during the calibration.
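The Procrustes step described above can be sketched with a least-squares similarity transform. This is a minimal illustration in pure NumPy, not the paper's implementation (which adds a model-based outlier correction); the grid, rotation, scale, and offset below are invented test values:

```python
import numpy as np

def procrustes_calibrate(raw_xy, target_xy):
    """Fit a least-squares similarity transform (rotation R, scale s,
    translation t) mapping raw gaze samples onto calibration targets."""
    X = np.asarray(raw_xy, float)
    Y = np.asarray(target_xy, float)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - mx, Y - my
    U, S, Vt = np.linalg.svd(X0.T @ Y0)
    R = U @ Vt                       # optimal rotation (reflections not excluded)
    s = S.sum() / (X0 ** 2).sum()    # optimal isotropic scale
    t = my - s * mx @ R
    return lambda P: s * np.asarray(P, float) @ R + t

# Hypothetical 3x3 calibration grid, distorted by rotation, scale, and offset
targets = np.array([[i, j] for i in (-10, 0, 10) for j in (-10, 0, 10)], float)
th = np.deg2rad(12.0)
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
raw = 0.8 * targets @ A + np.array([35.0, -12.0])
calibrate = procrustes_calibrate(raw, targets)
```

Because Procrustes fits a single global transform rather than a point-wise polynomial, it is less distorted by individual fixation errors, which is the property the abstract highlights for nystagmus recordings.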
16
Favre-Félix A, Graversen C, Hietkamp RK, Dau T, Lunner T. Improving Speech Intelligibility by Hearing Aid Eye-Gaze Steering: Conditions With Head Fixated in a Multitalker Environment. Trends Hear 2018. [PMCID: PMC6291882 DOI: 10.1177/2331216518814388] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The behavior of a person during a conversation typically involves both auditory and visual attention. Visual attention implies that the person directs his or her eye gaze toward the sound target of interest, and hence, detection of the gaze may provide a steering signal for future hearing aids. The steering could utilize a beamformer or the selection of a specific audio stream from a set of remote microphones. Previous studies have shown that eye gaze can be measured through electrooculography (EOG). To explore the precision and real-time feasibility of the methodology, seven hearing-impaired persons were tested, seated with their head fixed in front of three targets positioned at −30°, 0°, and +30° azimuth. Each target presented speech from the Danish DAT material, which was available for direct input to the hearing aid using head-related transfer functions. Speech intelligibility was measured in three conditions: a reference condition without any steering, a condition where eye gaze was estimated from EOG measures to select the desired audio stream, and an ideal condition with steering based on an eye-tracking camera. The “EOG-steering” improved the sentence correct score compared with the “no-steering” condition, although the performance was still significantly lower than the ideal condition with the eye-tracking camera. In conclusion, eye-gaze steering increases speech intelligibility, although real-time EOG-steering still requires improvements of the signal processing before it is feasible for implementation in a hearing aid.
Affiliation(s)
- Antoine Favre-Félix
- Eriksholm Research Centre, Snekkersten, Denmark
- Hearing Systems Group, Department of Electrical Engineering, Danish Technical University, Lyngby, Denmark
- Torsten Dau
- Hearing Systems Group, Department of Electrical Engineering, Danish Technical University, Lyngby, Denmark
- Thomas Lunner
- Eriksholm Research Centre, Snekkersten, Denmark
- Hearing Systems Group, Department of Electrical Engineering, Danish Technical University, Lyngby, Denmark
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Sweden
17
1D CNN with BLSTM for automated classification of fixations, saccades, and smooth pursuits. Behav Res Methods 2018; 51:556-572. [DOI: 10.3758/s13428-018-1144-2] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
18
Hládek Ľ, Porr B, Brimijoin WO. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography. PLoS One 2018; 13:e0190420. [PMID: 29304120 PMCID: PMC5755791 DOI: 10.1371/journal.pone.0190420] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 12/14/2017] [Indexed: 12/04/2022] Open
Abstract
This manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, the algorithm calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy; however, its performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of their microphones in the direction of the user's eye gaze.
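The core idea, estimating absolute angle by integrating detected saccades rather than trusting the drifting raw baseline, can be sketched as follows. This is a simplified toy version (simple velocity threshold, synthetic drift and saccade), not the authors' statistical algorithm:

```python
import numpy as np

def gaze_from_saccades(eog_deg, fs, vel_thresh=50.0):
    """Estimate absolute gaze angle by accumulating displacement only during
    detected saccades, so slow electrode drift does not corrupt the estimate."""
    v = np.gradient(eog_deg) * fs                       # deg/s
    saccadic = np.abs(v) > vel_thresh
    steps = np.where(saccadic, np.diff(eog_deg, prepend=eog_deg[0]), 0.0)
    return np.cumsum(steps)

# Hypothetical 2 s trace at 500 Hz: 0.5 deg/s drift plus one 10-degree saccade
fs = 500
n = 2 * fs
drift = 0.5 * np.arange(n) / fs
saccade = np.clip(np.arange(n) - 500, 0, 10).astype(float)  # 20 ms, 10 deg
eog = drift + saccade
gaze = gaze_from_saccades(eog, fs)
```

On this toy trace the raw EOG ends about 1 degree too high because of drift, while the saccade-integrated estimate stays near the true 10-degree position.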
Affiliation(s)
- Ľuboš Hládek
- Medical Research Council/Chief Scientist Office Institute of Hearing Research - Scottish Section, Glasgow, United Kingdom
- Bernd Porr
- School of Engineering, University of Glasgow, Glasgow, United Kingdom
- W. Owen Brimijoin
- Medical Research Council/Chief Scientist Office Institute of Hearing Research - Scottish Section, Glasgow, United Kingdom
19
Abstract
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye-gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e., accuracy < 0.6°, precision < 0.25°, latency < 50 ms, and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as a subset of basic and clinical research settings.
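The accuracy and precision figures quoted above are standard eye-tracker quality metrics. One common way to compute them (a generic sketch, not the Toolkit's procedure; the function name and sample data are invented) is mean offset from the fixation target for accuracy and RMS sample-to-sample displacement for precision:

```python
import numpy as np

def accuracy_precision(gaze_xy, target_xy):
    """Accuracy = mean Euclidean offset from the fixation target (deg);
    precision = RMS of sample-to-sample displacement (deg)."""
    g = np.asarray(gaze_xy, float)
    offsets = np.linalg.norm(g - np.asarray(target_xy, float), axis=1)
    diffs = np.diff(g, axis=0)
    return offsets.mean(), np.sqrt((diffs ** 2).sum(axis=1).mean())

# Ten samples with a constant (0.3, 0.4) degree offset from the target
gaze = np.tile([1.3, 1.4], (10, 1))
acc, prec = accuracy_precision(gaze, [1.0, 1.0])
```

On this noiseless example the accuracy is exactly the 0.5-degree offset and the precision is zero; real recordings trade the two off against each other.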
20
21
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades. J Vis 2017; 17:10. [PMID: 28813566 PMCID: PMC5852949 DOI: 10.1167/17.9.10] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
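The conventional SG filter that the paper generalizes is a least-squares polynomial fit in a sliding window; the generalized filter itself is not reproduced here. A minimal pure-NumPy sketch of the conventional filter (function names invented; `scipy.signal.savgol_filter` provides an equivalent, production-ready implementation):

```python
import math
import numpy as np

def savgol_coeffs(window, polyorder, deriv=0, delta=1.0):
    """Least-squares Savitzky-Golay coefficients evaluated at the window center."""
    half = window // 2
    x = np.arange(-half, half + 1, dtype=float)
    A = np.vander(x, polyorder + 1, increasing=True)  # columns: x^0 .. x^p
    # Row `deriv` of the pseudo-inverse gives the fitted polynomial's
    # deriv-th coefficient; rescale it to a physical derivative.
    return np.linalg.pinv(A)[deriv] * math.factorial(deriv) / delta ** deriv

def savgol_smooth(y, window, polyorder, deriv=0, delta=1.0):
    """Apply the SG filter by correlation (kernel reversed for np.convolve)."""
    c = savgol_coeffs(window, polyorder, deriv, delta)
    return np.convolve(y, c[::-1], mode="same")

# A quadratic trace is reproduced exactly away from the zero-padded edges,
# and its first derivative (the "velocity") is recovered exactly as well.
t = np.arange(100) * 0.01
y = t ** 2
vel = savgol_smooth(y, 11, 3, deriv=1, delta=0.01)
```

The oversmoothing the paper describes arises because a low-order polynomial cannot follow the near-discontinuous velocity peak of a small saccade within the window, so the fitted peak is systematically flattened.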
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
22
Vinuela-Navarro V, Erichsen JT, Williams C, Woodhouse JM. Saccades and fixations in children with delayed reading skills. Ophthalmic Physiol Opt 2017; 37:531-541. [DOI: 10.1111/opo.12392] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 04/24/2017] [Indexed: 11/29/2022]
Affiliation(s)
- Cathy Williams
- School of Social and Community Medicine, University of Bristol, Bristol, UK
23
Abstract
PURPOSE This study presents a two-degree customized animated stimulus developed to evaluate smooth pursuit in children and investigates the effect of its predetermined characteristics (stimulus type and size) in an adult population. Then, the animated stimulus is used to evaluate the impact of different pursuit motion paradigms in children. METHODS To study the effect of animating a stimulus, eye movement recordings were obtained from 20 young adults while the customized animated stimulus and a standard dot stimulus were presented moving horizontally at a constant velocity. To study the effect of using a larger stimulus size, eye movement recordings were obtained from 10 young adults while presenting a standard dot stimulus of different size (1° and 2°) moving horizontally at a constant velocity. Finally, eye movement recordings were obtained from 12 children while the 2° customized animated stimulus was presented after three different smooth pursuit motion paradigms. Performance parameters, including gains and number of saccades, were calculated for each stimulus condition. RESULTS The animated stimulus produced in young adults significantly higher velocity gain (mean: 0.93; 95% CI: 0.90-0.96; P = .014), position gain (0.93; 0.85-1; P = .025), proportion of smooth pursuit (0.94; 0.91-0.96, P = .002), and fewer saccades (5.30; 3.64-6.96, P = .008) than a standard dot (velocity gain: 0.87; 0.82-0.92; position gain: 0.82; 0.72-0.92; proportion smooth pursuit: 0.87; 0.83-0.90; number of saccades: 7.75; 5.30-10.46). In contrast, changing the size of a standard dot stimulus from 1° to 2° did not have an effect on smooth pursuit in young adults (P > .05). Finally, smooth pursuit performance did not significantly differ in children for the different motion paradigms when using the animated stimulus (P > .05). CONCLUSIONS Attention-grabbing and more dynamic stimuli, such as the developed animated stimulus, might potentially be useful for eye movement research. 
Finally, with such stimuli, children perform equally well irrespective of the motion paradigm used.
24
Leube A, Rifai K. Sampling rate influences saccade detection in mobile eye tracking of a reading task. J Eye Mov Res 2017; 10. [PMID: 33828659 PMCID: PMC7141092 DOI: 10.16910/jemr.10.3.3] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The purpose of this study was to compare saccade detection characteristics in two mobile eye trackers with different sampling rates during a natural task. Gaze data from 11 participants were recorded with one 60 Hz and one 120 Hz mobile eye tracker and compared directly to the saccades detected by a 1000 Hz stationary tracker while a reading task was performed. Saccades and fixations were detected using a velocity-based algorithm and their properties analyzed. Results showed no significant difference in the number of detected fixations, but mean fixation durations differed between the 60 Hz mobile and the stationary eye tracker. The 120 Hz mobile eye tracker showed a significantly higher saccade detection rate and an improved estimate of mean saccade duration compared with the 60 Hz eye tracker. In conclusion, for the detection and analysis of fast eye movements such as saccades, a 120 Hz mobile eye tracker is preferable.
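The sampling-rate effect can be illustrated with a simple velocity-threshold (I-VT) detector. This toy example (invented function and synthetic data, not the paper's algorithm or trackers) shows two closely spaced saccades merging into one detection when the trace is downsampled to roughly 60 Hz:

```python
import numpy as np

def count_saccades(pos_deg, fs, vel_thresh=30.0):
    """Velocity-threshold (I-VT) detection: count suprathreshold episodes."""
    v = np.abs(np.diff(pos_deg)) * fs
    mask = v > vel_thresh
    # Number of rising edges of the boolean mask = number of episodes
    return int(np.sum(np.diff(mask.astype(int)) == 1) + mask[0])

# 1 s at 1000 Hz: three 5-degree saccades, two of them only 30 ms apart
fs_hi = 1000
pos = np.zeros(fs_hi)
for onset in (300, 330, 700):
    pos[onset:] += np.minimum(np.arange(fs_hi - onset) / 10.0, 1.0) * 5.0
n_hi = count_saccades(pos, fs_hi)
n_lo = count_saccades(pos[::16], fs_hi / 16)  # ~62 Hz downsampled trace
```

At 1000 Hz all three saccades are resolved; at ~62 Hz the short inter-saccadic fixation falls between samples and the first two saccades are counted as one, mirroring the detection-rate difference the study reports.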
Affiliation(s)
- Alexander Leube
- Institute for Ophthalmic Research, University of Tuebingen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tuebingen, Germany
25
Alvarez-Estevez D, van Velzen I, Ottolini-Capellen T, Kemp B. Derivation and modeling of two new features for the characterization of rapid and slow eye movements in electrooculographic sleep recordings. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2017.02.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
26
Abstract
Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning, pattern recognition, computer vision, computational neurology, molecular biology, information retrieval, etc., many new methods have been developed to cope with the ever increasing amount and complexity of the data. These new methods offer interesting possibilities for processing, classifying and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations, with a single, simple as well as intuitive argument, described as coherence of spacetime, is discussed, and the hierarchical ordering of the fixations into dwells is shown. The method, namely identification by topological characteristics (ITop), is parameter-free and needs no pre-processing and post-processing of the raw data. The general and robust topological argument is easy to expand into complex settings of higher visual tasks, making it possible to identify visual strategies.
Affiliation(s)
- Oliver Hein
- Neurological University Clinic Hamburg UKE, Germany
27
Malekshahi R, Seth A, Papanikolaou A, Mathews Z, Birbaumer N, Verschure PFMJ, Caria A. Differential neural mechanisms for early and late prediction error detection. Sci Rep 2016; 6:24350. [PMID: 27079423 PMCID: PMC4832139 DOI: 10.1038/srep24350] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 03/24/2016] [Indexed: 01/01/2023] Open
Abstract
Emerging evidence indicates that prediction, instantiated at different perceptual levels, facilitates visual processing and enables prompt and appropriate reactions. Until now, the mechanisms underlying the effect of predictive coding at different stages of visual processing have remained unclear. Here, we aimed to investigate early and late processing of spatial prediction violation by performing combined recordings of saccadic eye movements and fast event-related fMRI during a continuous visual detection task. Psychophysical reverse correlation analysis revealed that the degree of mismatch between current perceptual input and prior expectations is mainly processed at the late rather than the early stage, which is instead responsible for fast but general prediction error detection. Furthermore, our results suggest that conscious late detection of deviant stimuli is elicited by the assessment of the prediction error's extent more than by the prediction error per se. Functional MRI and functional connectivity analyses indicated that interactions between higher-level brain systems modulate conscious detection of prediction error through top-down processes for the analysis of its representational content, and possibly regulate subsequent adaptation of predictive models. Overall, our experimental paradigm allowed us to dissect explicit from implicit behavioral and neural responses to deviant stimuli in terms of their reliance on predictive models.
Affiliation(s)
- Rahim Malekshahi
- Institut für Medizinische Psychologie und Verhaltensneurobiologie, Universität Tübingen, Tübingen, Germany; Graduate Training Centre of Neuroscience, International Max Planck Research School, Tübingen, Germany
- Anil Seth
- Sackler Centre for Consciousness Science and School of Informatics, University of Sussex, Brighton, UK
- Amalia Papanikolaou
- Graduate Training Centre of Neuroscience, International Max Planck Research School, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Niels Birbaumer
- Institut für Medizinische Psychologie und Verhaltensneurobiologie, Universität Tübingen, Tübingen, Germany
- Andrea Caria
- Institut für Medizinische Psychologie und Verhaltensneurobiologie, Universität Tübingen, Tübingen, Germany; Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
28
Singh T, Perry CM, Herter TM. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment. J Neuroeng Rehabil 2016; 13:10. [PMID: 26812907 PMCID: PMC4728792 DOI: 10.1186/s12984-015-0107-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Accepted: 12/08/2015] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
Affiliation(s)
- Tarkeshwar Singh
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA
- Christopher M Perry
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA
- Troy M Herter
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA
29
Ranjbaran M, Smith HLH, Galiana HL. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification. IEEE Trans Biomed Eng 2015; 63:850-8. [PMID: 26357393 DOI: 10.1109/tbme.2015.2477038] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system's behavior and the diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment the data into its subsystem responses (here, slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performs much better than previous methods, in terms of higher specificity.
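The K-means initialization step can be sketched as a two-cluster assignment on velocity magnitude. This is a toy version in pure NumPy (invented function name and synthetic velocities); the paper's contribution is the subsequent refinement with system identification and prediction-error statistics, which is not reproduced here:

```python
import numpy as np

def kmeans_slow_fast(velocity, n_iter=50):
    """Initial slow/fast-phase labeling via two-cluster K-means on |velocity|."""
    x = np.abs(np.asarray(velocity, float))
    centroids = np.array([x.min(), x.max()])  # seed with the extremes
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centroids[k] = x[labels == k].mean()
    return labels  # 0 = slow-phase sample, 1 = fast-phase (saccadic) sample

# Synthetic nystagmus velocities: slow phases near 10 deg/s, fast near 200
rng = np.random.default_rng(1)
vel = np.concatenate([rng.normal(10, 1, 200), rng.normal(200, 10, 50)])
labels = kmeans_slow_fast(vel)
```

Such clustering alone mislabels samples near phase transitions, which is exactly why the authors refine the initial segmentation with model-based prediction errors.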
30
Phasic activation of individual neurons in the locus ceruleus/subceruleus complex of monkeys reflects rewarded decisions to go but not stop. J Neurosci 2015; 34:13656-69. [PMID: 25297093 DOI: 10.1523/jneurosci.2566-14.2014] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Neurons in the brainstem nucleus locus ceruleus (LC) often exhibit phasic activation in the context of simple sensory-motor tasks. The functional role of this activation, which leads to the release of norepinephrine throughout the brain, is not yet understood in part because the conditions under which it occurs remain in question. Early studies focused on the relationship of LC phasic activation to salient sensory events, whereas more recent work has emphasized its timing relative to goal-directed behavioral responses, possibly representing the end of a sensory-motor decision process. To better understand the relationship between LC phasic activation and sensory, motor, and decision processing, we recorded spiking activity of neurons in the LC+ (LC and the adjacent, norepinephrine-containing subceruleus nucleus) of monkeys performing a countermanding task. The task required the monkeys to occasionally withhold planned, saccadic eye movements to a visual target. We found that many well isolated LC+ units responded to both the onset of the visual cue instructing the monkey to initiate the saccade and again after saccade onset, even when it was initiated erroneously in the presence of a stop signal. Many of these neurons did not respond to saccades made outside of the task context. In contrast, neither the appearance of the stop signal nor the successful withholding of the saccade elicited an LC+ response. Therefore, LC+ phasic activation encodes sensory and motor events related to decisions to execute, but not withhold, movements, implying a functional role in goal-directed actions, but not necessarily more covert forms of processing.
Collapse
|
31
|
Bolte B, Lappe M. Subliminal Reorientation and Repositioning in Immersive Virtual Environments using Saccadic Suppression. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:545-552. [PMID: 26357105 DOI: 10.1109/tvcg.2015.2391851] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Virtual reality strives to provide a user with an experience of a simulated world that feels as natural as the real world. Yet, to induce this feeling, sometimes it becomes necessary for technical reasons to deviate from a one-to-one correspondence between the real and the virtual world, and to reorient or reposition the user's viewpoint. Ideally, users should not notice the change of the viewpoint to avoid breaks in perceptual continuity. Saccades, the fast eye movements that we make in order to switch gaze from one object to another, produce a visual discontinuity on the retina, but this is not perceived because the visual system suppresses perception during saccades. As a consequence, our perception fails to detect rotations of the visual scene during saccades. We investigated whether saccadic suppression of image displacement (SSID) can be used in an immersive virtual environment (VE) to unconsciously rotate and translate the observer's viewpoint. To do this, the scene changes have to be precisely time-locked to the saccade onset. We used electrooculography (EOG) for eye movement tracking and assessed the performance of two modified eye movement classification algorithms for the challenging task of online saccade detection that is fast enough for SSID. We investigated the sensitivity of participants to translations (forward/backward) and rotations (in the transverse plane) during trans-saccadic scene changes. We found that participants were unable to detect translations of approximately ±0.5 m along the line of gaze and rotations of ±5° in the transverse plane during saccades with an amplitude of 15°. If the user stands still, our approach exploiting SSID thus provides the means to unconsciously change the user's virtual position and/or orientation. For future research and applications, exploiting SSID has the potential to improve existing redirected walking and change blindness techniques for unlimited navigation through arbitrarily sized VEs by real walking.
Collapse
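The SSID application above hinges on online saccade detection fast enough to trigger a scene change during the saccade itself. The authors' modified classification algorithms are not reproduced here; the sketch below only illustrates the general idea of a streaming velocity-threshold detector, and the function name, the 100 deg/s threshold, and the three-sample confirmation count are illustrative assumptions, not the paper's parameters.

```python
def online_saccade_onset(sample_deg, state, fs=1000.0, vel_thresh=100.0, n_confirm=3):
    """Feed one gaze sample (in degrees); return True once a saccade onset
    is confirmed by n_confirm consecutive suprathreshold velocity samples.

    `state` is a dict carrying the previous sample and a run counter
    across calls, so the detector can run sample-by-sample in real time.
    """
    prev = state.get("prev")
    state["prev"] = sample_deg
    if prev is None:  # first sample: no velocity estimate yet
        state["run"] = 0
        return False
    velocity = abs(sample_deg - prev) * fs  # deg/s from a finite difference
    state["run"] = state.get("run", 0) + 1 if velocity > vel_thresh else 0
    return state["run"] >= n_confirm
```

Requiring several consecutive suprathreshold samples trades a few samples of latency for robustness against single-sample EOG noise spikes, which matters when a false trigger would move the scene mid-fixation.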
|
32
|
Pander T, Czabański R, Przybyła T, Pojda-Wilczek D. An automatic saccadic eye movement detection in an optokinetic nystagmus signal. Biomed Tech (Berl) 2014; 59:529-43. [DOI: 10.1515/bmt-2013-0137] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2013] [Accepted: 08/08/2014] [Indexed: 11/15/2022]
Abstract
A saccade is one of the characteristic types of eye movements. The accurate detection and location of saccades in the signal representing the movement activity of the eyes are essential in medical applications. The main purpose of this paper is to present a new, robust approach to the detection of saccadic eye movements. The procedure is based on a so-called detection function, which results from myriad filtering of the electronystagmographic (ENG) signal, a nonlinear operation, and fuzzy median clustering. Smooth peaks of the detection function waveform correspond to the locations of saccades in the ENG signal. The fuzzy median clustering-based method allows the amplitude threshold of the detection function to be calculated, which improves the accuracy of saccade recognition. Both of these robust methods provide a two-step protection against outliers. The proposed algorithm was tested using artificial as well as real optokinetic nystagmus signals under different noise conditions. The results show the usefulness of the procedure when the precise detection and location of saccades are necessary.
Collapse
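The myriad-filter and fuzzy-median-clustering pipeline above is not reproduced here; as a much simpler stand-in that shares the paper's outlier-resistance motif, a rectified velocity trace can be thresholded at a robust level derived from the median and MAD of the data itself. All names and the 5×MAD factor are illustrative assumptions.

```python
import numpy as np

def detect_saccades(position, fs):
    """Locate saccade intervals in an eye-position trace via a robust
    velocity threshold (median + 5 * scaled MAD), which resists being
    inflated by the very outliers (saccades) it is meant to find.
    Returns a list of (start, end) sample-index pairs."""
    vel = np.abs(np.gradient(position)) * fs          # deg/s, rectified velocity
    mad = np.median(np.abs(vel - np.median(vel)))     # robust spread estimate
    thresh = np.median(vel) + 5.0 * 1.4826 * mad      # 1.4826 scales MAD to sigma
    above = vel > thresh
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:                                      # trace starts mid-saccade
        edges = np.r_[0, edges]
    if above[-1]:                                     # trace ends mid-saccade
        edges = np.r_[edges, len(above) - 1]
    return list(zip(edges[::2], edges[1::2]))
```

Because both the median and the MAD ignore the small fraction of suprathreshold saccade samples, the threshold tracks the fixation-noise floor rather than the saccade peaks, loosely mirroring the two-step outlier protection the abstract describes.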
|
33
|
Adaptation of visual tracking synchronization after one night of sleep deprivation. Exp Brain Res 2013; 232:121-31. [DOI: 10.1007/s00221-013-3725-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2013] [Accepted: 09/25/2013] [Indexed: 10/26/2022]
|
34
|
Detection of Saccades and Postsaccadic Oscillations in the Presence of Smooth Pursuit. IEEE Trans Biomed Eng 2013; 60:2484-93. [DOI: 10.1109/tbme.2013.2258918] [Citation(s) in RCA: 74] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
35
|
Wass SV, Smith TJ, Johnson MH. Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults. Behav Res Methods 2013; 45:229-50. [PMID: 22956360 PMCID: PMC3578727 DOI: 10.3758/s13428-012-0245-6] [Citation(s) in RCA: 103] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Researchers studying infants' spontaneous allocation of attention have traditionally relied on hand-coding infants' direction of gaze from videos; these techniques have low temporal and spatial resolution and are labor intensive. Eye-tracking technology potentially allows for much more precise measurement of how attention is allocated at the subsecond scale, but a number of technical and methodological issues have given rise to caution about the quality and reliability of high temporal resolution data obtained from infants. We present analyses suggesting that when standard dispersal-based fixation detection algorithms are used to parse eye-tracking data obtained from infants, the results appear to be heavily influenced by interindividual variations in data quality. We discuss the causes of these artifacts, including fragmentary fixations arising from flickery or unreliable contact with the eyetracker and variable degrees of imprecision in reported position of gaze. We also present new algorithms designed to cope with these problems by including a number of new post hoc verification checks to identify and eliminate fixations that may be artifactual. We assess the results of our algorithms by testing their reliability using a variety of methods and on several data sets. We contend that, with appropriate data analysis methods, fixation duration can be a reliable and stable measure in infants. We conclude by discussing ways in which studying fixation durations during unconstrained orienting may offer insights into the relationship between attention and learning in naturalistic settings.
Collapse
Affiliation(s)
- S V Wass
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, WC1E 7HX, UK.
Collapse
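The "standard dispersal-based fixation detection algorithms" the abstract refers to are commonly implemented as the dispersion-threshold (I-DT) parser. A minimal sketch is given below; the dispersion and duration parameters are illustrative assumptions, and this version deliberately omits the post hoc verification checks that Wass et al. add to cope with fragmentary fixations.

```python
import numpy as np

def idt_fixations(x, y, fs, max_disp=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation parser: grow a window while
    (max - min in x) + (max - min in y) stays under max_disp, and keep
    windows lasting at least min_dur seconds. x, y are NumPy arrays of
    gaze coordinates (e.g. degrees); returns (start, end) sample pairs."""
    def dispersion_ok(s, e):
        return (x[s:e].max() - x[s:e].min()) + (y[s:e].max() - y[s:e].min()) <= max_disp
    min_len = int(min_dur * fs)
    n = len(x)
    fixations, start = [], 0
    while start + min_len <= n:
        end = start + min_len
        if dispersion_ok(start, end):
            while end < n and dispersion_ok(start, end + 1):
                end += 1                      # extend the fixation window
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1                        # slide past non-fixation samples
    return fixations
```

The paper's central point is visible in this sketch: a brief dropout or position jitter breaks the growing window, splitting one true fixation into fragments, which is exactly the data-quality artifact their verification checks are designed to repair.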
|
36
|
Mould MS, Foster DH, Amano K, Oakley JP. A simple nonparametric method for classifying eye fixations. Vision Res 2012; 57:18-25. [PMID: 22227608 DOI: 10.1016/j.visres.2011.12.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2011] [Revised: 12/20/2011] [Accepted: 12/21/2011] [Indexed: 11/20/2022]
Abstract
There is no standard method for classifying eye fixations. Thresholds for speed, acceleration, duration, and stability of point of gaze have each been employed to demarcate data, but they have no commonly accepted values. Here, some general distributional properties of eye movements were used to construct a simple method for classifying fixations, without parametric assumptions or expert judgment. The method was primarily speed-based, but the required optimum speed threshold was derived automatically from individual data for each observer and stimulus with the aid of Tibshirani, Walther, and Hastie's 'gap statistic'. An optimum duration threshold, also derived automatically from individual data, was used to eliminate the effects of instrumental noise. The method was tested on data recorded from a video eye-tracker sampling at 250 frames a second while experimental observers viewed static natural scenes in over 30,000 one-second trials. The resulting classifications were compared with those by three independent expert visual classifiers, with 88-94% agreement, and also against two existing parametric methods. Robustness to instrumental noise and sampling rate were verified in separate simulations. The method was applied to the recorded data to illustrate the variation of mean fixation duration and saccade amplitude across observers and scenes.
Collapse
Affiliation(s)
- Matthew S Mould
- School of Electrical and Electronic Engineering, University of Manchester, Manchester, UK.
Collapse
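The speed-based classification above derives its threshold from each observer's own data rather than from a fixed convention. The gap-statistic machinery is not reproduced here; as a hedged stand-in, the sketch below splits the (roughly bimodal) log-speed distribution with a tiny one-dimensional 2-means and places the cut midway between the two cluster centers. All names are illustrative assumptions.

```python
import numpy as np

def auto_speed_threshold(speeds, n_iter=50):
    """Derive a fixation/saccade speed threshold from the data alone:
    run a 1-D 2-means on log speeds (slow fixation drift vs. fast
    saccades form two clusters) and cut midway between the centers."""
    logs = np.log(np.asarray(speeds) + 1e-9)   # log compresses the heavy tail
    lo, hi = logs.min(), logs.max()            # initialize centers at extremes
    for _ in range(n_iter):
        assign = np.abs(logs - lo) <= np.abs(logs - hi)
        lo, hi = logs[assign].mean(), logs[~assign].mean()
    return float(np.exp((lo + hi) / 2.0))

def classify_fixations(speeds, threshold):
    """Boolean mask: True where the sample's speed is below the
    threshold, i.e. the sample is classified as fixation."""
    return np.asarray(speeds) < threshold
```

As in the paper, the appeal is that the threshold adapts automatically to each observer and stimulus, with no hand-tuned speed value and no parametric model of either speed distribution.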
|
37
|
Virkkala J. Combined frequency and time domain sleep feature calculation. Annu Int Conf IEEE Eng Med Biol Soc 2011; 2011:7723-7726. [PMID: 22256128 DOI: 10.1109/iembs.2011.6091903] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
In automated sleep analysis, both frequency and time domain features are usually calculated from measured physiological signals (EEG, EOG, EMG). Typically, the Discrete Fourier Transform (DFT) is used for frequency domain measures and digital filtering (FIR or IIR) for time domain measures. Here we demonstrate the potential usefulness of a modified inverse DFT as a step in time domain feature calculation. Analytical formulas are given for calculating the interpolation, velocity, and acceleration of filtered signals. Preliminary examples of electro-oculography (EOG) signal analysis during sleep are presented. Although the same results could be obtained with conventional filtering followed by numerical differentiation, the presented approach could be useful in some cases.
Collapse
Affiliation(s)
- Jussi Virkkala
- Department of Clinical Neurophysiology, Medical Imaging Centre, Pirkanmaa Hospital District, Tampere, Finland.
Collapse
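The core identity behind the approach above is that differentiation in the time domain is multiplication by iω in the frequency domain, so velocity and acceleration of a filtered signal fall out of one DFT pass followed by a modified inverse DFT. This sketch is not the paper's formulation, just an illustration of that identity under the assumption of a periodic, band-limited signal.

```python
import numpy as np

def spectral_derivatives(signal, fs):
    """Compute velocity and acceleration of a (filtered) signal by
    multiplying its spectrum by i*omega and (i*omega)**2, then taking
    the inverse DFT: one transform yields both time-domain features."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)  # rad/s per bin
    velocity = np.fft.irfft(1j * omega * spectrum, n)
    acceleration = np.fft.irfft((1j * omega) ** 2 * spectrum, n)
    return velocity, acceleration
```

For a band-limited periodic signal this is spectrally exact, unlike finite differences, which is presumably why computing derivatives during the inverse-DFT step is attractive when the spectrum has already been computed for frequency domain features; non-periodic segments would need windowing first.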
|