1. Taore A, Tiang M, Dakin SC. (The limits of) eye-tracking with iPads. J Vis 2024;24:1. PMID: 38953861; PMCID: PMC11223623; DOI: 10.1167/jov.24.7.1.
Abstract
Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4C infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than those from the Tobii (mean absolute error of 3.2° ± 2.0°, compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts were moderately correlated across devices (r = 0.4-0.73, all p < 0.05). For tasks eliciting saccades >8°, we observed moderate correlations in estimated saccade speed and amplitude (r = 0.4-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of smooth pursuit speed estimated by the iPad, and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to examine eye-tracking data carefully to remove artifacts and outliers.
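The headline accuracy figures above (mean absolute error in degrees) reduce to a simple computation over gaze/target pairs. A minimal sketch with toy data (the function name and values are illustrative, not from the paper):

```python
import numpy as np

def mean_absolute_error_deg(gaze_xy, target_xy):
    """Mean Euclidean gaze error in degrees of visual angle.

    gaze_xy, target_xy: (N, 2) arrays of horizontal/vertical gaze and
    target positions, already converted to degrees.
    """
    offsets = np.linalg.norm(np.asarray(gaze_xy) - np.asarray(target_xy), axis=1)
    return offsets.mean()

# Toy fixation samples scattered around a target at (0, 0) deg.
gaze = np.array([[0.5, 0.0], [-0.3, 0.4], [0.0, -0.5]])
target = np.zeros_like(gaze)
print(round(mean_absolute_error_deg(gaze, target), 2))  # -> 0.5
```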
Affiliation(s)
- Aryaman Taore: School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Michelle Tiang: School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Steven C Dakin: School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand; UCL Institute of Ophthalmology, University College London, London, United Kingdom
2. Shalileh S, Ignatov D, Lopukhina A, Dragoy O. Identifying dyslexia in school pupils from eye movement and demographic data using artificial intelligence. PLoS One 2023;18:e0292047. PMID: 37992041; PMCID: PMC10664902; DOI: 10.1371/journal.pone.0292047.
Abstract
This paper presents our research toward three objectives: (i) introducing a novel multi-source dataset that addresses the shortcomings of previous datasets, (ii) proposing a robust artificial-intelligence-based solution for identifying dyslexia in primary school pupils, and (iii) probing psycholinguistic knowledge by studying which features our best AI model relies on to identify dyslexia. For the first objective, we collected and annotated a new set of eye-movement-during-reading data. We also collected demographic data, including a measure of non-verbal intelligence, to form our three data sources. Our dataset is the largest eye-movement dataset globally. Unlike previously introduced binary-class datasets, it contains (A) three class labels and (B) reading speed. For the second objective, we formulated dyslexia prediction as both regression and classification problems and scrutinized the performance of 12 classification and eight regression approaches. We used Bayesian optimization to fine-tune the models' hyperparameters, and we report the mean and standard deviation of our evaluation metrics under stratified ten-fold cross-validation. Our studies showed that the multi-layer perceptron, random forest, gradient boosting, and k-nearest neighbors form the group with the most acceptable results. Moreover, although each data source used separately did not lead to accurate results, their combination led to a reliable solution. We also determined the feature importances of our best classifier: IQ, gender, and age were the top three features, and fixation along the y-axis was more important than other fixation data. Keywords: dyslexia detection; eye fixation; eye movement; demographics; classification; regression; artificial intelligence.
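The stratified ten-fold protocol mentioned above keeps each fold's class mix representative of the whole sample, which matters for imbalanced clinical labels. A minimal numpy sketch with hypothetical three-class labels (not the authors' data; in practice a library routine such as scikit-learn's StratifiedKFold does the same job):

```python
import numpy as np

def stratified_folds(labels, n_folds=10, seed=0):
    """Return index arrays for stratified k-fold CV: each fold keeps
    roughly the same class proportions as the full label set."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(n_folds)]
    for cls in np.unique(labels):
        # Shuffle this class's indices, then deal them round-robin.
        idx = rng.permutation(np.flatnonzero(labels == cls))
        for i, sample in enumerate(idx):
            folds[i % n_folds].append(sample)
    return [np.array(f) for f in folds]

# Imbalanced 3-class labels (e.g. control / at-risk / dyslexia).
labels = np.array([0] * 50 + [1] * 30 + [2] * 20)
folds = stratified_folds(labels, n_folds=10)
print([len(f) for f in folds])  # every fold holds 10 samples with a 5/3/2 mix
```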
Affiliation(s)
- Dmitry Ignatov: School of Data Analysis and Artificial Intelligence, Faculty of Computer Science, Moscow, Russia
- Olga Dragoy: Center for Language and Brain, HSE University, Moscow, Russia; Institute of Linguistics, Russian Academy of Sciences, Moscow, Russia
3. Friedman L, Prokopenko V, Djanian S, Katrychuk D, Komogortsev OV. Factors affecting inter-rater agreement in human classification of eye movements: a comparison of three datasets. Behav Res Methods 2023;55:417-427. PMID: 35411475; DOI: 10.3758/s13428-021-01782-4.
Abstract
Manual classification of eye movements is used in research and as a basis for comparison with automatic algorithms during their development. However, human classification is not useful if it is unreliable and unrepeatable, so it is important to know what factors influence the accuracy and reliability of human classification of eye movements. In this report we compare three datasets of human manual classification: two from earlier work and one of our own, presented here for the first time. For inter-rater reliability, we assess both the event-level F1-score and the sample-level Cohen's κ across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
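The sample-level Cohen's κ used here compares two raters' per-sample labels while correcting for chance agreement. A small self-contained sketch with toy labels (not the study's data):

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Sample-level Cohen's kappa between two raters' label sequences."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    classes = np.union1d(a, b)
    p_o = np.mean(a == b)  # observed agreement
    # Chance agreement: product of each rater's marginal class rates.
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
    return (p_o - p_e) / (1.0 - p_e)

# Two raters labelling 8 samples as fixation (F) or saccade (S).
r1 = list("FFFFSSSF")
r2 = list("FFFSSSFF")
print(round(cohens_kappa(r1, r2), 3))  # -> 0.467
```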
Affiliation(s)
- Lee Friedman, Vladyslav Prokopenko, Dmytro Katrychuk, Oleg V Komogortsev: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, TX 78640, USA
- Shagen Djanian: Derrick M5, Department of Computer Science, Texas State University, 601 University Drive, San Marcos, TX 78640, USA; Department of Computer Science, Aalborg University, Selma Lagerlofs Vej 300, 9220 Aalborg East, Denmark
4. Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022. PMID: 35715615; DOI: 10.3758/s13428-021-01763-7.
Abstract
Detecting eye movements in raw eye-tracking data is a well-established research area in itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved given a shared understanding of the pursued goal. This is often formalised by defining metrics that express the quality of an approach to the posed problem. Both the big-picture intuition behind an evaluation strategy and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable and impeding a common understanding. In this review, we systematically describe and analyse the evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify a detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from quantifying the discovered errors. In our analysis we separate these two steps where possible, enabling almost arbitrary combinations of them in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and in theory. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. The evaluation strategies on which this work focuses are implemented in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation.
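The two-step split the review advocates, first match true to predicted events, then quantify errors, can be made concrete. One possible combination, sketched with hypothetical events (greedy temporal-IoU matching feeding an event-level F1; this is an illustration, not the library's implementation):

```python
def match_events(true_events, pred_events, min_iou=0.5):
    """Greedily match each true (onset, offset) event to at most one
    unused predicted event whose temporal IoU clears min_iou."""
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union else 0.0

    matches, used = [], set()
    for t in true_events:
        best, best_iou = None, min_iou
        for j, p in enumerate(pred_events):
            if j not in used and iou(t, p) >= best_iou:
                best, best_iou = j, iou(t, p)
        if best is not None:
            matches.append((t, pred_events[best]))
            used.add(best)
    return matches

def event_f1(true_events, pred_events, min_iou=0.5):
    """Event-level F1 from the matching: matched = TP, leftovers = FP/FN."""
    tp = len(match_events(true_events, pred_events, min_iou))
    fp, fn = len(pred_events) - tp, len(true_events) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

truth = [(0, 100), (150, 200), (300, 400)]   # sample indices
pred = [(5, 95), (160, 210), (500, 550)]
print(round(event_f1(truth, pred), 3))  # two of three events match -> 0.667
```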
5. Yin J, Sun J, Li J, Liu K. An Effective Gaze-Based Authentication Method with the Spatiotemporal Feature of Eye Movement. Sensors (Basel) 2022;22:3002. PMID: 35458986; PMCID: PMC9032520; DOI: 10.3390/s22083002.
Abstract
Eye movement has become a new behavioral feature for biometric authentication. In eye-movement-based authentication methods that use temporal features and hand-designed features, the required duration of the eye movement recordings is too long for practical use. This study therefore aims to perform authentication from shorter eye movement recordings. We derive a reasonable recording duration of less than 12 s from the changing pattern of the deviation between the gaze point and the stimulus point on the screen. In this study, the temporal motion features of the gaze points and the spatial distribution features of the saccades are used to represent personal identity. Two datasets were constructed for the experiments, containing 5 s and 12 s of eye movement recordings, respectively. On these datasets, the open-set authentication results show that the Equal Error Rate of our proposed method reaches 10.62% for 12 s recordings and 12.48% for 5 s recordings; the closed-set results reach 5.25% for 12 s and 7.82% for 5 s. This demonstrates that the proposed method provides a reference for eye-movement-based identity authentication.
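The Equal Error Rate quoted above is the operating point where false accepts and false rejects balance. A pure-Python sketch with made-up similarity scores (not the paper's method or data):

```python
def equal_error_rate(genuine, impostor):
    """EER: error rate at the threshold where the false accept rate
    (impostor scores above threshold) meets the false reject rate
    (genuine scores below it). Higher score = more similar."""
    eer, gap = 1.0, float("inf")
    for thr in sorted(set(genuine) | set(impostor)):
        far = sum(s >= thr for s in impostor) / len(impostor)
        frr = sum(s < thr for s in genuine) / len(genuine)
        if abs(far - frr) < gap:  # keep the most balanced threshold
            gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine = [0.9, 0.8, 0.7, 0.6]    # same-person comparison scores
impostor = [0.5, 0.4, 0.65, 0.3]  # different-person comparison scores
print(equal_error_rate(genuine, impostor))  # -> 0.25
```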
Affiliation(s)
- Jinghui Yin, Jiande Sun, Ke Liu: School of Information Science and Engineering, Shandong Normal University, Jinan 250399, China
- Jing Li: School of Journalism and Communication, Shandong Normal University, Jinan 250399, China
6. Eye movements during text reading align with the rate of speech production. Nat Hum Behav 2021;6:429-442. PMID: 34873275; DOI: 10.1038/s41562-021-01215-4.
Abstract
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables and words) every ~200 ms. Interestingly, ~200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ~5 Hz. A subsequent meta-analysis of 142 studies from 14 languages replicates this result and shows that sampling frequencies vary across languages between 3.9 Hz and 5.2 Hz. This variation systematically depends on the complexity of the writing systems (character-based versus alphabetic systems and orthographic transparency). Finally, we empirically demonstrate a positive correlation between speech spectrum and eye movement sampling in low-skilled non-native readers, with tentative evidence from post hoc analysis suggesting the same relationship in low-skilled native readers. On the basis of this convergent evidence, we propose that during reading, our brain's linguistic processing systems imprint a preferred processing rate (the rate of spoken language production and perception) onto the oculomotor system.
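The link between ~200 ms fixations and a ~5 Hz text-sampling rate is just the reciprocal of the mean fixation duration. A trivial sketch with hypothetical durations:

```python
# Hypothetical fixation durations during reading, in milliseconds.
fix_durations_ms = [180, 210, 195, 220, 200]

mean_duration_s = sum(fix_durations_ms) / len(fix_durations_ms) / 1000
sampling_rate_hz = 1 / mean_duration_s  # one new text chunk per fixation
print(round(sampling_rate_hz, 2))  # ~5 Hz, in the range of the speech rate
```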
7. Griffith H, Lohr D, Abdulin E, Komogortsev O. GazeBase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Sci Data 2021;8:184. PMID: 34272404; PMCID: PMC8285447; DOI: 10.1038/s41597-021-00959-y.
Abstract
This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications applying machine learning to eye movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.
Affiliation(s)
- Henry Griffith, Dillon Lohr, Evgeny Abdulin, Oleg Komogortsev: Department of Computer Science, Texas State University, San Marcos, TX 78666, USA
8. Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021;21:8. PMID: 34125160; PMCID: PMC8212426; DOI: 10.1167/jov.21.6.8.
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
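The velocity-thresholding stage described above can be sketched in a few lines. This is a bare fixed-threshold illustration on a synthetic trace (the paper's actual pipeline first denoises via optimization, which this sketch omits):

```python
import numpy as np

def detect_saccades(position_deg, fs, vel_threshold=30.0):
    """Velocity-threshold saccade detection on a 1-D gaze trace.

    position_deg: positions in degrees; fs: sampling rate in Hz;
    vel_threshold: deg/s. Returns (onset, offset) sample index pairs.
    """
    velocity = np.abs(np.gradient(position_deg)) * fs
    mask = velocity > vel_threshold
    events, start = [], None
    for i, above in enumerate(mask):
        if above and start is None:
            start = i                 # saccade onset
        elif not above and start is not None:
            events.append((start, i))  # saccade offset
            start = None
    if start is not None:
        events.append((start, len(mask)))
    return events

# 1 kHz trace: fixation at 0 deg, a 10 deg step over ~20 ms, fixation at 10 deg.
fs = 1000
trace = np.concatenate([np.zeros(50), np.linspace(0, 10, 20), np.full(50, 10.0)])
print(len(detect_saccades(trace, fs)))  # -> 1
```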
Affiliation(s)
- Weiwei Dai, Ivan Selesnick: Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo, Janet Rucker, Todd Hudson: Department of Neurology, School of Medicine, New York University, New York, NY, USA
9. Friedman L, Lohr D, Hanson T, Komogortsev OV. Angular Offset Distributions During Fixation Are, More Often Than Not, Multimodal. J Eye Mov Res 2021;14(3):2. PMID: 34122749; PMCID: PMC8189800; DOI: 10.16910/jemr.14.3.2.
Abstract
Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal, but it is much less interpretable when the underlying distribution is multimodal. We present evidence that the majority of such distributions are multimodal: only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed (of the entire dataset, 1.7% were unimodal and normal). This multimodality holds even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measuring accuracy in the face of multimodality, and we address the role of fixation drift in partially explaining it.
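Why the mean misleads for multimodal offsets can be seen with synthetic data (illustrative only, not the paper's dataset): with two fixation "pockets", the mean lands in a region containing almost no actual samples, while a mode-based estimate stays on the dominant pocket.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical bimodal offsets: most samples near 0.3 deg, a second pocket near 1.2 deg.
offsets = np.concatenate([rng.normal(0.3, 0.05, 700), rng.normal(1.2, 0.05, 300)])

mean_offset = offsets.mean()  # classic "accuracy": lands between the pockets
# Mode-based alternative: centre of the highest-count histogram bin.
hist, edges = np.histogram(offsets, bins=40)
modal_offset = (edges[hist.argmax()] + edges[hist.argmax() + 1]) / 2
# Fraction of samples actually near the mean: close to zero here.
near_mean = np.mean(np.abs(offsets - mean_offset) < 0.1)
print(round(mean_offset, 2), round(modal_offset, 2), round(near_mean, 3))
```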
Affiliation(s)
- Dillon Lohr: Texas State University, San Marcos, Texas, USA
10
|
Friedman L, Stern HS, Price LR, Komogortsev OV. Why Temporal Persistence of Biometric Features, as Assessed by the Intraclass Correlation Coefficient, Is So Valuable for Classification Performance. SENSORS (BASEL, SWITZERLAND) 2020; 20:E4555. [PMID: 32823860 PMCID: PMC7472145 DOI: 10.3390/s20164555] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Revised: 08/03/2020] [Accepted: 08/10/2020] [Indexed: 11/16/2022]
Abstract
It is generally accepted that relatively more permanent (i.e., more temporally persistent) traits are more valuable for biometric performance than less permanent traits. Although this finding is intuitive, there is no current work identifying exactly where in the biometric analysis temporal persistence makes a difference. In this paper, we answer this question. In a recent report, we introduced the intraclass correlation coefficient (ICC) as an index of temporal persistence for such features. Here, we present a novel approach using synthetic features to study which aspects of a biometric identification study are influenced by the temporal persistence of features. What we show is that using more temporally persistent features produces effects on the similarity score distributions that explain why this quality is so key to biometric performance. The results identified with the synthetic data are largely reinforced by an analysis of two datasets, one based on eye-movements and one based on gait. There was one difference between the synthetic and real data, related to the intercorrelation of features in real data. Removing these intercorrelations for real datasets with a decorrelation step produced results which were very similar to that obtained with synthetic features.
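The ICC used here as a persistence index is the one-way random-effects intraclass correlation: it is high when a feature varies little within a subject across sessions relative to the spread between subjects. A numpy sketch on hypothetical feature values (not the paper's data):

```python
import numpy as np

def icc_1(features):
    """One-way random-effects ICC(1) for a (subjects x sessions) array.

    Values near 1 mean the feature is stable within subjects relative
    to the between-subject spread, i.e. temporally persistent.
    """
    x = np.asarray(features, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical feature measured in two sessions for three subjects.
persistent = np.array([[1.0, 1.1], [5.0, 5.2], [9.0, 8.9]])  # stable per subject
volatile = np.array([[1.0, 9.0], [5.0, 1.2], [9.0, 4.9]])    # unstable
print(round(icc_1(persistent), 3), round(icc_1(volatile), 3))
```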
Affiliation(s)
- Lee Friedman, Oleg V. Komogortsev: Department of Computer Science, Texas State University, 601 University Dr, San Marcos, TX 78666, USA
- Hal S. Stern: Department of Statistics, University of California, Irvine, CA 92697, USA
- Larry R. Price: Methodology, Measurement & Statistics, Office of Research & Sponsored Programs, Texas State University, 601 University Dr, San Marcos, TX 78666, USA
11.
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm, built on an existing velocity-based approach, that is suitable for both static and dynamic stimulation and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: (1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences; (2) lab-quality gaze recordings for a feature-length movie; and (3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par with or better than state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented in the Python programming language, and readily available as free and open-source software from public sources.
12. Evaluating three approaches to binary event-level agreement scoring. A reply to Friedman (2020). Behav Res Methods 2020;53:325-334. PMID: 32705657; PMCID: PMC7880951; DOI: 10.3758/s13428-020-01425-0.
13. Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020;12(8):3. PMID: 33828776; PMCID: PMC7881893; DOI: 10.16910/jemr.12.8.3.
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to apply a simple threshold to velocity or acceleration values, estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues and quantify saccade detection performance on both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection, yielding both more true positives and fewer false negatives, especially at higher noise levels. We conclude that robust estimators can be widely adopted in other common automatic gaze classification algorithms owing to their ease of implementation.
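The core idea, replacing mean/SD with median/MAD when estimating a velocity threshold, fits in a few lines. A sketch with synthetic velocities (not the authors' full Nyström-style algorithm, which iterates the threshold):

```python
import numpy as np

def mad_velocity_threshold(velocity, n_mads=5.0):
    """Saccade velocity threshold as median + n * MAD.

    The median and MAD, unlike the mean and SD, are barely moved by
    the extreme velocities inside the saccades themselves.
    """
    med = np.median(velocity)
    mad = 1.4826 * np.median(np.abs(velocity - med))  # scaled to ~SD under normality
    return med + n_mads * mad

# Synthetic velocity trace: mostly fixation noise plus a few fast saccade samples.
rng = np.random.default_rng(1)
vel = np.concatenate([np.abs(rng.normal(10, 3, 95)), np.full(5, 500.0)])

robust_thr = mad_velocity_threshold(vel)
naive_thr = vel.mean() + 5 * vel.std()  # dragged upward by the saccades
# The robust threshold stays near the fixation noise floor and still
# catches the 500 deg/s samples; the naive one overshoots them.
print(robust_thr < 500 < naive_thr)
```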
14. Brief communication: Three errors and two problems in a recent paper: gazeNet: End-to-end eye-movement event detection with deep neural networks (Zemblys, Niehorster, and Holmqvist, 2019). Behav Res Methods 2020;52:1671-1680. PMID: 32291731; DOI: 10.3758/s13428-019-01342-x.
Abstract
Zemblys et al. (Behavior Research Methods, 51(2), 840-864, 2019) reported a method for the classification of eye movements ("gazeNet"). I have found three errors and two problems with that paper, explained herein. Error 1: the gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact six of the 34 recording files were collected at 200 Hz; of the six files used as the training set for the gazeNet algorithm, two were collected at 200 Hz. Problem 1: even among the 500 Hz data, the inter-timestamp intervals varied widely. Problem 2: there are many unusual discontinuities in the saccade trajectories of the Lund University dataset, making it a very poor choice for the construction of an automatic classification method. Error 2: the gazeNet algorithm was trained on the Lund dataset and then compared, on that same dataset, to other methods that were not trained on it; this is an inherently unfair comparison, yet nowhere in the gazeNet paper is this unfairness mentioned. Error 3: in the novel event-related agreement analysis employed by the gazeNet authors, although the authors intended to classify unmatched events as either false positives or false negatives, many are actually classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative drives kappa higher, whereas unmatched events should drive kappa lower.
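Error 3, unmatched events scored as true negatives inflating kappa, is easy to demonstrate numerically. A sketch with made-up event counts (not the paper's figures):

```python
def kappa_from_counts(tp, fp, fn, tn):
    """Cohen's kappa from a 2x2 event-agreement table."""
    n = tp + fp + fn + tn
    p_o = (tp + tn) / n  # observed agreement
    # Chance agreement from the marginals of each "rater".
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)
    p_no = ((fp + tn) / n) * ((fn + tn) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# 80 matched events, 20 unmatched predictions, 10 unmatched true events.
# Correct scoring: unmatched events are errors (FP/FN); no true negatives.
correct = kappa_from_counts(tp=80, fp=20, fn=10, tn=0)
# Mislabelled scoring: the same unmatched events counted as true negatives.
inflated = kappa_from_counts(tp=80, fp=0, fn=0, tn=30)
print(round(correct, 3), round(inflated, 3))  # kappa jumps once errors become TNs
```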
15. Prahalad KS, Coates DR. Asymmetries of reading eye movements in simulated central vision loss. Vision Res 2020;171:1-10. PMID: 32276109; DOI: 10.1016/j.visres.2020.03.006.
Abstract
Patients with central vision loss are forced to use an eccentric retinal location as a substitute for the fovea, called a preferred retinal locus, or PRL. Clinical studies have shown that patients habitually choose a PRL located to the left of and/or below the scotoma in the visual field. The position to the right of the scotoma is almost never chosen, even though it would be theoretically more suitable for reading, since the scotoma would no longer block the upcoming text. In the current study, we tested whether this asymmetry may have an oculomotor basis. Six normally sighted subjects viewed page-like text with a simulated scotoma, identifying embedded numbers in "words" comprising random letters. Subjects trained and tested with three different artificial PRL ("pseudo-PRL," or pPRL) locations: inferior, to the right, or to the left of the scotoma. After several training blocks for each pPRL position, subjects produced reliable oculomotor control. Both reading speed and eye movement characteristics reproduced the advantage of an inferior pPRL observed in traditional paradigms such as page-mode reading and RSVP. While left and right positions resulted in similar reading speeds, a right pPRL caused excessively large saccades and more direction switches, a zig-zag pattern that developed spontaneously. We therefore propose that patients' typical avoidance of PRL positions to the right of their scotoma could have an oculomotor component: the erratic eye motion might negate the perceptual benefit that this PRL position would otherwise offer.
Affiliation(s)
- Daniel R Coates: College of Optometry, University of Houston, 4901 Calhoun Road, Houston, TX 77204, USA
16.
Abstract
It has come to our attention that the section "Post-processing: Labeling final events" on page 167 of "Using Machine Learning to Detect Events in Eye-Tracking Data" (Zemblys, Niehorster, Komogortsev, & Holmqvist, 2018) contains an erroneous description of the process by which post-processing was performed.