1. Bischof WF, Anderson NC, Kingstone A. A tutorial: Analyzing eye and head movements in virtual reality. Behav Res Methods 2024. PMID: 39117987. DOI: 10.3758/s13428-024-02482-5.
Abstract
This tutorial provides instruction on how to use the eye tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated in the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research, as well as identifying some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data both in general and with regard to the specific data that we collected to illustrate different analytical approaches. We continue with an introduction of the foundational ideas regarding data analysis in VR, including frames of reference, how to map eye and head position, and event detection. In the next part, we introduce core head and eye data analyses focusing on determining where the head and eyes are directed. We then expand on what has been presented, introducing several novel spatial, spatio-temporal, and temporal head-eye data analysis techniques. We conclude with a reflection on what has been presented, and how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.
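As a concrete illustration of the frame-of-reference mapping this tutorial describes, the sketch below converts a gaze or head direction vector into spherical panorama coordinates. It is a minimal Python example assuming unit direction vectors in world coordinates; the function name and axis convention are illustrative, not the authors' code.

```python
import numpy as np

def direction_to_lon_lat(v):
    """Convert a 3D direction vector (x, y, z) in world coordinates to
    longitude/latitude in degrees on an omnidirectional sphere.
    Assumed convention: z is 'up', longitude measured in the x-y plane."""
    x, y, z = v / np.linalg.norm(v)
    lon = np.degrees(np.arctan2(y, x))   # -180..180 deg
    lat = np.degrees(np.arcsin(z))       # -90..90 deg
    return lon, lat

# Example: a head direction slightly above the horizon, 45 deg to the left
head_dir = np.array([1.0, 1.0, 0.2])
print(direction_to_lon_lat(head_dir))
```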
Affiliation(s)
- Walter F Bischof
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada.
- Nicola C Anderson
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
2. Postuma EMJL, Heutink J, Tol S, Jansen JL, Koopman J, Cornelissen FW, de Haan GA. A systematic review on visual scanning behaviour in hemianopia considering task specificity, performance improvement, spontaneous and training-induced adaptations. Disabil Rehabil 2024;46:3221-3242. PMID: 37563867. PMCID: PMC11259206. DOI: 10.1080/09638288.2023.2243590.
Abstract
PURPOSE People with homonymous hemianopia (HH) benefit from applying compensatory scanning behaviour that limits the consequences of HH in a specific task. The aim of the study is to (i) review the current literature on task-specific scanning behaviour that improves performance and (ii) identify differences between this performance-enhancing scanning behaviour and scanning behaviour that is spontaneously adopted or acquired through training. MATERIALS AND METHODS The databases PsycInfo, Medline, and Web of Science were searched for articles on scanning behaviour in people with HH. RESULTS The final sample contained 60 articles, reporting on three main tasks, i.e., search (N = 17), reading (N = 16) and mobility (N = 14), and other tasks (N = 18). Five articles reported on two different tasks. Specific scanning behaviour related to task performance in search, reading, and mobility tasks. In search and reading tasks, spontaneous adaptations differed from this performance-enhancing scanning behaviour. Training could induce adaptations in scanning behaviour, enhancing performance in these two tasks. For mobility tasks, limited to no information was found on spontaneous and training-induced adaptations to scanning behaviour. CONCLUSIONS Performance-enhancing scanning behaviour is mainly task-specific. Spontaneous development of such scanning behaviour is rare. Luckily, current compensatory scanning training programs can induce such scanning behaviour, which confirms that providing scanning training is important. IMPLICATIONS FOR REHABILITATION Scanning behaviour that improves performance in people with homonymous hemianopia (HH) is task-specific. Most people with HH do not spontaneously adopt scanning behaviour that improves performance. Compensatory scanning training can induce performance-enhancing scanning behaviour.
Affiliation(s)
- Eva M. J. L. Postuma
- Department Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Joost Heutink
- Department Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Sarah Tol
- Department Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Josephien L. Jansen
- Department Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Jan Koopman
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Frans W. Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Gera A. de Haan
- Department Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
3. Zhang X, Wang L, He Y, Mou Z, Cao Y. High-speed eye tracking based on a synchronized imaging mechanism by a dual-ring infrared lighting source. Appl Opt 2024;63:4293-4302. PMID: 38856606. DOI: 10.1364/ao.521840.
Abstract
It is a challenge for conventional monocular-camera single-light source eye-tracking methods to achieve high-speed eye tracking. In this work, a dual-ring infrared lighting source was designed to achieve bright and dark pupils at high speed. The eye-tracking method used a dual-ring infrared lighting source and synchronized triggers for the even and odd camera frames to capture bright and dark pupils. A pupillary corneal reflex was calculated from the center coordinates of the Purkinje spot and the pupil. A mapping function was established between the pupillary corneal reflex and gaze spots. The gaze coordinate was calculated based on the mapping function. The average detection time of each gaze spot was 3.76 ms.
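The pupillary-corneal-reflex (PCR) computation and mapping described in this abstract can be sketched as follows. This is a hedged illustration assuming a second-order polynomial mapping fitted from calibration points; the paper does not specify its exact mapping form, and all names here are illustrative.

```python
import numpy as np

def pcr_vector(pupil_center, purkinje_center):
    """Pupillary corneal reflex: vector from the Purkinje spot to the pupil centre."""
    return np.asarray(pupil_center) - np.asarray(purkinje_center)

def fit_gaze_mapping(pcr, targets):
    """Fit a 2nd-order polynomial map from PCR vectors (N x 2) to known
    calibration target coordinates (N x 2) via least squares."""
    x, y = pcr[:, 0], pcr[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def map_gaze(pcr, coeffs):
    """Apply the fitted polynomial to one or many PCR vectors."""
    x, y = pcr[..., 0], pcr[..., 1]
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)
    return A @ coeffs
```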
4. Ambrad Giovannetti E, Rancz E. Behind mouse eyes: The function and control of eye movements in mice. Neurosci Biobehav Rev 2024;161:105671. PMID: 38604571. DOI: 10.1016/j.neubiorev.2024.105671.
Abstract
The mouse visual system has become the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements only started to be appreciated recently. Eye movements provide a basis for predictive sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of various eye movements in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
Affiliation(s)
- Ede Rancz
- INMED, INSERM, Aix-Marseille University, Marseille, France.
5. Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024;28:3597-3612. PMID: 38421842. PMCID: PMC11262011. DOI: 10.1109/jbhi.2024.3371893.
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that can be potentially significantly impacted by ML - eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how the equipment aligns with study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development, and controller command and voice recording; 3) the ML methodology utilized depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens the ML applicability in Radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
6. Drews M, Dierkes K. Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behav Res Methods 2024. PMID: 38594440. DOI: 10.3758/s13428-024-02360-0.
Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking.
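One of the proposed strategies, adapting detector sensitivity to head-motion intensity, can be illustrated with a simple velocity-threshold sketch in which the threshold grows with head speed. The constants and signal names below are assumptions for illustration, not the parameters tuned in the paper.

```python
import numpy as np

def detect_fixations_adaptive(gaze_vel, head_vel, t,
                              base_thresh=30.0, gain=0.5, min_dur=0.06):
    """Label samples as fixation when gaze velocity (deg/s) stays below a
    threshold that increases with head velocity (deg/s); keep only fixation
    episodes lasting at least min_dur seconds."""
    thresh = base_thresh + gain * np.abs(head_vel)   # head-motion-adaptive sensitivity
    is_fix = gaze_vel < thresh
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if t[i - 1] - t[start] >= min_dur:
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_dur:
        fixations.append((t[start], t[-1]))
    return fixations
```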
Affiliation(s)
- Michael Drews
- Pupil Labs, Sanderstraße 28, 12047, Berlin, Germany.
- Kai Dierkes
- Pupil Labs, Sanderstraße 28, 12047, Berlin, Germany.
7. Nyström M, Andersson R, Niehorster DC, Hessels RS, Hooge ITC. What is a blink? Classifying and characterizing blinks in eye openness signals. Behav Res Methods 2024;56:3280-3299. PMID: 38424292. PMCID: PMC11133197. DOI: 10.3758/s13428-023-02333-9.
Abstract
Blinks, the closing and opening of the eyelids, are used in a wide array of fields where human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can be estimated only indirectly from this signal, since it does not explicitly contain information about the eyelid position. We ask whether blinks detected from an eye openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks) and how robust blink detection is when data quality is low. In terms of rate, there was an almost-perfect overlap between EO and PS blink (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range where data quality was high and a high overlap (F1 score 0.94) when the head was at the edge of the tracking range where data quality was worse. When there was a difference in blink rate between EO and PS blinks, it was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer in EO blinks compared to PS blinks. Moreover, the dynamics of EO blinks was similar to results from previous literature. We conclude that the eye openness signal together with our proposed blink detection algorithm provides an advantageous method to detect and describe blinks in greater detail.
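A minimal sketch of blink detection from an eye-openness signal is given below: blinks are taken as episodes in which openness drops below a fraction of its baseline, and duration is the onset-to-offset interval. The threshold and baseline estimate are illustrative assumptions; the authors' algorithm is more elaborate.

```python
import numpy as np

def detect_blinks(openness, t, rel_thresh=0.5):
    """Detect blinks as episodes where eye openness (e.g., eyelid distance in mm)
    falls below rel_thresh * median openness. Returns (onset, offset, duration)."""
    baseline = np.nanmedian(openness)
    closed = openness < rel_thresh * baseline
    blinks, start = [], None
    for i, c in enumerate(closed):
        if c and start is None:
            start = i
        elif not c and start is not None:
            blinks.append((t[start], t[i - 1], t[i - 1] - t[start]))
            start = None
    if start is not None:
        blinks.append((t[start], t[-1], t[-1] - t[start]))
    return blinks
```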
Affiliation(s)
- Marcus Nyström
- Lund University Humanities Lab, Box 201, SE-221 00, Lund, Sweden.
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Box 201, SE-221 00, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
8. Nejad A, de Haan GA, Heutink J, Cornelissen FW. ACE-DNV: Automatic classification of gaze events in dynamic natural viewing. Behav Res Methods 2024;56:3300-3314. PMID: 38448726. PMCID: PMC11133063. DOI: 10.3758/s13428-024-02358-8.
Abstract
Eye movements offer valuable insights for clinical interventions, diagnostics, and understanding visual perception. The process usually involves recording a participant's eye movements and analyzing them in terms of various gaze events. Manual identification of these events is extremely time-consuming. Although the field has seen the development of automatic event detection and classification methods, these methods have primarily focused on distinguishing events when participants remain stationary. With increasing interest in studying gaze behavior in freely moving participants, such as during daily activities like walking, new methods are required to automatically classify events in data collected under unrestricted conditions. Existing methods often rely on additional information from depth cameras or inertial measurement units (IMUs), which are not typically integrated into mobile eye trackers. To address this challenge, we present a framework for classifying gaze events based solely on eye-movement signals and scene video footage. Our approach, the Automatic Classification of gaze Events in Dynamic and Natural Viewing (ACE-DNV), analyzes eye movements in terms of velocity and direction and leverages visual odometry to capture head and body motion. Additionally, ACE-DNV assesses changes in image content surrounding the point of gaze. We evaluate the performance of ACE-DNV using a publicly available dataset and showcased its ability to discriminate between gaze fixation, gaze pursuit, gaze following, and gaze shifting (saccade) events. ACE-DNV exhibited comparable performance to previous methods, while eliminating the necessity for additional devices such as IMUs and depth cameras. In summary, ACE-DNV simplifies the automatic classification of gaze events in natural and dynamic environments. The source code is accessible at https://github.com/arnejad/ACE-DNV .
Affiliation(s)
- Ashkan Nejad
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands.
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
- Gera A de Haan
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands
- Department of Clinical and Developmental Neuropsychology, University of Groningen, Groningen, The Netherlands
- Joost Heutink
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands
- Department of Clinical and Developmental Neuropsychology, University of Groningen, Groningen, The Netherlands
- Frans W Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
9. Li D, Butala AA, Moro-Velazquez L, Meyer T, Oh ES, Motley C, Villalba J, Dehak N. Automating the analysis of eye movement for different neurodegenerative disorders. Comput Biol Med 2024;170:107951. PMID: 38219646. DOI: 10.1016/j.compbiomed.2024.107951.
Abstract
The clinical observation and assessment of extra-ocular movements is common practice in assessing neurodegenerative disorders but remains observer-dependent. In the present study, we propose an algorithm that can automatically identify saccades, fixation, smooth pursuit, and blinks using a non-invasive eye tracker. Subsequently, response-to-stimuli-derived interpretable features were elicited that objectively and quantitatively assess patient behaviors. The cohort analysis encompasses persons with mild cognitive impairment (MCI), Alzheimer's disease (AD), Parkinson's disease (PD), Parkinson's disease mimics (PDM), and controls (CTRL). Overall, results suggested that the AD/MCI and PD groups had significantly different saccade and pursuit characteristics compared to CTRL when the target moved faster or covered a larger visual angle during smooth pursuit. These two groups also displayed more omitted antisaccades and longer average antisaccade latency than CTRL. When reading a text passage silently, people with AD/MCI had more fixations. During visual exploration, people with PD demonstrated a more variable saccade duration than other groups. In the prosaccade task, the PD group showed a significantly smaller average hypometria gain and accuracy, with the most statistical significance and highest AUC scores of features studied. The minimum saccade gain was a PD-specific feature different from CTRL and PDM. These features, as oculographic biomarkers, can be potentially leveraged in distinguishing different types of NDs, yielding more objective and precise protocols to diagnose and monitor disease progression.
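Two of the interpretable oculographic features mentioned above, saccade (hypometria) gain and saccade latency, reduce to simple arithmetic. The sketch below uses hypothetical values purely for illustration.

```python
def saccade_gain(saccade_amplitude_deg, target_step_deg):
    """Gain = executed amplitude / required amplitude; values < 1 indicate hypometria."""
    return saccade_amplitude_deg / target_step_deg

def saccade_latency(target_onset_s, saccade_onset_s):
    """Latency = time from target (or anti-target) onset to saccade onset, in seconds."""
    return saccade_onset_s - target_onset_s

# Example: an 8-degree target step answered by a 6.8-degree saccade 240 ms later
print(saccade_gain(6.8, 8.0))        # 0.85 -> hypometric
print(saccade_latency(1.00, 1.24))   # 0.24 s
```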
Affiliation(s)
- Deming Li
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, 21218, MD, USA.
- Ankur A Butala
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, 21205, MD, USA
- Laureano Moro-Velazquez
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, 21218, MD, USA
- Trevor Meyer
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, 21218, MD, USA
- Esther S Oh
- Division of Geriatric Medicine and Gerontology, Johns Hopkins University School of Medicine, Baltimore, 21205, MD, USA
- Chelsey Motley
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, 21205, MD, USA
- Jesús Villalba
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, 21218, MD, USA
- Najim Dehak
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, 21218, MD, USA
10. Hooge ITC, Niehorster DC, Nyström M, Hessels RS. Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera. Behav Res Methods 2024. PMID: 38200239. DOI: 10.3758/s13428-023-02316-w.
Abstract
We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite a transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye movement terminology when discussing world-fixed gaze phenomena. We propose to use more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
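The central computation in such a setup, combining head orientation from the camera-tracked fiducial marker with the eye-in-head direction from the wearable tracker, can be sketched as a rotation into world coordinates. The quaternion convention and function names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_in_world(head_quat_wxyz, eye_dir_head):
    """Rotate an eye-in-head unit direction into world coordinates using the
    head orientation (quaternion, w-x-y-z) estimated from fiducial markers."""
    w, x, y, z = head_quat_wxyz
    rot = R.from_quat([x, y, z, w])   # scipy expects x-y-z-w order
    return rot.apply(eye_dir_head)

def gaze_shift_amplitude(dir_a, dir_b):
    """Angular amplitude (deg) of a gaze shift between two unit directions."""
    cosang = np.clip(np.dot(dir_a, dir_b), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```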
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
11. Elmadjian C, Gonzales C, Costa RLD, Morimoto CH. Online eye-movement classification with temporal convolutional networks. Behav Res Methods 2023;55:3602-3620. PMID: 36220951. DOI: 10.3758/s13428-022-01978-2.
Abstract
The simultaneous classification of the three most basic eye-movement patterns is known as the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time applications that must instantly adjust or respond to certain eye behaviors would highly benefit from accurate, robust, fast, and low-latency classification methods. Recent developments based on 1D-CNN-BiLSTM and TCN architectures have demonstrated to be more accurate and robust than previous solutions, but solely considering offline applications. In this paper, we propose a TCN classifier for the 3EMCP, adapted to online applications, that does not require look-ahead buffers. We introduce a new lightweight preprocessing technique that allows the TCN to make real-time predictions at about 500 Hz with low latency using commodity hardware. We evaluate the TCN performance against other two deep neural models: a CNN-LSTM and a CNN-BiLSTM, also adapted to online classification. Furthermore, we compare the performance of the deep neural models against a lightweight real-time Bayesian classifier (I-BDT). Our results, considering two publicly available datasets, show that the proposed TCN model consistently outperforms other methods for all classes. The results also show that, though it is possible to achieve reasonable accuracy levels with zero-length look ahead, the performance of all methods improve with the use of look-ahead information. The codebase, pre-trained models, and datasets are available at https://github.com/elmadjian/OEMC.
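The online constraint discussed above, classification with zero-length look-ahead, amounts to using strictly causal convolutions. Below is a minimal causal TCN sketch in PyTorch; the layer sizes, feature dimensionality, and three-class output are illustrative assumptions rather than the authors' released architecture (see their repository for that).

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1D convolution padded only on the left, so outputs never use future samples."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation,
                         padding=(kernel_size - 1) * dilation)
        self._trim = (kernel_size - 1) * dilation

    def forward(self, x):
        out = super().forward(x)
        return out[..., :-self._trim] if self._trim else out

class OnlineTCN(nn.Module):
    """Tiny causal TCN for ternary eye-movement classification (fixation/saccade/pursuit)."""
    def __init__(self, n_features=2, n_classes=3, channels=32, levels=4, kernel_size=3):
        super().__init__()
        layers, in_ch = [], n_features
        for level in range(levels):
            layers += [CausalConv1d(in_ch, channels, kernel_size, dilation=2 ** level),
                       nn.ReLU()]
            in_ch = channels
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, n_classes, kernel_size=1)

    def forward(self, x):                   # x: (batch, n_features, time)
        return self.head(self.tcn(x))       # per-sample class logits, causally computed

# Example: classify a 1-second window of x/y gaze velocity sampled at 500 Hz
logits = OnlineTCN()(torch.randn(1, 2, 500))   # -> (1, 3, 500)
```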
Affiliation(s)
- Carlos Elmadjian
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil.
- Candy Gonzales
- University of São Paulo, R. do Matão, 1010, 256-A, São Paulo, Brazil
- Carlos H Morimoto
- University of São Paulo, R. do Matão, 1010, 209-C, São Paulo, Brazil
12. Kredel R, Hernandez J, Hossner EJ, Zahno S. Eye-tracking technology and the dynamics of natural gaze behavior in sports: an update 2016-2022. Front Psychol 2023;14:1130051. PMID: 37359890. PMCID: PMC10286576. DOI: 10.3389/fpsyg.2023.1130051.
Abstract
Updating and complementing a previous review on eye-tracking technology and the dynamics of natural gaze behavior in sports, this short review focuses on the progress concerning researched sports tasks, applied methods of gaze data collection and analysis, as well as derived gaze measures for the time interval of 2016-2022. To that end, a systematic review according to the PRISMA guidelines was conducted, searching Web of Science, PubMed Central, SPORTDiscus, and ScienceDirect for the keywords: eye tracking, gaze behavio*r, eye movement, and visual search. Thirty-one studies were identified for the review. On the one hand, a generally increased research interest and a wider area of researched sports, with a particular increase in studies of officials' gaze behavior, were diagnosed. On the other hand, a general lack of progress concerning sample sizes, numbers of trials, employed eye-tracking technology, and gaze analysis procedures must be acknowledged. Nevertheless, first attempts at automated gaze-cue allocation (GCA) in mobile eye-tracking studies were seen, potentially enhancing objectivity and alleviating the burden of manual workload inherently associated with conventional gaze analyses. Reinforcing the claims of the previous review, this review concludes by describing four distinct technological approaches to automating GCA, some of which are specifically suited to tackle the validity and generalizability issues associated with the current limitations of mobile eye-tracking studies on natural gaze behavior in sports.
13. Park SY, Holmqvist K, Niehorster DC, Huber L, Virányi Z. How to improve data quality in dog eye tracking. Behav Res Methods 2023;55:1513-1536. PMID: 35680764. PMCID: PMC10250523. DOI: 10.3758/s13428-022-01788-6.
Abstract
Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated if and how the morphology of dogs might interfere with tracking of P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. For this aim, we have conducted an eye-tracking experiment with dogs and humans, and investigated incidences of tracking interference, compared how they blinked, and examined how differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' face and eye can interfere with tracking methods of the systems, and dogs blink less often but their blinks are longer. Importantly, the lower quality of dog data lead to larger differences in how two different event detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to choice of algorithm in dog than human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations help future dog eye-tracking studies acquire quality data to enable robust comparisons of visual cognition between dogs and humans.
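Comparisons of data quality such as the ones above commonly express precision as root-mean-square sample-to-sample (RMS-S2S) deviation. A minimal sketch of that measure, computed over an interval in which the eye is assumed to be still, follows; it is an illustration of the general measure, not the paper's analysis code.

```python
import numpy as np

def rms_s2s(x_deg, y_deg):
    """RMS of angular distances between successive gaze samples (deg),
    computed over an interval in which the eye is assumed to be still."""
    d = np.hypot(np.diff(x_deg), np.diff(y_deg))
    return np.sqrt(np.mean(d ** 2))
```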
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria.
- Medical University Vienna, Vienna, Austria.
- University of Vienna, Vienna, Austria.
- Kenneth Holmqvist
- Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland
- Department of Psychology, Regensburg University, Regensburg, Germany
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Zsófia Virányi
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
14. Ahmad Rudin AM, Abd Rahman NH, Rosli SA, Asrullah M. Effect of Contrast Polarity Towards Eye Fixation Rates When Reading On Smartphone. Environment-Behaviour Proceedings Journal 2023;8:347-353. DOI: 10.21834/ebpj.v8i24.4680.
Abstract
This study investigated the effect of contrast polarity on eye fixation patterns when reading text on a smartphone in bright and dark conditions, as encountered in real-life situations. The number of fixations and the fixation duration showed no statistically significant difference (p = 0.160 and 0.099, respectively). However, emmetropic subjects showed higher values in bright conditions than myopic subjects (p = 0.046). This suggests that eye-movement efficiency in emmetropes is superior, possibly due to lower spherical order aberration as pupil size decreases in bright illumination.
15. Bischof WF, Anderson NC, Kingstone A. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality. PLoS One 2023;18:e0282030. PMID: 36800398. PMCID: PMC9937482. DOI: 10.1371/journal.pone.0282030.
Abstract
One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their head freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements-though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
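The cross-recurrence analysis used here can be sketched as a binary recurrence matrix that marks time pairs at which eye and head directions fall within an angular radius. The radius and the unit-vector representation below are illustrative choices, not the parameters of the study.

```python
import numpy as np

def cross_recurrence(eye_dirs, head_dirs, radius_deg=5.0):
    """Cross-recurrence between two series of unit direction vectors (N x 3 each):
    entry (i, j) is 1 if eye direction i and head direction j are within
    radius_deg of each other. Also returns the overall recurrence rate."""
    cosang = np.clip(eye_dirs @ head_dirs.T, -1.0, 1.0)
    ang = np.degrees(np.arccos(cosang))
    rec = (ang <= radius_deg).astype(int)
    return rec, rec.mean()
```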
Affiliation(s)
- Walter F. Bischof
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
16. D’Amelio A, Patania S, Bursic S, Cuculo V, Boccignone G. Using Gaze for Behavioural Biometrics. Sensors (Basel) 2023;23:1262. PMID: 36772302. PMCID: PMC9920149. DOI: 10.3390/s23031262.
Abstract
A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach is grounded in foraging theory, which provides a sound basis to capture the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising the foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is eventually accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view on both the analysis of eye-tracking data and prospective applications in this field.
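The Ornstein-Uhlenbeck building block of this gaze model, dx = theta*(mu - x)*dt + sigma*dW, can be simulated with a simple Euler-Maruyama scheme. The parameter values below are arbitrary illustrations, not those inferred in the paper.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt=0.002, n_steps=1000, rng=None):
    """Euler-Maruyama simulation of dx = theta*(mu - x)*dt + sigma*dW,
    e.g., one 'exploitation' (local fixational drift) component of gaze."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i] = x[i - 1] + theta * (mu - x[i - 1]) * dt + sigma * dw
    return x

# Example: strongly mean-reverting horizontal gaze around 0 deg
trace = simulate_ou(theta=20.0, mu=0.0, sigma=1.0, x0=2.0)
```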
Affiliation(s)
- Alessandro D’Amelio
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sabrina Patania
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sathya Bursic
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Vittorio Cuculo
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
17. Schneider A, Vollenwyder B, Krueger E, Mühlethaler C, Miller DB, Thurau J, Elfering A. Mobile eye tracking applied as a tool for customer experience research in a crowded train station. J Eye Mov Res 2023;16. PMID: 37927371. PMCID: PMC10624146. DOI: 10.16910/jemr.16.1.1.
Abstract
Train stations have increasingly become crowded, necessitating stringent requirements in the design of stations and commuter navigation through these stations. In this study, we explored the use of mobile eye tracking in combination with observation and a survey to gain knowledge on customer experience in a crowded train station. We investigated the utilization of mobile eye tracking in ascertaining customers' perception of the train station environment and analyzed the effect of a signalization prototype (visual pedestrian flow cues), which was intended for regulating pedestrian flow in a crowded underground passage. Gaze behavior, estimated crowd density, and comfort levels (an individual's comfort level in a certain situation) were measured before and after the implementation of the prototype. The results revealed that the prototype was visible in conditions of low crowd density. However, in conditions of high crowd density, the prototype was less visible, and the path choice was influenced by other commuters. Hence, herd behavior appeared to have a stronger effect than the implemented signalization prototype in conditions of high crowd density. Thus, mobile eye tracking in combination with observation and the survey successfully aided in understanding customers' perception of the train station environment on a qualitative level and supported the evaluation of the signalization prototype in the crowded underground passage. However, the analysis process was laborious, which could be an obstacle for its practical use in gaining customer insights.
Affiliation(s)
- Andrea Schneider
- University of Bern, Bern, Switzerland
- Ecole Polytechnique Fédéral de Lausanne EPFL, Lausanne, Switzerland
- Swiss Federal Railways SBB CFF FFS, Switzerland
- Eva Krueger
- Swiss Federal Railways SBB CFF FFS, Switzerland
18. Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023;55:364-416. PMID: 35384605. PMCID: PMC9535040. DOI: 10.3758/s13428-021-01762-8.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland.
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa.
- Department of Psychology, Regensburg University, Regensburg, Germany.
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
19.
Abstract
This chapter explores the current state of the art in eye tracking within 3D virtual environments. It begins with the motivation for eye tracking in Virtual Reality (VR) in psychological research, followed by descriptions of the hardware and software used for presenting virtual environments as well as for tracking eye and head movements in VR. This is followed by a detailed description of an example project on eye and head tracking while observers look at 360° panoramic scenes. The example is illustrated with descriptions of the user interface and program excerpts to show the measurement of eye and head movements in VR. The chapter continues with fundamentals of data analysis, in particular methods for the determination of fixations and saccades when viewing spherical displays. We then extend these methodological considerations to determining the spatial and temporal coordination of the eyes and head in VR perception. The chapter concludes with a discussion of outstanding problems and future directions for conducting eye- and head-tracking research in VR. We hope that this chapter will serve as a primer for those intending to implement VR eye tracking in their own research.
20. Huber L, Lonardo L, Völter CJ. Eye Tracking in Dogs: Achievements and Challenges. Comparative Cognition & Behavior Reviews 2023;18:33-58. PMID: 39045221. PMCID: PMC7616291. DOI: 10.3819/ccbr.2023.180005.
Abstract
In this article, we review eye-tracking studies with dogs (Canis familiaris) with a threefold goal; we highlight the achievements in the field of canine perception and cognition using eye tracking, then discuss the challenges that arise in the application of a technology that has been developed in human psychophysics, and finally propose new avenues in dog eye-tracking research. For the first goal, we present studies that investigated dogs' perception of humans, mainly faces, but also hands, gaze, emotions, communicative signals, goal-directed movements, and social interactions, as well as the perception of animations representing possible and impossible physical processes and animacy cues. We then discuss the present challenges of eye tracking with dogs, like doubtful picture-object equivalence, extensive training, small sample sizes, difficult calibration, and artificial stimuli and settings. We suggest possible improvements and solutions for these problems in order to achieve better stimulus and data quality. Finally, we propose the use of dynamic stimuli, pupillometry, arrival time analyses, mobile eye tracking, and combinations with behavioral and neuroimaging methods to further advance canine research and open up new scientific fields in this highly dynamic branch of comparative cognition.
Affiliation(s)
- Ludwig Huber
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
- Lucrezia Lonardo
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
- Christoph J Völter
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
21. Pueyo V, Yam JCS, Perez-Roche T, Balasanyan V, Ortin M, Garcia G, Prieto E, Pham C, Gutierrez D, Castillo O, Masia B, Alejandre A, Bakkali M, Ciprés M, Esteban-Ibañez E, Fanlo-Zarazaga A, Gonzalez I, Gutiérrez-Luna IZK, Pan X, Pinilla J, Romero-Sanz M, Sanchez-huerto V, Vilella M, Tinh NX, Hiep NX, Zhang X. Development of oculomotor control throughout childhood: A multicenter and multiethnic study. J Vis 2022;22:4. DOI: 10.1167/jov.22.13.4.
Affiliation(s)
- Victoria Pueyo
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Marta Ortin
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- I3A Institute for Research in Engineering, Universidad de Zaragoza, Zaragoza, Spain
- Gerardo Garcia
- Hospital Luis Sánchez Bulnes, Asociación Para Evitar la Ceguera (APEC), Mexico DF, Mexico
- Esther Prieto
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Chau Pham
- National Institute of Ophthalmology, Hanoi, Vietnam
- Diego Gutierrez
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- I3A Institute for Research in Engineering, Universidad de Zaragoza, Zaragoza, Spain
- Olimpia Castillo
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Belen Masia
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- I3A Institute for Research in Engineering, Universidad de Zaragoza, Zaragoza, Spain
- Adrian Alejandre
- I3A Institute for Research in Engineering, Universidad de Zaragoza, Zaragoza, Spain
- Mohamed Bakkali
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Marta Ciprés
- Lozano Blesa University Hospital, Zaragoza, Spain
- Alvaro Fanlo-Zarazaga
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Inmaculada Gonzalez
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Xian Pan
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Juan Pinilla
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- María Romero-Sanz
- Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
- Valeria Sanchez-huerto
- Hospital Luis Sánchez Bulnes, Asociación Para Evitar la Ceguera (APEC), Mexico DF, Mexico
- Marina Vilella
- Aragon Institute for Health Research (IIS Aragón), Zaragoza, Spain
22. Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Fixation classification: how to merge and select fixation candidates. Behav Res Methods 2022;54:2765-2776. PMID: 35023066. PMCID: PMC9729319. DOI: 10.3758/s13428-021-01723-1.
Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5∘), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0∘ and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
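The selection rules quantified in this paper can be sketched directly: merge neighbouring fixation candidates whenever the intervening gaze shift is smaller than a minimal saccade amplitude, then drop fixations shorter than a minimal duration. The candidate representation below is hypothetical; the default thresholds mirror the values recommended in the abstract (1.0° and 60 ms).

```python
def merge_and_select(candidates, min_saccade_amp=1.0, min_fix_dur=0.060):
    """candidates: list of dicts with 'onset', 'offset' (s) and 'x', 'y' (deg).
    Merge neighbours separated by a gaze shift < min_saccade_amp, then keep
    only fixations lasting at least min_fix_dur seconds."""
    merged = []
    for cand in candidates:
        if merged:
            prev = merged[-1]
            amp = ((cand["x"] - prev["x"]) ** 2 + (cand["y"] - prev["y"]) ** 2) ** 0.5
            if amp < min_saccade_amp:
                # absorb the candidate into the previous fixation
                prev["offset"] = cand["offset"]
                continue
        merged.append(dict(cand))
    return [f for f in merged if f["offset"] - f["onset"] >= min_fix_dur]
```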
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
23. Ban S, Lee YJ, Kim KR, Kim JH, Yeo WH. Advances in Materials, Sensors, and Integrated Systems for Monitoring Eye Movements. Biosensors (Basel) 2022;12:1039. PMID: 36421157. PMCID: PMC9688058. DOI: 10.3390/bios12111039.
Abstract
Eye movements show primary responses that reflect humans' voluntary intention and conscious selection. Because visual perception is one of the fundamental sensory interactions in the brain, eye movements contain critical information regarding physical/psychological health, perception, intention, and preference. With the advancement of wearable device technologies, the performance of monitoring eye tracking has been significantly improved. It also has led to myriad applications for assisting and augmenting human activities. Among them, electrooculograms, measured by skin-mounted electrodes, have been widely used to track eye motions accurately. In addition, eye trackers that detect reflected optical signals offer alternative ways without using wearable sensors. This paper outlines a systematic summary of the latest research on various materials, sensors, and integrated systems for monitoring eye movements and enabling human-machine interfaces. Specifically, we summarize recent developments in soft materials, biocompatible materials, manufacturing methods, sensor functions, systems' performances, and their applications in eye tracking. Finally, we discuss the remaining challenges and suggest research directions for future studies.
Affiliation(s)
- Seunghyeb Ban
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Yoon Jae Lee
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Ka Ram Kim
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Jong-Hoon Kim
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
| | - Woon-Hong Yeo
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30332, USA
- Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
| |
Collapse
|
24
|
Bekteshi S, Karlsson P, De Reyck L, Vermeerbergen K, Konings M, Hellin P, Aerts JM, Hallez H, Dan B, Monbaliu E. Eye movements and stress during eye-tracking gaming performance in children with dyskinetic cerebral palsy. Dev Med Child Neurol 2022; 64:1402-1415. [PMID: 35393636 DOI: 10.1111/dmcn.15237] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 03/15/2022] [Accepted: 03/15/2022] [Indexed: 11/28/2022]
Abstract
AIM This study aimed to explore eye movements and stress during eye-tracking gaming performance in children with dyskinetic cerebral palsy (CP) compared with typically developing children, and associations between eye-tracking performance, eye movements, stress, and participants' characteristics. METHOD This cohort study included 12 children with dyskinetic CP aged 5 to 12 years (mean age 8 years 7 months, standard deviation [SD] 2 years 3 months) and 23 typically developing children aged 5 to 13 years (mean age 9 years 0 months, SD 2 years 7 months). Participants played 10 eye-tracking games. Tobii X3-120 and Tobii Pro Lab were used to record and analyse eye movements. Stress was assessed through heart rate variability (HRV), recorded during rest and during eye-tracking performance using the Bittium Faros360° ECG Holter device. Eye-tracking performance was measured using gaming completion time. Fixation and saccade variables were used to quantify eye movements, and time- and frequency-domain variables to quantify HRV. Non-parametric statistics were used. RESULTS Gaming completion time was significantly different (p < 0.001) between groups, and it was negatively correlated with experience (rs = -0.63, p = 0.029). No significant differences were found between groups in fixation and saccade variables. HRV significantly changed from rest to eye-tracking performance only in typically developing children and not in children with dyskinetic CP. INTERPRETATION Children with dyskinetic CP took longer to perform the 10 games, especially the inexperienced users, indicating the importance of the early provision of eye-tracking training opportunities. It seems that eye-tracking tasks are not a source of increased stress and effort in children with dyskinetic CP. WHAT THIS PAPER ADDS Participants with dyskinetic cerebral palsy (CP) took twice as long to perform 10 eye-tracking games as typically developing peers. Participants with dyskinetic CP with previous eye-tracking experience performed the games faster. Fixation and saccade variables were not significantly different between children with and without dyskinetic CP. Heart rate variability showed no differences between rest and performance in participants with dyskinetic CP. Gross Motor Function Classification System, Manual Ability Classification System, and Viking Speech Scale levels were not correlated with the eye movements or stress variables.
Collapse
Affiliation(s)
- Saranda Bekteshi
- KU Leuven, Bruges Campus, Department of Rehabilitation Sciences, Research Group for Neurorehabilitation, Bruges, Belgium
| | - Petra Karlsson
- University of Sydney, Cerebral Palsy Alliance, Sydney, Australia
| | - Lieselot De Reyck
- KU Leuven, Bruges Campus, Department of Rehabilitation Sciences, Research Group for Neurorehabilitation, Bruges, Belgium
| | - Karen Vermeerbergen
- KU Leuven, Bruges Campus, Department of Rehabilitation Sciences, Research Group for Neurorehabilitation, Bruges, Belgium
| | - Marco Konings
- KU Leuven, Bruges Campus, Department of Rehabilitation Sciences, Research Group for Neurorehabilitation, Bruges, Belgium
| | | | - Jean-Marie Aerts
- KU Leuven, Department of Biosystems, Division of Animal and Human Health Engineering, Measure, Model and Manage Bioresponse (M3-BIORES), Leuven, Belgium
| | - Hans Hallez
- KU Leuven, Bruges Campus, Department of Computer Science, Mechatronics Research Group, Bruges, Belgium
| | - Bernard Dan
- Faculty of Medicine, Université Libre de Bruxelles, Brussels, Belgium
| | - Elegast Monbaliu
- KU Leuven, Bruges Campus, Department of Rehabilitation Sciences, Research Group for Neurorehabilitation, Bruges, Belgium
| |
Collapse
|
25
|
Großekathöfer JD, Seis C, Gamer M. Reality in a sphere: A direct comparison of social attention in the laboratory and the real world. Behav Res Methods 2022; 54:2286-2301. [PMID: 34918223 PMCID: PMC9579106 DOI: 10.3758/s13428-021-01724-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/08/2021] [Indexed: 11/24/2022]
Abstract
Humans often show reduced social attention in real situations, a finding rarely replicated in controlled laboratory studies. Virtual reality is supposed to allow for ecologically valid and at the same time highly controlled experiments. This study aimed to provide initial insights into the reliability and validity of using spherical videos viewed via a head-mounted display (HMD) to assess social attention. We chose five public places in the city of Würzburg and measured eye movements of 44 participants for 30 s at each location twice: Once in a real environment with mobile eye-tracking glasses and once in a virtual environment playing a spherical video of the location in an HMD with an integrated eye tracker. As hypothesized, participants demonstrated reduced social attention with less exploration of passengers in the real environment as compared to the virtual one. This is in line with earlier studies showing social avoidance in interactive situations. Furthermore, we only observed consistent gaze proportions on passengers across locations in virtual environments. These findings highlight that the potential for social interactions and an adherence to social norms are essential modulators of viewing behavior in social situations and cannot be easily simulated in laboratory contexts. However, spherical videos might be helpful for supplementing the range of methods in social cognition research and other fields. Data and analysis scripts are available at https://osf.io/hktdu/ .
Collapse
Affiliation(s)
- Jonas D Großekathöfer
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany.
| | - Christian Seis
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany
| | - Matthias Gamer
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany
| |
Collapse
|
26
|
Hunt R, Blackmore T, Mills C, Dicks M. Evaluating the integration of eye-tracking and motion capture technologies: Quantifying the accuracy and precision of gaze measures. Iperception 2022; 13:20416695221116652. [PMID: 36186610 PMCID: PMC9516427 DOI: 10.1177/20416695221116652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 07/10/2022] [Indexed: 11/18/2022] Open
Abstract
Integrating mobile eye tracking and optoelectronic motion capture enables point of gaze to be expressed within the laboratory co-ordinate system and presents a method not commonly applied during research examining dynamic behaviors, such as locomotion. This paper examines the quality of gaze data collected through this integration. Based on research suggesting that increased viewing distances are associated with reduced data quality, the accuracy and precision of gaze data were investigated as participants (N = 11) viewed floor-based targets at distances of 1–6 m. A mean accuracy of 2.55 ± 1.12° was identified; however, accuracy and precision measures (relative to targets) were significantly (p < .05) reduced at greater viewing distances. We then consider whether signal processing techniques may improve accuracy and precision and overcome issues associated with missing data. A 4th-order Butterworth lowpass filter with cut-off frequencies determined via autocorrelation did not significantly improve data quality; however, interpolation via quintic spline was sufficient to overcome gaps of up to 0.1 s. We conclude that the integration of gaze and motion capture presents a viable methodology in the study of human behavior and offers advantages for data collection, treatment, and analysis. We provide considerations for the collection, analysis, and treatment of gaze data that may help inform future methodological decisions.
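For readers who want to experiment with the signal-processing steps mentioned above, the following rough Python sketch combines a 4th-order zero-phase Butterworth lowpass filter with quintic-spline interpolation of short gaps. It is not the authors' pipeline: the fixed 10 Hz cutoff is a placeholder (the study derived cutoffs via autocorrelation), and the function assumes that no gaps longer than max_gap_s remain before filtering.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import InterpolatedUnivariateSpline

def smooth_and_fill(t, gaze, fs, cutoff_hz=10.0, max_gap_s=0.1):
    """Bridge short gaps in one gaze coordinate, then lowpass-filter it.

    t: sample times (s); gaze: gaze coordinate with NaNs for missing samples;
    fs: sampling frequency (Hz). cutoff_hz is an illustrative fixed value.
    """
    t = np.asarray(t, dtype=float)
    gaze = np.asarray(gaze, dtype=float)
    valid = ~np.isnan(gaze)
    # Quintic spline through the valid samples (needs at least six of them)
    spline = InterpolatedUnivariateSpline(t[valid], gaze[valid], k=5)
    filled = gaze.copy()
    i, n = 0, len(gaze)
    while i < n:
        if np.isnan(filled[i]):
            j = i
            while j < n and np.isnan(filled[j]):
                j += 1
            # Bridge the gap only if it is short enough
            if (t[j - 1] - t[i]) + 1.0 / fs <= max_gap_s:
                filled[i:j] = spline(t[i:j])
            i = j
        else:
            i += 1
    # 4th-order zero-phase Butterworth lowpass filter
    b, a = butter(4, cutoff_hz / (fs / 2), btype='low')
    return filtfilt(b, a, filled)  # assumes no long gaps (NaNs) remain
```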
Collapse
Affiliation(s)
- Rhys Hunt
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
| | - Tim Blackmore
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
| | - Chris Mills
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
| | - Matt Dicks
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
| |
Collapse
|
27
|
Hoogerbrugge AJ, Strauch C, Oláh ZA, Dalmaijer ES, Nijboer TCW, Van der Stigchel S. Seeing the Forrest through the trees: Oculomotor metrics are linked to heart rate. PLoS One 2022; 17:e0272349. [PMID: 35917377 PMCID: PMC9345484 DOI: 10.1371/journal.pone.0272349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 07/19/2022] [Indexed: 11/18/2022] Open
Abstract
Fluctuations in a person's arousal accompany mental states such as drowsiness, mental effort, or motivation, and have a profound effect on task performance. Here, we investigated the link between two central measures affected by arousal level: heart rate and eye movements. In contrast to heart rate, eye movements can be inferred remotely and unobtrusively, and there is evidence that oculomotor metrics (i.e., fixations and saccades) are indicators of aspects of arousal that go hand in hand with changes in mental effort, motivation, or task type. Gaze data and heart rate of 14 participants during film viewing were used in Random Forest models, the results of which show that blink rate and duration, and the movement aspect of oculomotor metrics (i.e., velocities and amplitudes), are linked to heart rate more strongly than the number or duration of fixations and saccades. We discuss that eye movements are not only linked to heart rate, but that both may be similarly influenced by a common underlying arousal system. These findings provide new pathways for the remote measurement of arousal and its link to psychophysiological features.
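A minimal sketch of the kind of model described above: a scikit-learn Random Forest regressing heart rate on per-window oculomotor features. The synthetic feature matrix and target below merely stand in for the real blink, fixation, and saccade metrics and heart rate values, and the model settings are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-ins for per-window oculomotor features (e.g., blink rate, blink
# duration, saccade velocity and amplitude) and mean heart rate per window
X = rng.normal(size=(200, 5))
y = 0.5 * X[:, 0] + rng.normal(size=200)

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print(f"mean cross-validated R^2: {scores.mean():.2f}")

# Feature importances suggest which oculomotor metrics carry the link
model.fit(X, y)
print(model.feature_importances_)
```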
Collapse
Affiliation(s)
- Alex J. Hoogerbrugge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
| | - Christoph Strauch
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
| | - Zoril A. Oláh
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
| | - Edwin S. Dalmaijer
- School of Psychological Science, University of Bristol, Bristol, United Kingdom
| | - Tanja C. W. Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Center of Excellence for Rehabilitation Medicine, UMC Utrecht Brain Center, University Medical Center Utrecht, De Hoogstraat Rehabilitation, Utrecht, Netherlands
- Department of Rehabilitation, Physical Therapy Science & Sports, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
| | | |
Collapse
|
28
|
Employing Eye Tracking to Study Visual Attention to Live Streaming: A Case Study of Facebook Live. SUSTAINABILITY 2022. [DOI: 10.3390/su14127494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In recent years, the COVID-19 pandemic has led to the development of a new business model, “Live Streaming + Ecommerce”, which is a new method for commercial sales that shares the goal of sustainable economic growth (SDG 8). As information technology finds its way into the digital lives of internet users, the real-time and interactive nature of live streaming has overturned the traditional entertainment experience of audio and video content, moving towards a more nuanced division of labor with multiple applications. This study used a portable eye tracker to collect eye movement information from participants watching Facebook Live, with 31 participants who had experience using the live streaming platform. The four eye movement indicators, namely, latency of first fixation (LFF), duration of first fixation (DFF), total fixation durations (TFD), and the number of fixations (NOF), were used to analyze the distribution of the visual attention in each region of interest (ROI) and explore the study questions based on the ROIs. The findings of this study were as follows: (1) the fixation order of the ROIs in the live ecommerce platform differed between participants of different sexes; (2) the DFF of the ROIs in the live ecommerce platform differed among participants of different sexes; and (3) regarding the ROIs of participants on the live ecommerce platform, participants of different sexes showed the same attention to the live products according to the TFD and NOF eye movement indicators. This study explored the visual search behaviors of existing consumers watching live ecommerce and provides the results as a reference for operators and researchers of live streaming platforms.
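The four ROI-based indicators used in this study are straightforward to derive once fixations have been assigned to regions of interest. The sketch below assumes a hypothetical list of time-ordered fixations labelled with ROI names; it is a generic illustration, not the authors' analysis pipeline.

```python
import numpy as np

def roi_metrics(fixations, roi):
    """Compute LFF, DFF, TFD, and NOF for one region of interest.

    fixations: time-ordered list of (onset_s, duration_s, roi_label) tuples,
    a hypothetical structure for illustration.
    """
    in_roi = [(onset, dur) for onset, dur, label in fixations if label == roi]
    if not in_roi:
        return None
    onsets, durations = zip(*in_roi)
    return {
        'LFF': onsets[0],                 # latency of first fixation on the ROI
        'DFF': durations[0],              # duration of that first fixation
        'TFD': float(np.sum(durations)),  # total fixation duration on the ROI
        'NOF': len(in_roi),               # number of fixations on the ROI
    }
```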
Collapse
|
29
|
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022:10.3758/s13428-021-01763-7. [PMID: 35715615 DOI: 10.3758/s13428-021-01763-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/27/2021] [Indexed: 11/08/2022]
Abstract
Detecting eye movements in raw eye tracking data is a well-established research area by itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved provided a shared understanding of the pursued goal. This is often formalised via defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable, impeding a common understanding. In this review, we systematically describe and analyse evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from the quantification of the discovered errors. In our analysis we separate these two steps where possible, enabling their almost arbitrary combinations in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and theoretically. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation .
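To make the distinction between establishing event correspondences and quantifying errors concrete, the sketch below implements one simple, hypothetical strategy: each ground-truth event is greedily matched to the unused predicted event with the largest temporal overlap, after which hits, misses, false alarms, and an event-level F1 score are derived. It only illustrates the two-step structure discussed in the review, not any specific strategy evaluated there.

```python
def match_events(true_events, pred_events, min_overlap=0.5):
    """Greedy maximum-overlap matching between true and predicted events.

    Events are (onset_s, offset_s) tuples; min_overlap is the fraction of a
    true event that must be covered for a match to count.
    """
    used, hits = set(), 0
    for t_on, t_off in true_events:
        best, best_ov = None, 0.0
        for j, (p_on, p_off) in enumerate(pred_events):
            overlap = max(0.0, min(t_off, p_off) - max(t_on, p_on))
            if j not in used and overlap > best_ov:
                best, best_ov = j, overlap
        if best is not None and best_ov >= min_overlap * (t_off - t_on):
            used.add(best)
            hits += 1
    misses = len(true_events) - hits
    false_alarms = len(pred_events) - hits
    f1 = 2 * hits / (2 * hits + misses + false_alarms) if hits else 0.0
    return hits, misses, false_alarms, f1
```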
Collapse
|
30
|
Automated Classification of Cognitive Workload Levels Based on Psychophysiological and Behavioural Variables of Ex-Gaussian Distributional Features. Brain Sci 2022; 12:brainsci12050542. [PMID: 35624928 PMCID: PMC9138891 DOI: 10.3390/brainsci12050542] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/16/2022] Open
Abstract
This study focuses on applying ex-Gaussian parameters of eye-tracking and cognitive measures to the classification of cognitive workload levels. A computerised version of the digit symbol substitution test was developed in order to perform the case study. The dataset applied in the study is a collection of variables related to eye tracking (saccades, fixations, and blinks) as well as test-related variables, including response time and the number of correct responses. Applying ex-Gaussian modelling to all collected data was beneficial for detecting dissimilarities between groups. An independent classification approach was applied, using several classical classification methods. The overall classification accuracy reached almost 96%. Furthermore, an interpretable machine learning model based on logistic regression was used to rank the most valuable features, which allowed us to examine their importance.
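As a small illustration of the ex-Gaussian modelling step, the sketch below fits an exponentially modified Gaussian to a synthetic sample of fixation durations using SciPy's exponnorm distribution and reports the mu, sigma, and tau parameters. The data and units are made up, and the study's actual fitting procedure may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical fixation durations in ms (Gaussian plus exponential tail)
rng = np.random.default_rng(1)
durations = rng.normal(200, 30, size=500) + rng.exponential(80, size=500)

# Maximum-likelihood fit of an ex-Gaussian distribution
K, mu, sigma = stats.exponnorm.fit(durations)
tau = K * sigma  # SciPy parameterizes the exponential component as K = tau / sigma
print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
```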
Collapse
|
31
|
Lappi O. Gaze Strategies in Driving-An Ecological Approach. Front Psychol 2022; 13:821440. [PMID: 35360580 PMCID: PMC8964278 DOI: 10.3389/fpsyg.2022.821440] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 02/07/2022] [Indexed: 01/16/2023] Open
Abstract
Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful to understand the cognitive basis of this performance, and "the human advantage." Driving is a domain where these techniques may be deployed, in tasks ranging from rigorously controlled laboratory settings through high-fidelity simulations to naturalistic experiments in the wild. This research has revealed robust patterns that can be reliably identified and replicated in the field and reproduced in the lab. The purpose of this review is to cover the basics of what is known about these gaze behaviors, and some of their implications for understanding visually guided steering. The phenomena reviewed will be of interest to those working on any domain where visual guidance and control with similar task demands are involved (e.g., many sports). The paper is intended to be accessible to the non-specialist, without oversimplifying the complexity of real-world visual behavior. The literature reviewed will provide an information base useful for researchers working on oculomotor behaviors and physiology in the lab who wish to extend their research into more naturalistic locomotor tasks, or researchers in more applied fields (sports, transportation) who wish to bring aspects of the real-world ecology under experimental scrutiny. As part of a Research Topic on gaze strategies in closed self-paced tasks, this aspect of the driving task is discussed in particular. It is emphasized why it is important to carefully separate the visual strategies of driving (quite closed and self-paced) from visual behaviors relevant to other forms of driver behavior (an open-ended menagerie of behaviors). There is always a balance to strike between ecological complexity and experimental control. One way to reconcile these demands is to look for natural, real-world tasks and behavior that are rich enough to be interesting yet sufficiently constrained and well-understood to be replicated in simulators and the lab. This ecological approach to driving as a model behavior, and the way the connection between "lab" and "real world" can be spanned in this research, is of interest to anyone keen to develop more ecologically representative designs for studying human gaze behavior.
Collapse
Affiliation(s)
- Otto Lappi
- Cognitive Science/TRU, University of Helsinki, Helsinki, Finland
| |
Collapse
|
32
|
Kim SY, Moon BY, Cho HG, Yu DS. Quantitative Evaluation of the Association Between Fixation Stability and Phoria During Short-Term Binocular Viewing. Front Neurosci 2022; 16:721665. [PMID: 35368249 PMCID: PMC8965591 DOI: 10.3389/fnins.2022.721665] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 02/11/2022] [Indexed: 11/29/2022] Open
Abstract
Purpose Fixation stability for binocular anomalies with a phoria cannot be detected by direct observation. This study aimed to quantitatively evaluate fixation stability using an eye tracker, rather than direct observation, in binocular vision with abnormal and normal phorias. Methods Thirty-five and 25 participants with abnormal and normal phoria, respectively, were included in the study. The horizontal and vertical gaze points and convergence were recorded for 10 s using a remote eye tracker while binocularly viewing a target on a display screen 550 mm away. Fixation stability was quantified using bivariate contour ellipse areas (BCEA). Results Fixation stability evaluated across all participants as a single cluster was lower in the abnormal phoria group than in the normal phoria group (p = 0.005). There was no difference between the two groups in the per-participant BCEA evaluation (p = 0.66). Fixation stability was also more strongly related to convergence in the abnormal phoria group than in the normal phoria group (r = 0.769, p < 0.001; r = 0.417, p = 0.038, respectively). Conclusion This is the first study to evaluate fixation stability using an eye tracker to differentiate between abnormal and normal phoria in non-strabismic participants; the findings may provide evidence for improving the evaluation of binocular vision anomalies not detected with clinical diagnostic tests.
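The BCEA measure used here has a standard closed form, BCEA = 2kπ σx σy √(1 − ρ²), where σx and σy are the standard deviations of the horizontal and vertical gaze positions, ρ is their correlation, and k is a scaling factor set by the chosen coverage proportion. A minimal sketch, assuming a 68% coverage proportion (the proportion actually used in the study is not stated in the abstract):

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area of gaze samples x, y (in deg, so deg^2)."""
    k = -np.log(1.0 - p)  # scaling factor for the desired coverage proportion p
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
```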
Collapse
|
33
|
Beyond screen time: Using head-mounted eye tracking to study natural behavior. ADVANCES IN CHILD DEVELOPMENT AND BEHAVIOR 2022; 62:61-91. [PMID: 35249686 DOI: 10.1016/bs.acdb.2021.11.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Head-mounted eye tracking is a new method that allows researchers to catch a glimpse of what infants and children see during naturalistic activities. In this chapter, we review how mobile, wearable eye trackers improve the construct validity of important developmental constructs, such as visual object experiences and social attention, in ways that would be impossible using screen-based eye tracking. Head-mounted eye tracking improves ecological validity by allowing researchers to present more realistic and complex visual scenes, create more interactive experimental situations, and examine how the body influences what infants and children see. As with any new method, there are difficulties to overcome. Accordingly, we identify what aspects of head-mounted eye-tracking study design affect the measurement quality, interpretability of the results, and efficiency of gathering data. Moreover, we provide a summary of best practices aimed at allowing researchers to make well-informed decisions about whether and how to apply head-mounted eye tracking to their own research questions.
Collapse
|
34
|
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. [PMID: 35323868 PMCID: PMC8963670 DOI: 10.1167/jov.22.4.12] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments removing central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality, in which participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and studying the head's contributions to visual attention. The time-course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing; the presence of masks did not significantly impact head-scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
Collapse
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany.,
| | - Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France.,
| | | | - Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France., http://pagesperso.ls2n.fr/~lecallet-p/index.html
| |
Collapse
|
35
|
Dahl M, Tryding M, Heckler A, Nyström M. Quiet Eye and Computerized Precision Tasks in First-Person Shooter Perspective Esport Games. Front Psychol 2021; 12:676591. [PMID: 34819892 PMCID: PMC8606425 DOI: 10.3389/fpsyg.2021.676591] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 10/15/2021] [Indexed: 11/13/2022] Open
Abstract
The gaze behavior in sports and other applied settings has been studied for more than 20 years. A common finding is related to the “quiet eye” (QE), predicting that the duration of the last fixation before a critical event is associated with higher performance. Unlike previous studies conducted in applied settings with mobile eye trackers, we investigate the QE in a context similar to esport, in which participants click the mouse to hit targets presented on a computer screen under different levels of cognitive load. Simultaneously, eye and mouse movements were tracked using a high-end remote eye tracker at 300 Hz. Consistent with previous studies, we found that longer QE fixations were associated with higher performance. Increasing the cognitive load delayed the onset of the QE fixation, but had no significant influence on the QE duration. We discuss the implications of our results in the context of how the QE is defined, the quality of the eye-tracker data, and the type of analysis applied to QE data.
Collapse
Affiliation(s)
- Mats Dahl
- Department of Psychology, Lund University, Lund, Sweden
| | | | | | | |
Collapse
|
36
|
Dias EC, Van Voorhis AC, Braga F, Todd J, Lopez-Calderon J, Martinez A, Javitt DC. Impaired Fixation-Related Theta Modulation Predicts Reduced Visual Span and Guided Search Deficits in Schizophrenia. Cereb Cortex 2021; 30:2823-2833. [PMID: 32030407 DOI: 10.1093/cercor/bhz277] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing the efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation during a Guided Visual Search task in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated the integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed in fixation-related theta phase-reset across conditions, which fully predicted the reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia.
Collapse
Affiliation(s)
- Elisa C Dias
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920, USA
- Department of Psychiatry, New York University School of Medicine, New York, NY 10016, USA
| | - Abraham C Van Voorhis
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920 USA
| | - Filipe Braga
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920 USA
| | - Julianne Todd
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920 USA
| | - Javier Lopez-Calderon
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920 USA
| | - Antigona Martinez
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920, USA
- Department of Experimental Therapeutics, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
| | - Daniel C Javitt
- Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10920, USA
- Department of Experimental Therapeutics, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
| |
Collapse
|
37
|
Abstract
This study demonstrates evidence for a foundational process underlying active vision in older infants during object play. Using head-mounted eye-tracking and motion capture, looks to an object are shown to be tightly linked to and synchronous with a stilled head, regardless of the duration of gaze, for infants 12 to 24 months of age. Despite being a developmental period of rapid and marked changes in motor abilities, the dynamic coordination of head stabilization and sustained gaze to a visual target is developmentally invariant during the examined age range. The findings indicate that looking with an aligned head and eyes is a fundamental property of human vision and highlights the importance of studying looking behavior in freely moving perceivers in everyday contexts, opening new questions about the role of body movement in both typical and atypical development of visual attention.
Collapse
Affiliation(s)
- Jeremy I Borjon
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA.,
| | - Drew H Abney
- Department of Psychology, University of Georgia, Athens, GA, USA.,
| | - Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Department of Psychology, University of Texas, Austin, TX, USA
| | - Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- School of Psychology, University of East Anglia, East Anglia, UK
| |
Collapse
|
38
|
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. [PMID: 34125160 PMCID: PMC8212426 DOI: 10.1167/jov.21.6.8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
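The final stage of such detectors, velocity thresholding, can be sketched as follows. This is only a generic illustration of that stage with arbitrary parameter values; the paper's actual contribution, the implicit piecewise-polynomial denoising that precedes thresholding, is not reproduced here.

```python
import numpy as np

def detect_saccades(t, x, y, vel_threshold=30.0, min_dur=0.010):
    """Return (onset, offset) times of samples whose speed exceeds a threshold.

    t in s, x and y in deg, vel_threshold in deg/s; events shorter than
    min_dur are discarded. Assumes the signal has already been denoised.
    """
    t, x, y = (np.asarray(a, dtype=float) for a in (t, x, y))
    speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
    above = speed > vel_threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if t[i - 1] - t[start] >= min_dur:
                events.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_dur:
        events.append((t[start], t[-1]))
    return events
```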
Collapse
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA.,
| | - Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA.,
| | - John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA.,
| | - Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA.,
| | - Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA.,
| |
Collapse
|
39
|
Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. [PMID: 34089167 PMCID: PMC8460493 DOI: 10.3758/s13414-021-02326-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/03/2021] [Indexed: 12/15/2022]
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
Collapse
|
40
|
Stelter M, Rommel M, Degner J. (Eye-) Tracking the Other-Race Effect: Comparison of Eye Movements During Encoding and Recognition of Ingroup Faces With Proximal and Distant Outgroup Faces. SOCIAL COGNITION 2021. [DOI: 10.1521/soco.2021.39.3.366] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
People experience difficulties recognizing faces of ethnic outgroups, known as the other-race effect. The present eye-tracking study investigates if this effect is related to differences in visual attention to ingroup and outgroup faces. We measured gaze fixations to specific facial features and overall eye-movement activity level during an old/new recognition task comparing ingroup faces with proximal and distal ethnic outgroup faces. Recognition was best for ingroup faces and decreased gradually for proximal and distal outgroup faces. Participants attended more to the eyes of ingroup faces than outgroup faces, but this effect was unrelated to recognition performance. Ingroup-outgroup differences in eye-movement activity level did not emerge during the study phase, but during the recognition phase, with ingroup-outgroup differences varying as a function of recognition accuracy and old/new effects. Overall, ingroup-outgroup effects on recognition performance and eye movements were more pronounced for recognition of new items, emphasizing the role of retrieval processes.
Collapse
|
41
|
Konovalov A, Ruff CC. Enhancing models of social and strategic decision making with process tracing and neural data. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2021; 13:e1559. [PMID: 33880846 DOI: 10.1002/wcs.1559] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 02/26/2021] [Accepted: 03/24/2021] [Indexed: 11/11/2022]
Abstract
Every decision we take is accompanied by a characteristic pattern of response delay, gaze position, pupil dilation, and neural activity. Nevertheless, many models of social decision making neglect the corresponding process tracing data and focus exclusively on the final choice outcome. Here, we argue that this is a mistake, as the use of process data can help to build better models of human behavior, create better experiments, and improve policy interventions. Specifically, such data allow us to unlock the "black box" of the decision process and evaluate the mechanisms underlying our social choices. Using these data, we can directly validate latent model variables, arbitrate between competing personal motives, and capture information processing strategies. These benefits are especially valuable in social science, where models must predict multi-faceted decisions that are taken in varying contexts and are based on many different types of information. This article is categorized under: Economics > Interactive Decision-Making; Neuroscience > Cognition; Psychology > Reasoning and Decision Making.
Collapse
Affiliation(s)
- Arkady Konovalov
- Department of Economics, Zurich Center for Neuroeconomics (ZNE), University of Zurich
| | - Christian C Ruff
- Department of Economics, Zurich Center for Neuroeconomics (ZNE), University of Zurich
| |
Collapse
|
42
|
Abstract
Eye trackers are sometimes used to study the miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analysed by examining the temporal spectrum composition of the recorded gaze position signal, allowing its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on, and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial eye data. These results suggest that color in the eye-movement recordings was due to filters for all eye trackers except the most precise eye tracker, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should be careful to examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
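One generic way to quantify the "color" of a gaze position signal is to fit a power law to its power spectral density, as in the sketch below: a slope near zero indicates white (flat) dynamics, whereas increasingly negative slopes indicate colored dynamics. This is a simplified illustration under those assumptions, not the exact measure or procedure used in the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(gaze, fs):
    """Estimate the log-log slope of a gaze position signal's power spectrum."""
    f, pxx = welch(gaze, fs=fs, nperseg=min(len(gaze), 1024))
    keep = f > 0  # drop the DC bin before taking logarithms
    slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    return slope
```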
Collapse
|
43
|
D'Amelio A, Boccignone G. Gazing at Social Interactions Between Foraging and Decision Theory. Front Neurorobot 2021; 15:639999. [PMID: 33859558 PMCID: PMC8042312 DOI: 10.3389/fnbot.2021.639999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 03/09/2021] [Indexed: 11/30/2022] Open
Abstract
Finding the underlying principles of social attention in humans seems to be essential for the design of the interaction between natural and artificial agents. Here, we focus on the computational modeling of gaze dynamics as exhibited by humans when perceiving socially relevant multimodal information. The audio-visual landscape of social interactions is distilled into a number of multimodal patches that convey different social value, and we work under the general frame of foraging as a tradeoff between local patch exploitation and landscape exploration. We show that the spatio-temporal dynamics of gaze shifts can be parsimoniously described by Langevin-type stochastic differential equations triggering a decision equation over time. In particular, value-based patch choice and handling is reduced to a simple multi-alternative perceptual decision making that relies on a race-to-threshold between independent continuous-time perceptual evidence integrators, each integrator being associated with a patch.
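The decision stage described above, a race between independent evidence integrators, can be illustrated in a few lines. The sketch below simulates noisy accumulators with fixed drift rates racing to a common threshold and returns the winning patch and the crossing time; all parameter values are arbitrary, and the Langevin-type gaze-shift dynamics of the full model are not reproduced.

```python
import numpy as np

def race_to_threshold(drifts, threshold=1.0, dt=0.01, noise_sd=0.1,
                      max_t=10.0, rng=None):
    """Simulate independent accumulators racing to a common decision threshold.

    drifts: one drift rate per candidate patch. Returns the index of the
    accumulator with the most evidence and the elapsed time.
    """
    rng = np.random.default_rng() if rng is None else rng
    drifts = np.asarray(drifts, dtype=float)
    evidence = np.zeros_like(drifts)
    t = 0.0
    while evidence.max() < threshold and t < max_t:
        evidence += drifts * dt + noise_sd * np.sqrt(dt) * rng.normal(size=drifts.shape)
        t += dt
    return int(np.argmax(evidence)), t

# Example: three patches whose (hypothetical) social value sets the drift rate
winner, rt = race_to_threshold([0.8, 0.5, 0.2])
```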
Collapse
Affiliation(s)
- Alessandro D'Amelio
- PHuSe Lab, Department of Computer Science, Universitá degli Studi di Milano, Milan, Italy
| | - Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, Universitá degli Studi di Milano, Milan, Italy
| |
Collapse
|
44
|
Bosworth RG, Stone A. Rapid development of perceptual gaze control in hearing native signing Infants and children. Dev Sci 2021; 24:e13086. [PMID: 33484575 DOI: 10.1111/desc.13086] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 11/23/2020] [Accepted: 01/19/2021] [Indexed: 11/30/2022]
Abstract
Children's gaze behavior reflects emergent linguistic knowledge and real-time language processing of speech, but little is known about naturalistic gaze behaviors while watching signed narratives. Measuring gaze patterns in signing children could uncover how they master perceptual gaze control during a time of active language learning. Gaze patterns were recorded using a Tobii X120 eye tracker, in 31 non-signing and 30 signing hearing infants (5-14 months) and children (2-8 years) as they watched signed narratives on video. Intelligibility of the signed narratives was manipulated by presenting them naturally and in video-reversed ("low intelligibility") conditions. This video manipulation was used because it distorts semantic content, while preserving most surface phonological features. We examined where participants looked, using linear mixed models with Language Group (non-signing vs. signing) and Video Condition (Forward vs. Reversed), controlling for trial order. Non-signing infants and children showed a preference to look at the face as well as areas below the face, possibly because their gaze was drawn to the moving articulators in signing space. Native signing infants and children demonstrated resilient, face-focused gaze behavior. Moreover, their gaze behavior was unchanged for video-reversed signed narratives, similar to what was seen for adult native signers, possibly because they already have efficient highly focused gaze behavior. The present study demonstrates that human perceptual gaze control is sensitive to visual language experience over the first year of life and emerges early, by 6 months of age. Results have implications for the critical importance of early visual language exposure for deaf infants. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=2ahWUluFAAg.
Collapse
Affiliation(s)
- Rain G Bosworth
- National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA
| | - Adam Stone
- Department of Psychology, University of California, San Diego, CA, USA
| |
Collapse
|
45
|
Kaczorowska M, Plechawska-Wójcik M, Tokovarov M. Interpretable Machine Learning Models for Three-Way Classification of Cognitive Workload Levels for Eye-Tracking Features. Brain Sci 2021; 11:brainsci11020210. [PMID: 33572232 PMCID: PMC7914927 DOI: 10.3390/brainsci11020210] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 01/12/2021] [Accepted: 02/03/2021] [Indexed: 11/16/2022] Open
Abstract
This paper focuses on the assessment of cognitive workload level using selected machine learning models. In the study, eye-tracking data were gathered from 29 healthy volunteers during examination with three versions of a computerised digit symbol substitution test (DSST). Understanding cognitive workload is of great importance in analysing human mental fatigue and the performance of intellectual tasks. It is also essential for explaining cognitive processes in the brain. Eight three-class classification machine learning models were constructed and analysed. Furthermore, interpretable machine learning techniques were applied to obtain measures of feature importance and each feature's contribution to cognitive function. These measures allowed the quality of classification to be improved while lowering the number of applied features to six or eight, depending on the model. Moreover, the explainable machine learning approach provided valuable insights into the processes accompanying various levels of cognitive workload. The main classification performance metrics, such as F1, recall, precision, accuracy, and the area under the receiver operating characteristic curve (ROC AUC), were used to assess the quality of classification quantitatively. The best result obtained on the complete feature set was as high as 0.95 (F1); however, interpretation of feature importance allowed the result to be increased to 0.97 with only seven of the 20 features applied.
Collapse
|
46
|
Gurtner LM, Hartmann M, Mast FW. Eye movements during visual imagery and perception show spatial correspondence but have unique temporal signatures. Cognition 2021; 210:104597. [PMID: 33508576 DOI: 10.1016/j.cognition.2021.104597] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 01/07/2021] [Accepted: 01/08/2021] [Indexed: 11/20/2022]
Abstract
Eye fixation patterns during mental imagery are similar to those during perception of the same picture, suggesting that oculomotor mechanisms play a role in mental imagery (i.e., the "looking at nothing" effect). Previous research has focused on the spatial similarities of eye movements during perception and mental imagery. The primary aim of this study was to assess whether the spatial similarity translates to the temporal domain. We used recurrence quantification analysis (RQA) to assess the temporal structure of eye fixations in visual perception and mental imagery and we compared the temporal as well as the spatial characteristics in mental imagery with perception by means of Bayesian hierarchical regression models. We further investigated how person and picture-specific characteristics contribute to eye movement behavior in mental imagery. Working memory capacity and mental imagery abilities were assessed to either predict gaze dynamics in visual imagery or to moderate a possible correspondence between spatial or temporal gaze dynamics in perception and mental imagery. We were able to show the spatial similarity of fixations between visual perception and imagery and we provide first evidence for its moderation by working memory capacity. Interestingly, the temporal gaze dynamics in mental imagery were unrelated to those in perception and their variance between participants was not explained by variance in visuo-spatial working memory capacity or vividness of mental images. The semantic content of the imagined pictures was the only meaningful predictor of temporal gaze dynamics. The spatial correspondence reflects shared spatial structure of mental images and perceived pictures, while the unique temporal gaze behavior could be driven by generation, maintenance and protection processes specific to visual imagery. The unique temporal gaze dynamics offer a window to new insights into the genuine process of mental imagery independent of its similarity to perception.
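Recurrence quantification analysis of a fixation sequence starts from a recurrence matrix in which two fixations count as recurrent when their positions fall within a chosen radius of each other. The sketch below computes only the most basic RQA measure, the recurrence rate, from hypothetical fixation coordinates; the study used a fuller set of RQA measures and Bayesian hierarchical models that are not shown here.

```python
import numpy as np

def recurrence_rate(fix_xy, radius=2.0):
    """Percentage of fixation pairs that are recurrent (positions in deg)."""
    fix_xy = np.asarray(fix_xy, dtype=float)
    n = len(fix_xy)
    if n < 2:
        return 0.0
    # Pairwise distances between fixation positions
    d = np.linalg.norm(fix_xy[:, None, :] - fix_xy[None, :, :], axis=-1)
    recurrent = d < radius
    # Count recurrent pairs above the main diagonal (i < j)
    n_rec = np.triu(recurrent, k=1).sum()
    return 100.0 * n_rec / (n * (n - 1) / 2)
```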
Collapse
Affiliation(s)
- Lilla M Gurtner
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland.
| | - Matthias Hartmann
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland; Faculty of Psychology, UniDistance Suisse, Überlandstrasse 12, 3900 Brig, Switzerland
| | - Fred W Mast
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
| |
Collapse
|
47
|
Abstract
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
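The two established precision measures on which the paper's new characterizations build, RMS-S2S and STD, can be computed from a window of gaze samples recorded during a fixation as in the sketch below. Combining the horizontal and vertical components in this way is one common convention and may differ in detail from the paper's implementation.

```python
import numpy as np

def precision_measures(x, y):
    """RMS-S2S and STD of gaze samples (x, y in deg) during a fixation.

    RMS-S2S: root mean square of the sample-to-sample displacements.
    STD: dispersion of the samples around their mean position.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = np.diff(x), np.diff(y)
    rms_s2s = np.sqrt(np.mean(dx ** 2 + dy ** 2))
    std = np.sqrt(np.var(x) + np.var(y))
    return rms_s2s, std
```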
Collapse
|
48
|
Stein N. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. Iperception 2021; 12:2041669520983338. [PMID: 33628410 PMCID: PMC7883159 DOI: 10.1177/2041669520983338] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 11/16/2020] [Indexed: 11/15/2022] Open
Abstract
A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and the latencies ranged from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
Collapse
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Muenster, Muenster, Germany
| |
Collapse
|
49
|
GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behav Res Methods 2020; 52:1244-1253. [PMID: 31898293 PMCID: PMC7280338 DOI: 10.3758/s13428-019-01314-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data of the Tobii Pro Glasses 2 head-mounted eye tracker as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software provides the following functionality written in MATLAB: (1) a graphical interface for navigating the study- and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2's gaze direction, pupil size, gyroscope and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
Collapse
|
50
|
Abstract
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs’ Pupil in 3D mode, and (iv) Pupil-Labs’ Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1∘ increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of its characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
Collapse
|