1. Sadeghi R, Ressmeyer R, Yates J, Otero-Millan J. OpenIris - An Open Source Framework for Video-Based Eye-Tracking Research and Development. bioRxiv 2024:2024.02.27.582401. [PMID: 38463977; PMCID: PMC10925248; DOI: 10.1101/2024.02.27.582401]
Abstract
Eye-tracking is an essential tool in many fields, yet existing solutions are often limited for customized applications due to cost or lack of flexibility. We present OpenIris, an adaptable and user-friendly open-source framework for video-based eye-tracking. OpenIris is developed in C# with a modular design that allows further extension and customization through plugins for different hardware systems, tracking, and calibration pipelines. It can be remotely controlled via a network interface from other devices or programs. Eye movements can be recorded online from a camera stream or offline by post-processing recorded videos. Example plugins have been developed to track eye motion in 3-D, including torsion. Currently implemented binocular pupil-tracking pipelines can achieve frame rates of more than 500 Hz. With the OpenIris framework, we aim to fill a gap in the research tools available for high-precision and high-speed eye-tracking, especially in environments that require custom solutions that are not currently well served by commercial eye-trackers.
Affiliation(s)
- Roksana Sadeghi: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, California, USA
- Ryan Ressmeyer: Bioengineering, University of Washington, Seattle, Washington, USA
- Jacob Yates: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, California, USA
- Jorge Otero-Millan: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, California, USA; Department of Neurology, Johns Hopkins University, Baltimore, Maryland, USA
2. Falch L, Lohan KS. Webcam-based gaze estimation for computer screen interaction. Front Robot AI 2024; 11:1369566. [PMID: 38628652; PMCID: PMC11019238; DOI: 10.3389/frobt.2024.1369566]
Abstract
This paper presents a novel webcam-based approach for gaze estimation on computer screens. Utilizing appearance-based gaze estimation models, the system provides a method for mapping the gaze vector from the user's perspective onto the computer screen. Notably, it determines the user's 3D position in front of the screen using only a 2D webcam, without the need for additional markers or equipment. The study presents a comprehensive comparative analysis, assessing the performance of the proposed method against established eye-tracking solutions. This includes a direct comparison with the purpose-built Tobii Eye Tracker 5, a high-end hardware solution, and the webcam-based GazeRecorder software. In experiments replicating head movements, especially those imitating yaw rotations, the study brings to light the inherent difficulties associated with tracking such motions using 2D webcams. This research introduces a solution by integrating Structure from Motion (SfM) into the Convolutional Neural Network (CNN) model. The study's accomplishments include showcasing the potential for accurate screen gaze tracking with a simple webcam, presenting a novel approach for physical distance computation, and proposing compensation for head movements, laying the groundwork for advancements in real-world gaze estimation scenarios.
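The geometric step this abstract describes, projecting a 3D gaze ray from an estimated eye position onto the screen plane, can be illustrated with a short sketch. This is not the authors' pipeline; the coordinate convention (screen in the z = 0 plane, origin at the top-left corner, x right, y down, millimetre units) and the function names are assumptions for illustration only.

```python
import numpy as np

def gaze_to_screen(eye_pos_mm, gaze_dir, screen_w_mm, screen_h_mm,
                   screen_w_px, screen_h_px):
    """Intersect a gaze ray with the screen plane (z = 0) and return pixel coordinates.

    eye_pos_mm : (3,) eye position in screen coordinates, z > 0 in front of the screen.
    gaze_dir   : (3,) gaze direction vector pointing roughly toward the screen (z < 0).
    """
    eye = np.asarray(eye_pos_mm, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    d = d / np.linalg.norm(d)
    if abs(d[2]) < 1e-9:
        return None                      # ray parallel to the screen plane
    t = -eye[2] / d[2]                   # ray parameter where the ray hits z = 0
    if t <= 0:
        return None                      # gaze points away from the screen
    hit = eye + t * d                    # intersection point in millimetres
    # Convert millimetres to pixels; origin assumed at the top-left corner.
    x_px = hit[0] / screen_w_mm * screen_w_px
    y_px = hit[1] / screen_h_mm * screen_h_px
    return x_px, y_px

# Example: eye 600 mm in front of a 530 x 300 mm, 1920 x 1080 px screen,
# looking slightly down and to the right of the screen centre.
print(gaze_to_screen([265.0, 150.0, 600.0], [0.05, 0.03, -1.0],
                     530.0, 300.0, 1920, 1080))
```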
Affiliation(s)
- Lucas Falch: Institute for the Development of Mechatronic Systems EMS, Eastern Switzerland University of Applied Sciences (OST), Buchs, Switzerland
3. Yanaoka K, Van't Wout F, Saito S, Jarrold C. Prior task experience increases 5-year-old children's use of proactive control: Behavioral and pupillometric evidence. Dev Sci 2021; 25:e13181. [PMID: 34623719; DOI: 10.1111/desc.13181]
Abstract
Children engage cognitive control reactively when they encounter conflicts; however, they can also resolve conflicts proactively. Recent studies have begun to clarify the mechanisms that support the use of proactive control in children; nonetheless, sufficient knowledge has not been accumulated regarding these mechanisms. Using behavioral and pupillometric measures, we tested the novel possibility that 5-year-old children (N = 58) learn to use proactive control via the acquisition of abstract task knowledge that captures regularities of the task. Participants were assigned to either a proactive training group or a control training group. In the proactive training group, participants engaged in a training phase in which using proactive control was encouraged, followed by a test phase using different stimuli in which both proactive and reactive control could be used. In the control training group, participants engaged in a training phase in which both cognitive control strategies could be used, followed by a similarly structured test phase using different stimuli. We demonstrated that children in the control training group responded more quickly and accurately and showed greater cue-related pupil dilation in the test phase than in the training phase. However, there were no differences in response times, accuracy, or pupil dilation between the proactive and control training groups in either the training or the test phase. These findings suggest that prior task experience that goes beyond specific knowledge about the timing of task-goal activation can lead children to engage proactive control more endogenously, even if they are not directly encouraged to do so.
Affiliation(s)
- Kaichi Yanaoka: Graduate School of Education, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Satoru Saito: Graduate School of Education, Kyoto University, Kyoto, Japan
4. Clarfeld LA, Gramling R, Rizzo DM, Eppstein MJ. A general model of conversational dynamics and an example application in serious illness communication. PLoS One 2021; 16:e0253124. [PMID: 34197490; PMCID: PMC8248661; DOI: 10.1371/journal.pone.0253124]
Abstract
Conversation has been a primary means for the exchange of information since ancient times. Understanding patterns of information flow in conversations is a critical step in assessing and improving communication quality. In this paper, we describe COnversational DYnamics Model (CODYM) analysis, a novel approach for studying patterns of information flow in conversations. CODYMs are Markov Models that capture sequential dependencies in the lengths of speaker turns. The proposed method is automated and scalable, and preserves the privacy of the conversational participants. The primary function of CODYM analysis is to quantify and visualize patterns of information flow, concisely summarized over sequential turns from one or more conversations. Our approach is general and complements existing methods, providing a new tool for use in the analysis of any type of conversation. As an important first application, we demonstrate the model on transcribed conversations between palliative care clinicians and seriously ill patients. These conversations are dynamic and complex, taking place amidst heavy emotions, and include difficult topics such as end-of-life preferences and patient values. We use CODYMs to identify normative patterns of information flow in serious illness conversations, show how these normative patterns change over the course of the conversations, and show how they differ in conversations where the patient does or doesn’t audibly express anger or fear. Potential applications of CODYMs range from assessment and training of effective healthcare communication to comparing conversational dynamics across languages, cultures, and contexts with the prospect of identifying universal similarities and unique “fingerprints” of information flow.
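As a rough illustration of the kind of Markov model described above (not the published CODYM implementation, whose turn-length binning and model order may differ), the sketch below bins speaker-turn lengths into three categories and estimates a first-order transition matrix from a sequence of turns.

```python
import numpy as np

def turn_length_states(turn_word_counts, bins=(5, 15)):
    """Map each turn's word count to a discrete state: 0=short, 1=medium, 2=long.
    The bin edges are illustrative, not those used by CODYM."""
    return [int(np.digitize(n, bins)) for n in turn_word_counts]

def transition_matrix(states, n_states=3):
    """Estimate a first-order Markov transition matrix from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0        # avoid division by zero for unseen states
    return counts / row_sums

# Example: word counts of successive speaker turns in one conversation.
turns = [3, 22, 4, 7, 31, 2, 12, 40, 5, 6]
P = transition_matrix(turn_length_states(turns))
print(np.round(P, 2))
```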
Affiliation(s)
- Laurence A. Clarfeld: Department of Computer Science, University of Vermont, Burlington, VT, United States of America
- Robert Gramling: Department of Family Medicine, University of Vermont, Burlington, VT, United States of America
- Donna M. Rizzo: Department of Civil and Environmental Engineering, University of Vermont, Burlington, VT, United States of America; Vermont Complex Systems Center, University of Vermont, Burlington, VT, United States of America
- Margaret J. Eppstein: Department of Computer Science, University of Vermont, Burlington, VT, United States of America; Vermont Complex Systems Center, University of Vermont, Burlington, VT, United States of America
5. RemoteEye: An open-source high-speed remote eye tracker: Implementation insights of a pupil- and glint-detection algorithm for high-speed remote eye tracking. Behav Res Methods 2020; 52:1387-1401. [PMID: 32212086; DOI: 10.3758/s13428-019-01305-2]
Abstract
The increasing employment of eye-tracking technology in different application areas and in vision research has led to an increased need to measure fast eye-movement events. Whereas the cost of commercial high-speed eye trackers (above 300 Hz) is usually in the tens of thousands of EUR, to date only a small number of studies have proposed low-cost solutions. Existing low-cost solutions, however, focus solely on lower frame rates (up to 120 Hz) that might suffice for basic eye tracking, leaving a gap when it comes to the investigation of high-speed saccadic eye movements. In this paper, we present and evaluate a system designed to track such high-speed eye movements, achieving operating frequencies well beyond 500 Hz. This includes methods to effectively and robustly detect and track glints and pupils in the context of high-speed remote eye tracking, which, paired with a geometric eye model, achieved an average gaze-estimation error below 1 degree and an average precision of 0.38 degrees. Moreover, the average undetection rate was only 0.33%. At a total investment of less than 600 EUR, the proposed system represents a competitive and suitable alternative to commercial systems at a tiny fraction of the cost, with the additional advantage that it can be freely tuned by investigators to fit their requirements, independent of eye-tracker vendors.
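A much-simplified sketch of the general dark-pupil/bright-glint approach is shown below; it is not the RemoteEye algorithm (which adds the robustness measures the paper describes), and the threshold values are placeholders that would need tuning per camera and illumination.

```python
import cv2
import numpy as np

def detect_pupil_and_glints(gray, pupil_thresh=40, glint_thresh=220):
    """Return (pupil ellipse, list of glint centres) from a grayscale eye image."""
    # Pupil: dark blob -> inverse threshold, take the largest contour, fit an ellipse.
    _, dark = cv2.threshold(gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pupil = None
    if contours:
        largest = max(contours, key=cv2.contourArea)
        if len(largest) >= 5:                      # fitEllipse needs at least 5 points
            pupil = cv2.fitEllipse(largest)
    # Glints: small bright blobs -> plain threshold, keep contour centroids.
    _, bright = cv2.threshold(gray, glint_thresh, 255, cv2.THRESH_BINARY)
    g_contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    glints = []
    for c in g_contours:
        m = cv2.moments(c)
        if 0 < cv2.contourArea(c) < 200 and m["m00"] > 0:
            glints.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return pupil, glints

# Example usage with a captured frame (file name is hypothetical):
# gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
# print(detect_pupil_and_glints(gray))
```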
6. Carter BT, Luke SG. Best practices in eye tracking research. Int J Psychophysiol 2020; 155:49-62. [PMID: 32504653; DOI: 10.1016/j.ijpsycho.2020.05.010]
Abstract
This guide describes best practices in using eye tracking technology for research in a variety of disciplines. A basic outline of the anatomy and physiology of the eyes and of eye movements is provided, along with a description of the sorts of research questions eye tracking can address. We then explain how eye tracking technology works and what sorts of data it generates, and provide guidance on how to select and use an eye tracker, as well as how to select appropriate eye tracking measures. Challenges to the validity of eye tracking studies are described, along with recommendations for overcoming these challenges. We then outline correct reporting standards for eye tracking studies.
7.
Abstract
In this study I examined the role of the hands in scene perception. In Experiment 1, eye movements during free observation of natural scenes were analyzed. Fixations to faces and hands were compared under several conditions, including scenes with and without faces, with and without hands, and without a person. The hands were either resting (e.g., lying on the knees) or interacting with objects (e.g., holding a bottle). Faces held an absolute attentional advantage, regardless of hand presence. Importantly, fixations to interacting hands were faster and more frequent than those to resting hands, suggesting attentional priority for interacting hands. The interacting-hand advantage could not be attributed to perceptual saliency or to the gaze of the hand owner (i.e., the depicted person) being directed at the interacting hand. Experiment 2 confirmed the interacting-hand advantage in a visual search paradigm with more controlled stimuli. The present results indicate that the key to understanding the role of attention in person perception is the competitive interaction among objects such as faces, hands, and objects interacting with the person.
8. Development of Open-source Software and Gaze Data Repositories for Performance Evaluation of Eye Tracking Systems. Vision (Basel) 2019; 3:vision3040055. [PMID: 31735856; PMCID: PMC6969935; DOI: 10.3390/vision3040055]
Abstract
In this paper, a range of open-source tools, datasets, and software that have been developed for quantitative and in-depth evaluation of eye gaze data quality are presented. Eye tracking systems in contemporary vision research and applications face major challenges due to variable operating conditions such as user distance, head pose, and movements of the eye tracker platform. However, there is a lack of open-source tools and datasets that could be used for quantitatively evaluating an eye tracker’s data quality, comparing performance of multiple trackers, or studying the impact of various operating conditions on a tracker’s accuracy. To address these issues, an open-source code repository named GazeVisual-Lib is developed that contains a number of algorithms, visualizations, and software tools for detailed and quantitative analysis of an eye tracker’s performance and data quality. In addition, a new labelled eye gaze dataset that is collected from multiple user platforms and operating conditions is presented in an open data repository for benchmark comparison of gaze data from different eye tracking systems. The paper presents the concept, development, and organization of these two repositories that are envisioned to improve the performance analysis and reliability of eye tracking systems.
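The data-quality measures that such repositories typically compute, angular accuracy and RMS precision, follow standard definitions; the sketch below uses those general definitions and is not taken from GazeVisual-Lib itself. The pixel pitch and viewing distance defaults are illustrative assumptions.

```python
import numpy as np

def pixels_to_degrees(dx_px, dy_px, px_size_mm, view_dist_mm):
    """Convert a pixel offset to a visual angle in degrees."""
    dist_mm = np.hypot(dx_px, dy_px) * px_size_mm
    return np.degrees(np.arctan2(dist_mm, view_dist_mm))

def accuracy_deg(gaze_px, target_px, px_size_mm=0.27, view_dist_mm=600.0):
    """Mean angular offset between gaze samples and the true target position."""
    off = np.asarray(gaze_px, float) - np.asarray(target_px, float)
    return float(np.mean(pixels_to_degrees(off[:, 0], off[:, 1],
                                           px_size_mm, view_dist_mm)))

def precision_rms_deg(gaze_px, px_size_mm=0.27, view_dist_mm=600.0):
    """RMS of sample-to-sample angular differences (inter-sample precision)."""
    d = np.diff(np.asarray(gaze_px, float), axis=0)
    ang = pixels_to_degrees(d[:, 0], d[:, 1], px_size_mm, view_dist_mm)
    return float(np.sqrt(np.mean(ang ** 2)))

# Example: noisy gaze samples around a fixation target at (960, 540).
rng = np.random.default_rng(0)
samples = rng.normal([960, 540], 5.0, size=(200, 2))
print(accuracy_deg(samples, (960, 540)), precision_rms_deg(samples))
```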
9.
Abstract
Pupil dilation is an effective indicator of cognitive and affective processes. Although several eyetracker systems on the market can provide effective solutions for pupil dilation measurement, there is a lack of tools for processing and analyzing the data provided by these systems. For this reason, we developed CHAP: open-source software written in MATLAB. This software provides a user-friendly graphical user interface for processing and analyzing pupillometry data. Our software creates uniform conventions for the preprocessing and analysis of pupillometry data and provides a quick and easy-to-use tool for researchers interested in pupillometry. To download CHAP or join our mailing list, please visit CHAP's website: http://in.bgu.ac.il/en/Labs/CNL/chap .
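The preprocessing conventions CHAP standardizes are described in the paper itself; the sketch below is a generic Python illustration of two common steps (blink interpolation and subtractive baseline correction), not CHAP's MATLAB code, and the blink/dropout criterion is an assumption.

```python
import numpy as np

def preprocess_pupil(trace, fs, baseline_ms=200):
    """Interpolate blink gaps and baseline-correct one pupil epoch.

    trace : 1-D pupil-size array for one epoch that begins baseline_ms before stimulus onset.
    fs    : sampling rate in Hz.
    """
    x = np.asarray(trace, dtype=float)
    bad = ~np.isfinite(x) | (x <= 0)              # assumed blink/dropout criterion
    if bad.all():
        return None
    idx = np.arange(x.size)
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])   # linear interpolation over gaps
    n_base = max(1, int(baseline_ms / 1000 * fs))      # pre-stimulus baseline window
    return x - x[:n_base].mean()                       # subtractive baseline correction

# Example: 1 s of 250 Hz data with a short blink coded as zeros.
trial = np.concatenate([np.full(50, 4.2), np.zeros(10), np.full(190, 4.5)])
print(preprocess_pupil(trial, fs=250)[:5])
```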
10. The whereabouts of visual attention: Involuntary attentional bias toward the default gaze direction. Atten Percept Psychophys 2017; 79:1666-1673. [PMID: 28500508; DOI: 10.3758/s13414-017-1332-7]
Abstract
This study proposed and verified a new hypothesis on the relationship between gaze direction and visual attention: attentional bias by default gaze direction based on eye-head coordination. We conducted a target identification task in which visual stimuli appeared briefly to the left and right of a fixation cross. In Experiment 1, the direction of the participant's head (aligned with the body) was manipulated to the left, front, or right relative to a central fixation point. In Experiment 2, head direction was manipulated to the left, front, or right relative to the body direction. This manipulation was based on results showing that bias of eye position distribution was highly correlated with head direction. In both experiments, accuracy was greater when the target appeared at a position where the eyes would potentially be directed. Consequently, eye-head coordination influences visual attention. That is, attention can be automatically biased toward the location where the eyes tend to be directed.
11. Nakashima R, Kumada T. Peripersonal versus extrapersonal visual scene information for egocentric direction and position perception. Q J Exp Psychol (Hove) 2017; 71:1090-1099. [PMID: 28326888; DOI: 10.1080/17470218.2017.1310267]
Abstract
When perceiving the visual environment, people simultaneously perceive their own direction and position in the environment (i.e., egocentric spatial perception). This study investigated what visual information in a scene is necessary for egocentric spatial perception. In two perception tasks (the egocentric direction and position perception tasks), observers viewed two static road images presented sequentially. In Experiment 1, the critical manipulation involved the occluded region in the road image: an extrapersonal region (far-occlusion) or a peripersonal region (near-occlusion). Egocentric direction perception was worse in the far-occlusion condition than in the no-occlusion condition, and egocentric position perception was worse in the far- and near-occlusion conditions than in the no-occlusion condition. In Experiment 2, we conducted the same tasks while manipulating the observers' gaze location in the scene: an extrapersonal region (far-gaze), a peripersonal region (near-gaze), or the intermediate region between the two (middle-gaze). Egocentric direction perception performance was best in the far-gaze condition, and egocentric position perception performance did not differ among the gaze location conditions. These results suggest that egocentric direction perception is based on fine visual information about the extrapersonal region in a road landscape, whereas egocentric position perception is based on information about the entire visual scene.
Affiliation(s)
- Ryoichi Nakashima: RIKEN BSI-TOYOTA Collaboration Center, RIKEN, Saitama, Japan; The University of Tokyo, Tokyo, Japan
- Takatsune Kumada: RIKEN BSI-TOYOTA Collaboration Center, RIKEN, Saitama, Japan; Kyoto University, Kyoto, Japan
12. Mitz AR, Chacko RV, Putnam PT, Rudebeck PH, Murray EA. Using pupil size and heart rate to infer affective states during behavioral neurophysiology and neuropsychology experiments. J Neurosci Methods 2017; 279:1-12. [PMID: 28089759; PMCID: PMC5346348; DOI: 10.1016/j.jneumeth.2017.01.004]
Abstract
BACKGROUND: Nonhuman primates (NHPs) are a valuable research model because of their behavioral, physiological and neuroanatomical similarities to humans. In the absence of language, autonomic activity can provide crucial information about cognitive and affective states during single-unit recording, inactivation and lesion studies. Methods standardized for use in humans are not easily adapted to NHPs, and detailed guidance has been lacking.
NEW METHOD: We provide guidance for monitoring heart rate and pupil size in the behavioral neurophysiology setting by addressing the methodological issues, pitfalls and solutions for NHP studies. The methods are based on comparative physiology to establish a rationale for each solution. We include examples from both electrophysiological and lesion studies.
RESULTS: Single-unit recording, pupil responses and heart rate changes represent a range of decreasing temporal resolution, a characteristic that impacts experimental design and analysis. We demonstrate the unexpected result that autonomic measures acquired before and after amygdala lesions are comparable despite disruption of normal autonomic function.
COMPARISON WITH EXISTING METHODS: Species and study-design differences can render standard techniques used in human studies inappropriate for NHP studies. We show how to manage data from the small groups typical of NHP studies, data from the short behavioral trials typical of neurophysiological studies, issues associated with longitudinal studies, and differences in anatomy and physiology.
CONCLUSIONS: Autonomic measurement to infer cognitive and affective states in NHPs is neither off-the-shelf nor onerous. Familiarity with the issues and solutions will broaden the use of autonomic signals in NHP single-unit and lesion studies.
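One routine step implied by this kind of study, aligning a continuous pupil (or heart-rate) trace to trial events and averaging across the short trials typical of neurophysiology, can be sketched as follows. This is a generic illustration, not the authors' analysis code, and the window lengths are arbitrary.

```python
import numpy as np

def event_locked_average(signal, event_samples, fs, pre_s=0.5, post_s=2.0):
    """Cut fixed windows around each event, baseline-correct them, and average.

    signal        : 1-D continuous pupil (or heart-rate) trace.
    event_samples : sample indices of the events of interest (e.g., cue onsets).
    fs            : sampling rate in Hz.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for ev in event_samples:
        if ev - pre >= 0 and ev + post <= len(signal):
            seg = np.asarray(signal[ev - pre: ev + post], dtype=float)
            epochs.append(seg - seg[:pre].mean())   # baseline-correct each epoch
    if not epochs:
        return None
    return np.mean(epochs, axis=0)                  # trial-averaged response

# Example: synthetic 1000 Hz trace with events every 3 s.
fs = 1000
trace = np.random.default_rng(1).normal(size=fs * 10)
events = [3000, 6000, 9000]
print(event_locked_average(trace, events, fs).shape)
```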
Affiliation(s)
- Andrew R Mitz: Section on the Neurobiology of Learning and Memory, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA
- Ravi V Chacko: Section on the Neurobiology of Learning and Memory, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA; Washington University School of Medicine, Saint Louis, MO, USA
- Philip T Putnam: Section on the Neurobiology of Learning and Memory, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA
- Peter H Rudebeck: Section on the Neurobiology of Learning and Memory, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Elisabeth A Murray: Section on the Neurobiology of Learning and Memory, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA
13. Farivar R, Michaud-Landry D. Construction and Operation of a High-Speed, High-Precision Eye Tracker for Tight Stimulus Synchronization and Real-Time Gaze Monitoring in Human and Animal Subjects. Front Syst Neurosci 2016; 10:73. [PMID: 27683545; PMCID: PMC5021695; DOI: 10.3389/fnsys.2016.00073]
Abstract
Measurements of the fast and precise movements of the eye—critical to many vision, oculomotor, and animal behavior studies—can be made non-invasively by video oculography. The protocol here describes the construction and operation of a research-grade video oculography system with ~0.1° precision over the full typical viewing range at over 450 Hz with tight synchronization with stimulus onset. The protocol consists of three stages: (1) system assembly, (2) calibration for both cooperative, and for minimally cooperative subjects (e.g., animals or infants), and (3) gaze monitoring and recording.
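The calibration stage mentioned in the protocol can be illustrated with the common approach of fitting a low-order polynomial that maps raw pupil-centre coordinates to known target positions. The sketch follows that general approach under assumed coordinates; it is not necessarily the exact procedure the authors use.

```python
import numpy as np

def design_matrix(px, py):
    """Second-order polynomial terms of the raw pupil-centre coordinates."""
    px, py = np.asarray(px, float), np.asarray(py, float)
    return np.column_stack([np.ones_like(px), px, py, px * py, px ** 2, py ** 2])

def fit_calibration(pupil_xy, target_xy):
    """Least-squares fit from pupil coordinates to screen coordinates."""
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    coefs, *_ = np.linalg.lstsq(A, np.asarray(target_xy, float), rcond=None)
    return coefs                                  # shape (6, 2): x and y mappings

def apply_calibration(coefs, pupil_xy):
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    return A @ coefs

# Example: 9-point calibration grid with a made-up pupil-to-screen relation plus noise.
rng = np.random.default_rng(2)
targets = np.array([[x, y] for y in (100, 540, 980) for x in (160, 960, 1760)], float)
pupil = targets / 20.0 + rng.normal(0, 0.2, targets.shape)
coefs = fit_calibration(pupil, targets)
print(np.round(apply_calibration(coefs, pupil) - targets, 1))   # residuals in pixels
```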
Affiliation(s)
- Reza Farivar: Department of Ophthalmology, McGill Vision Research Unit, McGill University, Montreal, QC, Canada; Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Danny Michaud-Landry: Department of Ophthalmology, McGill Vision Research Unit, McGill University, Montreal, QC, Canada; Research Institute of the McGill University Health Centre, Montreal, QC, Canada
14.
Abstract
Eye movement analysis is effective for investigating visual perception and cognition. The cost of conducting eye movement studies has decreased as a result of the recent release of low-cost commercial and open-source eye trackers. However, synchronizing visual stimulus presentation with eye movement recording is still difficult, particularly if the eye tracker does not come with a practical application programming interface. This paper introduces a Matlab/Octave toolbox named Sgttoolbox, which works in conjunction with the widely used experiment control library Psychtoolbox to control a cross-platform open-source eye tracker named SimpleGazeTracker, an eye-tracking application of the GazeParser software. Hardware and software requirements for Sgttoolbox and its main functions are described. A test of temporal accuracy showed that the eye movement sampling frequency was stable when stimulus presentation and recording were performed on a single PC, although better performance was obtained when presentation and recording were performed on separate PCs. Transferring the latest eye position from SimpleGazeTracker to the Psychtoolbox script takes 2 to 4 ms on average, which causes a delay in drawing multiple visual stimuli when recording and stimulus presentation are performed on a single PC. When such a transfer delay is not important, Sgttoolbox would be a good choice for Psychtoolbox users who wish to conduct eye-tracking studies.
15.
Abstract
Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. In the present work, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure.
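One of the classic measures covered by such overviews is the string-edit (Levenshtein) distance between scanpaths encoded as sequences of area-of-interest (AOI) labels. The sketch below implements that single measure as an illustration; it does not reproduce the full set of methods the paper compares.

```python
def levenshtein(a, b):
    """Edit distance between two sequences of AOI labels."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(a, b):
    """Normalize to [0, 1], where 1 means identical AOI sequences."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / longest

# Example: two viewers' fixation sequences over AOIs A-D.
print(scanpath_similarity(list("ABBCD"), list("ABCCD")))   # -> 0.8
```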
16.
Abstract
A novel open-source R extension package for general-purpose analysis of eye-tracking results is proposed. Currently supported features include data loading from SMI eye trackers, different methods of fixation detection, and various visualization techniques for raw data and detected fixations (time sequence, scanpath, heatmap, and dynamic visualization). The modular structure of the package and a detailed description of each function provide a convenient way to further extend its functionality. Effective use of the package requires knowledge of the R programming language and environment.
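The fixation-detection step that such packages implement can be illustrated with a minimal dispersion-threshold (I-DT) sketch. It is written in Python rather than R, uses illustrative thresholds, and is not the package's own implementation.

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y : gaze coordinates (e.g., pixels); t : timestamps in seconds.
    Returns a list of (start_time, end_time, centroid_x, centroid_y).
    """
    x, y, t = map(np.asarray, (x, y, t))
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window while its dispersion stays under the threshold.
        while j + 1 < n and ((x[i:j + 2].max() - x[i:j + 2].min()) +
                             (y[i:j + 2].max() - y[i:j + 2].min())) <= max_dispersion:
            j += 1
        if t[j] - t[i] >= min_duration:
            fixations.append((t[i], t[j], x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1
        else:
            i += 1
    return fixations

# Example: a ~200 ms fixation followed by a saccade to a second fixation.
t = np.arange(0, 0.4, 0.004)
x = np.where(t < 0.2, 500 + np.random.default_rng(3).normal(0, 2, t.size), 800)
y = np.full(t.size, 400.0)
print(idt_fixations(x, y, t))
```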
Affiliation(s)
- Pavel A Marmalyuk: Moscow Municipal University of Psychology and Education, Moscow, Russia
17. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behav Res Methods 2015; 46:913-921. [PMID: 24258321; DOI: 10.3758/s13428-013-0422-2]
Abstract
The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
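A minimal usage sketch is shown below. The class and method names (Display, Screen, EyeTracker with calibrate/start_recording/sample/stop_recording) follow the published PyGaze documentation as recalled here; treat them as an approximation, and check the current docs and a constants.py configuration appropriate for the connected tracker brand before running.

```python
# Minimal PyGaze session sketch: calibrate, show a fixation cross, sample gaze.
# Assumes a working PyGaze installation configured (e.g., via constants.py)
# for the connected tracker brand; names follow the PyGaze docs as recalled here.
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker
from pygaze import libtime

disp = Display()                      # open the experiment window
scr = Screen()
tracker = EyeTracker(disp)            # tracker brand selected via the PyGaze settings

tracker.calibrate()                   # run the tracker's calibration routine

scr.draw_fixation(fixtype='cross')    # draw a central fixation cross
disp.fill(scr)
disp.show()

tracker.start_recording()
t0 = libtime.get_time()
while libtime.get_time() - t0 < 2000:  # sample gaze for two seconds (ms)
    gaze_pos = tracker.sample()        # (x, y) in display coordinates
tracker.stop_recording()

tracker.close()
disp.close()
```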