1. Upasani S, Srinivasan D, Zhu Q, Du J, Leonessa A. Eye-Tracking in Physical Human-Robot Interaction: Mental Workload and Performance Prediction. Human Factors 2024; 66:2104-2119. [PMID: 37793896] [DOI: 10.1177/00187208231204704]
Abstract
BACKGROUND In Physical Human-Robot Interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

OBJECTIVE The aim of this study was to test the sensitivity and reliability of eye-tracking measures to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation.

METHODS Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.

RESULTS Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures predicted the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.

CONCLUSION The sensitivity and reliability of the eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely strong contributors to workload.

APPLICATION Future collaborative robots can adapt to the human's cognitive state and skill level, measured using eye-tracking measures of workload and visual attention.
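Stationary Gaze Entropy, the headline workload measure here, is commonly computed as the Shannon entropy of the distribution of fixation locations over spatial bins. A minimal sketch of that standard definition (the bin size and example coordinates below are illustrative, not taken from this study):

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixations, bin_size=100):
    """Shannon entropy (bits) of the spatial distribution of fixations.

    fixations: iterable of (x, y) fixation centroids in pixels.
    bin_size: side length of the square spatial bins.
    """
    bins = Counter((int(x // bin_size), int(y // bin_size)) for x, y in fixations)
    n = sum(bins.values())
    h = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return h + 0.0  # normalise -0.0 to 0.0 for the single-bin case

# All fixations in one bin -> zero entropy (highly focused gaze):
print(stationary_gaze_entropy([(10, 10), (20, 15), (30, 40)]))    # 0.0
# Fixations spread evenly over four bins -> higher entropy (dispersed gaze):
print(stationary_gaze_entropy([(10, 10), (150, 10), (10, 150), (150, 150)]))  # 2.0
```

Higher entropy indicates gaze spread over more of the scene, which is the sense in which the measure tracks visual monitoring demand.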
Affiliation(s)
- Qi Zhu
- National Institute of Standards and Technology, Boulder, CO, USA
- Jing Du
- University of Florida, Gainesville, FL, USA

2. Hoogerbrugge AJ, Strauch C, Böing S, Nijboer TCW, Van der Stigchel S. Just-in-Time Encoding Into Visual Working Memory Is Contingent Upon Constant Availability of External Information. J Cogn 2024; 7:39. [PMID: 38706717] [PMCID: PMC11067970] [DOI: 10.5334/joc.364]
Abstract
Humans maintain an intricate balance between storing information in visual working memory (VWM) and just-in-time sampling of the external world, rooted in a trade-off between the cost of maintaining items in VWM and the cost of retrieving information as it is needed. Previous studies have consistently shown that one prerequisite of just-in-time sampling is a high degree of availability of external information: introducing a delay before information could be accessed led participants to rely less on the external world and more on VWM. However, these studies manipulated availability in such a manner that the cost of sampling was stable and predictable. It remains unclear whether participants become less reliant on external information when the cost of sampling that information is harder to factor in. In two experiments, participants copied an example layout from the left to the right side of the screen. In Experiment 1, intermittent occlusion of the example layout led participants to attempt to encode more items per inspection than when the layout was constantly available, but this did not consistently result in more correct placements. These findings could, however, be explained by inherent differences in how long the example layout could be viewed. In Experiment 2, the example layout therefore only became available after a gaze-contingent delay, which was either constant or variable. Here, the introduction of any delay increased VWM load compared to no delay, although the degree of variability in the delay did not alter behaviour. These results reaffirm that the timing of VWM engagement is dynamic, and suggest that any disruption to the continuous availability of external information, rather than the unpredictability of that availability, is the main driver of increased VWM usage.
Affiliation(s)
- Alex J. Hoogerbrugge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Christoph Strauch
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Sanne Böing
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Tanja C. W. Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands

3. Drews M, Dierkes K. Strategies for enhancing automatic fixation detection in head-mounted eye tracking. Behav Res Methods 2024. [PMID: 38594440] [DOI: 10.3758/s13428-024-02360-0]
Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking.
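The "standard thresholding algorithms" that these strategies enhance classify samples by gaze speed: slow runs become fixation candidates. A minimal velocity-threshold (I-VT-style) sketch, with illustrative thresholds rather than the authors' settings:

```python
def detect_fixations(timestamps, speeds, max_speed=30.0, min_duration=0.06):
    """Minimal I-VT-style fixation detector.

    timestamps: sample times in seconds.
    speeds: gaze speed per sample in deg/s (precomputed).
    Samples below max_speed are fixation candidates; candidate runs
    shorter than min_duration seconds are discarded.
    """
    fixations, start = [], None
    for i, v in enumerate(speeds):
        if v < max_speed and start is None:
            start = i                      # a fixation candidate begins
        elif v >= max_speed and start is not None:
            if timestamps[i - 1] - timestamps[start] >= min_duration:
                fixations.append((timestamps[start], timestamps[i - 1]))
            start = None
    if start is not None and timestamps[-1] - timestamps[start] >= min_duration:
        fixations.append((timestamps[start], timestamps[-1]))
    return fixations

# 10 ms sampling: 8 slow samples, a fast saccade, then a too-short slow run.
ts = [i * 0.01 for i in range(16)]
sp = [5.0] * 8 + [300.0, 300.0] + [5.0] * 6
print(detect_fixations(ts, sp))  # one fixation: (ts[0], ts[7])
```

In the static case this works because eye stillness implies gaze stillness; the paper's contribution is precisely that during head motion the gaze speed must first be compensated (e.g., via optic flow) before such thresholds are meaningful.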
Affiliation(s)
- Michael Drews
- Pupil Labs, Sanderstraße 28, 12047, Berlin, Germany
- Kai Dierkes
- Pupil Labs, Sanderstraße 28, 12047, Berlin, Germany

4. Nyström M, Andersson R, Niehorster DC, Hessels RS, Hooge ITC. What is a blink? Classifying and characterizing blinks in eye openness signals. Behav Res Methods 2024; 56:3280-3299. [PMID: 38424292] [PMCID: PMC11133197] [DOI: 10.3758/s13428-023-02333-9]
Abstract
Blinks, the closing and opening of the eyelids, are used in a wide array of fields where human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can be estimated only indirectly from this signal, since it does not explicitly contain information about the eyelid position. We ask whether blinks detected from an eye openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks), and how robust blink detection is when data quality is low. In terms of rate, there was an almost-perfect overlap between EO and PS blinks (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range, where data quality was high, and a high overlap (F1 score: 0.94) when the head was at the edge of the tracking range, where data quality was worse. When there was a difference in blink rate between EO and PS blinks, it was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer for EO blinks than for PS blinks. Moreover, the dynamics of EO blinks were similar to results from previous literature. We conclude that the eye openness signal, together with our proposed blink detection algorithm, provides an advantageous method to detect and describe blinks in greater detail.
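The F1 scores reported here summarize how well one algorithm's blink events match the other's, combining precision and recall. A minimal sketch, with illustrative event counts (not the study's data):

```python
def f1_score(true_positives, false_positives, false_negatives):
    """Harmonic mean of precision and recall for matched blink events."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# E.g., 97 blinks detected by both signals, 2 spurious, 2 missed:
print(round(f1_score(97, 2, 2), 2))  # 0.98
```

An F1 of 0.98 thus corresponds to near-total agreement, with only a couple of unmatched events in either direction per hundred blinks.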
Affiliation(s)
- Marcus Nyström
- Lund University Humanities Lab, Box 201, SE-221 00, Lund, Sweden
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Box 201, SE-221 00, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584, CS, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584, CS, Utrecht, The Netherlands

5. Kupers ER, Kim I, Grill-Spector K. Rethinking simultaneous suppression in visual cortex via compressive spatiotemporal population receptive fields. bioRxiv 2024 (preprint). [PMID: 37461470] [PMCID: PMC10350247] [DOI: 10.1101/2023.06.24.546388]
Abstract
When multiple visual stimuli are presented simultaneously in the receptive field, the neural response is suppressed compared to presenting the same stimuli sequentially. The prevailing hypothesis suggests that this suppression is due to competition among multiple stimuli for limited resources within receptive fields, governed by task demands. However, it is unknown how stimulus-driven computations may give rise to simultaneous suppression. Using fMRI, we find simultaneous suppression in single voxels, which varies with both stimulus size and timing, and progressively increases up the visual hierarchy. Using population receptive field (pRF) models, we find that compressive spatiotemporal summation rather than compressive spatial summation predicts simultaneous suppression, and that increased simultaneous suppression is linked to larger pRF sizes and stronger compressive nonlinearities. These results necessitate a rethinking of simultaneous suppression as the outcome of stimulus-driven compressive spatiotemporal computations within pRFs, and open new opportunities to study visual processing capacity across space and time.
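Why a compressive summation nonlinearity predicts simultaneous suppression can be seen in a toy one-dimensional version of the pRF model (an illustrative sketch, not the authors' fitted spatiotemporal model): with an exponent n < 1 applied to the summed linear drive, the response to two stimuli shown together is subadditive.

```python
def css_response(drives, n=0.5):
    """Toy compressive-summation pRF response: (sum of linear drives)**n.

    drives: linear pRF drive from each stimulus (stimulus contrast
    weighted by the pRF's spatial Gaussian); n < 1 is compressive.
    """
    return sum(drives) ** n

# Two identical stimuli inside the pRF, each with a linear drive of 1.0.
sequential = css_response([1.0]) + css_response([1.0])  # shown one at a time
simultaneous = css_response([1.0, 1.0])                 # shown together
print(sequential > simultaneous)  # True: simultaneous suppression
```

The stronger the compression (smaller n) and the larger the pRF (more stimuli falling inside it), the larger this sequential-minus-simultaneous gap, consistent with the suppression increasing up the visual hierarchy.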
Affiliation(s)
- Insub Kim
- Department of Psychology, Stanford University, CA, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, CA, USA

6. Kok EM, Niehorster DC, van der Gijp A, Rutgers DR, Auffermann WF, van der Schaaf M, Kester L, van Gog T. The effects of gaze-display feedback on medical students' self-monitoring and learning in radiology. Adv Health Sci Educ Theory Pract 2024. [PMID: 38555550] [DOI: 10.1007/s10459-024-10322-6]
Abstract
Self-monitoring is essential for effectively regulating learning, but difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves posttest performance and self-monitoring of medical students who learn to detect nodules in radiographs. In 78 medical students, we investigated the effects of (1) search displays, showing which part of the image the participant had searched, and (2) decision displays, showing which parts of the image had received prolonged attention. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no-feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for posttest performance, monitoring accuracy (the absolute difference between participants' estimated and actual test performance), completeness of viewing behavior, self-efficacy, or perceived competence. Participants receiving search displays reported greater feedback utilization than participants receiving decision displays, and found the feedback more useful when the gaze data displayed were precise and accurate. As the completeness of search was not related to posttest performance, search displays may not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address whether and when gaze displays can support learning.
Affiliation(s)
- Ellen M Kok
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Anouk van der Gijp
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Dirk R Rutgers
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marieke van der Schaaf
- Utrecht Center for Research and Development in Health Professions Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Liesbeth Kester
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Tamara van Gog
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands

7. Osanami Törngren S, Schütze C, Van Belle E, Nyström M. "We choose this CV because we choose diversity" - What do eye movements say about the choices recruiters make? Frontiers in Sociology 2024; 9:1222850. [PMID: 38515653] [PMCID: PMC10954785] [DOI: 10.3389/fsoc.2024.1222850]
Abstract
Introduction A large body of research has established a consensus that racial discrimination in CV screening occurs and persists. Nevertheless, we still know very little about how recruiters look at CVs and how this is connected to discriminatory patterns. This article examines the way recruiters view and select CVs and how they reason about their selection choices, as a first step in unpacking the patterns of hiring discrimination. Specifically, we explore how race and ethnicity signaled through the CV matter, and how recruiters reason about the choices they make.

Methods We recorded data from 40 respondents (20 pairs), all real-life recruiters with experience in recruiting diverse employees at three large Swedish-based firms in the finance and retail sectors in two large cities. The participating firms all value diversity, equity, and inclusion in their recruitment. The task was to individually rate 10 fictitious CVs in which race (signaled by a face image) and ethnicity (signaled by a name) were systematically manipulated, select the top three candidates, and then discuss the choices in pairs to decide on a single top candidate. We examined whether respondents' choices were associated with the parts of the CV they looked at, and how they reasoned about and justified their choices through dialog.

Results Our results show that non-White CVs were rated higher than White CVs. While we do not observe any statistically significant differences in the ratings between different racial groups, we see a statistically significant preference for Chinese over Iraqi names. There were no significant differences in time spent looking at the CV across racial groups, but respondents looked longer at Polish names than at Swedish names when presented next to a White face. The dialog data reveal how respondents assess different CVs by making assumptions about the candidates' job and organizational fit from the limited information on the CVs, especially when the candidates' qualifications are evaluated to be equal.
Affiliation(s)
- Sayaka Osanami Törngren
- Department of Global Political Studies, Malmö Institute for Studies of Migration, Diversity, and Welfare, Malmö University, Malmö, Sweden
- Carolin Schütze
- Department of Global Political Studies, Malmö Institute for Studies of Migration, Diversity, and Welfare, Malmö University, Malmö, Sweden
- Eva Van Belle
- Brussels Institute for Social and Population Studies (BRISPO), Vrije Universiteit Brussel, Brussels, Belgium

8. Hagenaar DA, Bindels-de Heus KGCB, van Gils MM, van den Berg L, Ten Hoopen LW, Affourtit P, Pel JJM, Joosten KFM, Hillegers MHJ, Moll HA, de Wit MCY, Dieleman GC, Mous SE. Outcome measures in Angelman syndrome. J Neurodev Disord 2024; 16:6. [PMID: 38429713] [PMCID: PMC10905876] [DOI: 10.1186/s11689-024-09516-1]
Abstract
BACKGROUND Angelman syndrome (AS) is a rare neurodevelopmental disorder characterized by severe intellectual disability, little to no expressive speech, visual and motor problems, emotional/behavioral challenges, and a tendency towards hyperphagia and weight gain. The characteristics of AS make it difficult to measure these children's functioning with standard clinical tests. Feasible outcome measures are needed to measure current functioning and change over time, in clinical practice and clinical trials.

AIM Our first aim is to assess the feasibility of several functional tests. We target domains of neurocognitive functioning and physical growth using the following measurement methods: eye-tracking, functional near-infrared spectroscopy (fNIRS), indirect calorimetry, bio-impedance analysis (BIA), and BOD POD (air-displacement plethysmography). Our second aim is to explore the results of the above measures, in order to better understand the AS phenotype.

METHODS The study sample consisted of 28 children with AS aged 2-18 years. We defined an outcome measure as feasible when (1) at least 70% of participants successfully finished the measurement and (2) at least 60% of those participants had acceptable data quality. Adaptations to the test procedure and reasons for early termination were noted. Parents rated acceptability and importance and were invited to make recommendations to increase feasibility. The results of the measures were explored.

RESULTS Outcome measures obtained with eye-tracking and BOD POD met the definition of feasibility, while fNIRS, indirect calorimetry, and BIA did not. The most important reasons for early termination of measurements were signs of protest, inability to sit still, and (specific to eye-tracking) poor or failed calibration. Post-calibration was often applied to obtain valid eye-tracking results. Parents rated the BOD POD as most acceptable and fNIRS as least acceptable for their child. All outcome measures were rated as important. Exploratory results indicated longer reaction times to highly salient visual stimuli (eye-tracking) as well as a high body fat percentage (BOD POD).

CONCLUSIONS Eye-tracking and BOD POD are feasible measurement methods for children with AS. Eye-tracking was successfully used to assess visual orienting functions in the current study and (with some practical adaptations) can potentially be used to assess other outcomes as well. BOD POD was successfully used to examine body composition.

TRIAL REGISTRATION Registered on 23-04-2020 under number 'NL8550' in the Dutch Trial Register: https://onderzoekmetmensen.nl/en/trial/23075.
Affiliation(s)
- Doesjka A Hagenaar
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands
- Karen G C B Bindels-de Heus
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands
- Maud M van Gils
- Vestibular and Oculomotor Research Group, Department of Neuroscience, Erasmus Medical Center, Rotterdam, The Netherlands
- Louise van den Berg
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Leontine W Ten Hoopen
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Philine Affourtit
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Dietetics, Erasmus MC, Rotterdam, The Netherlands
- Johan J M Pel
- Vestibular and Oculomotor Research Group, Department of Neuroscience, Erasmus Medical Center, Rotterdam, The Netherlands
- Koen F M Joosten
- Division of Pediatric ICU, Department of Neonatal and Pediatric ICU, Erasmus MC, Rotterdam, The Netherlands
- Manon H J Hillegers
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Henriëtte A Moll
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands
- Marie-Claire Y de Wit
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Neurology and Paediatric Neurology, Erasmus MC, Rotterdam, The Netherlands
- Gwen C Dieleman
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Sabine E Mous
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands

9. Niehorster DC, Hessels RS, Benjamins JS, Nyström M, Hooge ITC. GlassesValidator: A data quality tool for eye tracking glasses. Behav Res Methods 2024; 56:1476-1484. [PMID: 37326770] [PMCID: PMC10991001] [DOI: 10.3758/s13428-023-02105-5]
Abstract
According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable quick and easy determination of accuracy, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
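The accuracy and precision such a tool reports are conventionally defined as the mean angular offset of gaze from the validation target, and the RMS of sample-to-sample angular distances during a fixation, respectively. A minimal sketch of those standard definitions (not necessarily this tool's exact implementation; the numbers are illustrative):

```python
import math

def accuracy_deg(offsets):
    """Accuracy: mean angular offset (deg) between gaze and target."""
    return sum(offsets) / len(offsets)

def precision_rms_deg(gaze_xy):
    """Precision: RMS of angular distances between successive samples.

    gaze_xy: (x, y) gaze positions in degrees during one fixation.
    """
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(gaze_xy, gaze_xy[1:])]
    return math.sqrt(sum(s * s for s in steps) / len(steps))

# Angular offsets of three validation fixations from their poster targets:
print(round(accuracy_deg([0.5, 0.6, 0.4]), 2))  # 0.5
# Gaze samples (deg) during one fixation; a lower RMS means higher precision:
gaze = [(0.5, 0.1), (0.52, 0.1), (0.48, 0.12), (0.5, 0.1)]
print(round(precision_rms_deg(gaze), 3))  # 0.033
```

Note that accuracy measures systematic offset while precision measures sample-to-sample noise; a recording can be precise yet inaccurate, which is why both are reported.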
Affiliation(s)
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute & Social, Health and Organisational Psychology, Utrecht University, Utrecht, Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands

10. Huang Z, Duan X, Zhu G, Zhang S, Wang R, Wang Z. Assessing the data quality of AdHawk MindLink eye-tracking glasses. Behav Res Methods 2024. [PMID: 38168041] [DOI: 10.3758/s13428-023-02310-2]
Abstract
Most commercially available eye-tracking devices rely on video cameras and image-processing algorithms to track gaze. Nevertheless, emerging technologies are entering the field, making high-speed, cameraless eye tracking more accessible. In this study, a series of tests was conducted to compare the data quality of MEMS-based eye-tracking glasses (AdHawk MindLink) with three widely used camera-based eye-tracking devices (EyeLink Portable Duo, Tobii Pro Glasses 2, and SMI Eye Tracking Glasses 2). The data quality measures assessed in these tests included accuracy, precision, data loss, and system latency. The results suggest that, overall, the data quality of the eye-tracking glasses was lower than that of the desktop EyeLink Portable Duo eye tracker. Among the eye-tracking glasses, the accuracy and precision of the MindLink were either higher than or on par with those of the Tobii Pro Glasses 2 and SMI Eye Tracking Glasses 2. The system latency of the MindLink was approximately 9 ms, significantly lower than that of camera-based eye-tracking devices found in VR goggles. These results suggest that the MindLink eye-tracking glasses show promise for research applications where high sampling rates and low latency are preferred.
Affiliation(s)
- Zehao Huang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Xiaoting Duan
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Gancheng Zhu
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Shuai Zhang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Rong Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China
- Zhiguo Wang
- Center for Psychological Sciences, Zhejiang University, 148 Tianmushan Rd., Hangzhou, 310028, China

11. Serpa E, Alecka M, Ceple I, Krumina G, Svede A, Kassaliete E, Goliskina V, Volberga L, Berzina A, Mikelsone R, Ozola E, Toloka D, Ruza T, Klavinska A, Vasiljeva S, Koleda M. The impact of eye dominance on fixation stability in school-aged children. J Eye Mov Res 2023; 16(3). [PMID: 38370528] [PMCID: PMC10874631] [DOI: 10.16910/jemr.16.3.6]
Abstract
The aim of the study was to analyze the stability of dominant- and non-dominant-eye fixations, as well as the influence of development on fixation stability. The study analyzed fixation stability in 280 school-aged children aged 7 to 12 years. Fixation stability was quantified by calculating the bivariate contour ellipse area (BCEA). During the fixation task, eye movements were recorded with a Tobii Pro Fusion eye-tracking device at a 250 Hz sampling frequency. The results indicate that the fixation stability of the dominant and non-dominant eyes, as well as the fixation stability of each eye regardless of dominance, improves as children grow older. For 7- and 8-year-old children, fixation in the dominant eye was significantly more stable than in the non-dominant eye, while in older children there was no significant difference in fixation stability between the dominant and non-dominant eye.
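BCEA is conventionally computed from the standard deviations and correlation of the horizontal and vertical gaze positions, as BCEA = 2kπ·σx·σy·√(1−ρ²) with k = −ln(1−P) for the chosen proportion P of fixation samples. A minimal sketch of that standard formula (the proportion and gaze values below are illustrative; the study's exact settings are not given in the abstract):

```python
import math
from statistics import mean, pstdev

def bcea(xs, ys, proportion=0.682):
    """Bivariate contour ellipse area covering `proportion` of gaze samples.

    BCEA = 2 * k * pi * sd_x * sd_y * sqrt(1 - rho**2),  k = -ln(1 - P)
    xs, ys: horizontal and vertical gaze positions (e.g., in degrees).
    """
    k = -math.log(1 - proportion)
    sx, sy = pstdev(xs), pstdev(ys)
    mx, my = mean(xs), mean(ys)
    # Pearson correlation between horizontal and vertical positions:
    rho = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)
    return 2 * k * math.pi * sx * sy * math.sqrt(1 - rho ** 2)

# A tighter scatter of gaze positions yields a smaller BCEA (more stable):
steady = [0.0, 0.1, -0.1, 0.05, -0.05], [0.02, -0.1, 0.1, 0.0, -0.02]
unsteady = [2 * v for v in steady[0]], [2 * v for v in steady[1]]
print(bcea(*steady) < bcea(*unsteady))  # True
```

Because the area scales with the product of the two spatial standard deviations, doubling the gaze scatter quadruples the BCEA, which is what makes it a sensitive index of fixation stability.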
12. Kaduk T, Goeke C, Finger H, König P. Webcam eye tracking close to laboratory standards: Comparing a new webcam-based system and the EyeLink 1000. Behav Res Methods 2023. [PMID: 37821751] [DOI: 10.3758/s13428-023-02237-8]
Abstract
This paper compares a new webcam-based eye-tracking system, integrated into the Labvanced platform for online experiments, to a "gold standard" lab-based eye tracker (EyeLink 1000 - SR Research). Specifically, we simultaneously recorded data with both eye trackers in five different tasks, analyzing their real-time performance. These tasks were a subset of a standardized test battery for eye trackers, including a Large Grid task, Smooth Pursuit eye movements, viewing natural images, and two Head Movements tasks (roll, yaw). The results show that the webcam-based system achieved an overall accuracy of 1.4° and a precision of 1.1° (standard deviation (SD) across subjects), an error about 0.5° larger than that of the EyeLink system. Interestingly, both accuracy (1.3°) and precision (0.9°) were slightly better for centrally presented targets, the region of interest in many psychophysical experiments. Remarkably, the correlation of raw gaze samples between the EyeLink and the webcam-based system was about 90% for the Large Grid task and about 80% for Free View and Smooth Pursuit. Overall, these results put the performance of the webcam-based system roughly on par with mobile eye-tracking devices (Ehinger et al. PeerJ, 7, e7086, 2019; Tonsen et al., 2020) and demonstrate substantial improvement compared to existing webcam eye-tracking solutions (Papoutsaki et al., 2017).
Affiliation(s)
- Tobiasz Kaduk
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Caspar Goeke
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Holger Finger
- Research and Development Division, Scicovery GmbH, Paderborn, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
13
Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2023; 7:165. [PMID: 37274451 PMCID: PMC10238820 DOI: 10.12688/wellcomeopenres.17856.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/05/2023] [Indexed: 07/22/2023] Open
Abstract
Background: The environments that we live in affect our ability to recognise objects: recognition is facilitated when objects appear in expected (congruent) locations compared to unexpected (incongruent) locations. However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are affected by the environment. In this experiment, we examine the impact real-world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment, a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired-samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence will facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher-level visual and semantic information will be represented earlier for congruent scenes than for incongruent scenes. By collecting mEEG data while participants explore a real-world environment, we will be able to determine the impact of a natural context on object recognition and on the different processing stages of object recognition.
Affiliation(s)
- Alexandra Krugliak
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
14
Poli F, Ghilardi T, Mars RB, Hinne M, Hunnius S. Eight-Month-Old Infants Meta-Learn by Downweighting Irrelevant Evidence. Open Mind (Camb) 2023; 7:141-155. [PMID: 37416070 PMCID: PMC10320826 DOI: 10.1162/opmi_a_00079] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Accepted: 04/06/2023] [Indexed: 07/08/2023] Open
Abstract
Infants learn to navigate the complexity of the physical and social world at an outstanding pace, but how they accomplish this learning is still largely unknown. Recent advances in human and artificial intelligence research propose that a key feature to achieving quick and efficient learning is meta-learning, the ability to make use of prior experiences to learn how to learn better in the future. Here we show that 8-month-old infants successfully engage in meta-learning within very short timespans after being exposed to a new learning environment. We developed a Bayesian model that captures how infants attribute informativity to incoming events, and how this process is optimized by the meta-parameters of their hierarchical models over the task structure. We fitted the model with infants' gaze behavior during a learning task. Our results reveal how infants actively use past experiences to generate new inductive biases that allow future learning to proceed faster.
Affiliation(s)
- Francesco Poli
- Donders Center for Cognition, Radboud University Nijmegen, Nijmegen, The Netherlands
- Tommaso Ghilardi
- Donders Center for Cognition, Radboud University Nijmegen, Nijmegen, The Netherlands
- Rogier B. Mars
- Donders Center for Cognition, Radboud University Nijmegen, Nijmegen, The Netherlands
- Nuffield Department of Clinical Neurosciences, Wellcome Centre for Integrative Neuroimaging, FMRIB, University of Oxford, John Radcliffe Hospital, Headington, Oxford, UK
- Max Hinne
- Donders Center for Cognition, Radboud University Nijmegen, Nijmegen, The Netherlands
- Sabine Hunnius
- Donders Center for Cognition, Radboud University Nijmegen, Nijmegen, The Netherlands
15
Park SY, Holmqvist K, Niehorster DC, Huber L, Virányi Z. How to improve data quality in dog eye tracking. Behav Res Methods 2023; 55:1513-1536. [PMID: 35680764 PMCID: PMC10250523 DOI: 10.3758/s13428-022-01788-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/02/2022] [Indexed: 11/08/2022]
Abstract
Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated if and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared how they blinked, and examined how the differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with these tracking methods, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of dog data led to larger differences in how two different event-detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations help future dog eye-tracking studies acquire quality data to enable robust comparisons of visual cognition between dogs and humans.
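Why algorithm choice matters more for noisy data can be seen in even the simplest event classifier. A minimal velocity-threshold (I-VT-style) sketch (the 30 deg/s threshold is illustrative, not from the paper): noise inflates point-to-point velocity, so the same threshold fragments fixations in noisier recordings.

```python
import numpy as np

def ivt_is_fixation(x_deg, y_deg, t_s, vel_thresh=30.0):
    """Label each inter-sample interval as fixation (True) when the
    point-to-point angular velocity stays below vel_thresh (deg/s)."""
    dt = np.diff(t_s)
    velocity = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt
    return velocity < vel_thresh
```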
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Kenneth Holmqvist
- Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland
- Department of Psychology, Regensburg University, Regensburg, Germany
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Zsófia Virányi
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
16
Musso MF, Moyano S, Rico-Picó J, Conejero Á, Ballesteros-Duperón MÁ, Cascallar EC, Rueda MR. Predicting Effortful Control at 3 Years of Age from Measures of Attention and Home Environment in Infancy: A Machine Learning Approach. CHILDREN (BASEL, SWITZERLAND) 2023; 10:982. [PMID: 37371215 DOI: 10.3390/children10060982] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Revised: 05/26/2023] [Accepted: 05/29/2023] [Indexed: 06/29/2023]
Abstract
Effortful control (EC) is a dimension of temperament that encompasses individual differences in self-regulation and the control of reactivity. Much research suggests that EC has a strong foundation in the development of executive attention, but increasing evidence also shows a significant contribution of the rearing environment to individual differences in EC. The aim of the current study was to predict the development of EC at 36 months of age from early attentional and environmental measures taken in infancy, using a machine-learning approach. A sample of 78 infants participated in a longitudinal study with three waves of data collection at 6, 9, and 36 months of age. Attentional tasks were administered at 6 months of age, with two additional measures (one attentional measure and one self-restraint measure) collected at 9 months of age. Parents reported household environment variables during wave 1, and their child's EC at 36 months. A machine-learning algorithm was implemented to identify children with low EC scores at 36 months of age. An "attention only" model showed greater predictive sensitivity than the "environment only" model. However, a model including both attentional and environmental variables was able to classify the groups (Low-EC vs. Average-to-High EC) with 100% accuracy. Sensitivity analyses indicate that socio-economic variables, together with attention-control processes at 6 months and self-restraint capacity at 9 months, are the most important predictors of EC. Results suggest a foundational role of executive attention processes in the development of EC, in complex interaction with the household environment, and provide a new tool to identify early markers of socio-emotional regulation development.
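For reference, the classification metrics reported in studies like this (and in the Upasani et al. entry heading this page) reduce to simple confusion-matrix counts. A generic sketch, with function name and label coding illustrative:

```python
import numpy as np

def sensitivity_accuracy(y_true, y_pred, positive=1):
    """Sensitivity (recall on the positive class, e.g. Low-EC) and
    overall accuracy of a binary classifier."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))  # true positives
    fn = np.sum((y_true == positive) & (y_pred != positive))  # missed positives
    sensitivity = tp / (tp + fn)
    accuracy = np.mean(y_true == y_pred)
    return sensitivity, accuracy
```

With a rare group such as Low-EC, sensitivity is the more informative number, since a classifier can reach high accuracy while missing most positives.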
Affiliation(s)
- Mariel F Musso
- Department of Experimental Psychology, University of Granada, 18071 Granada, Spain
- Interdisciplinary Center for Research in Mathematical and Experimental Psychology (CIIPME), National Council for Scientific and Technical Research (CONICET), Ciudad Autónoma de Buenos Aires 1040, Argentina
- Department of Psychology, Faculty of Health Sciences, Universidad Argentina de la Empresa (UADE), Ciudad Autónoma de Buenos Aires 1073, Argentina
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
- Sebastián Moyano
- Department of Experimental Psychology, University of Granada, 18071 Granada, Spain
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
- Josué Rico-Picó
- Department of Experimental Psychology, University of Granada, 18071 Granada, Spain
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
- Ángela Conejero
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
- Department of Educational and Developmental Psychology, University of Granada, 18071 Granada, Spain
- M Ángeles Ballesteros-Duperón
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
- Department of Psychobiology, University of Granada, 18071 Granada, Spain
- Eduardo C Cascallar
- Faculty of Psychology and Educational Sciences, KU Leuven, 3000 Leuven, Belgium
- M Rosario Rueda
- Department of Experimental Psychology, University of Granada, 18071 Granada, Spain
- Mind, Brain and Behavior Research Center, University of Granada, 18071 Granada, Spain
17
Moyano S, Rico-Picó J, Conejero Á, Hoyo Á, Ballesteros-Duperón MDLÁ, Rueda MR. Influence of the environment on the early development of attentional control. Infant Behav Dev 2023; 71:101842. [PMID: 37187034 DOI: 10.1016/j.infbeh.2023.101842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 04/13/2023] [Accepted: 05/08/2023] [Indexed: 05/17/2023]
Abstract
The control of visual attention is key to learning and has a foundational role in the development of self-regulated behavior. Basic attention-control skills emerge early in life and show a protracted development across childhood. Prior research suggests that attentional development is influenced by environmental factors in early and late childhood. However, much less information is available about the impact of the early environment on emerging endogenous attention skills during infancy. In the current study, we aimed to test the impact of parental socioeconomic status (SES) and home environment (chaos) on the emerging control of orienting in a sample of typically developing infants. A group of 142 (73 female) 6-month-old infants was longitudinally tested at 6, 9 (n = 122; 60 female) and 16-18 (n = 91; 50 female) months of age using the gap-overlap paradigm. Median saccade latency (mdSL) and disengagement failure (DF) were computed as dependent variables for both overlap and gap conditions. In addition, composite scores for a Disengagement Cost Index (DCI) and a Disengagement Failure Index (DFI) were computed from the mdSL and DF of each condition, respectively. Families reported SES and chaos in the first and last follow-up sessions. Using linear mixed models with maximum likelihood (ML) estimation, we found a longitudinal decrease in mdSL in the gap but not in the overlap condition, while DF decreased with age independently of the experimental condition. Concerning early environmental factors, the SES index, parental occupation, and chaos at 6 months showed negative correlations with DFI at 16-18 months, although in the first case the correlation was only marginally significant. Hierarchical regression models implementing ML showed that both SES and chaos at 6 months significantly predicted a lower DFI at 16-18 months. Results show a longitudinal progression of endogenous orienting between infancy and toddlerhood: with age, increased endogenous control of orienting is displayed in contexts where visual disengagement is facilitated, whereas visual orienting involving attention disengagement under visual competition does not change with age. Moreover, these attentional mechanisms of endogenous control seem to be modulated by the individual's early experiences with the environment.
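The composite indices here are built from condition-wise statistics of the gap-overlap paradigm. As a sketch only (the exact composite formula is an assumption, not taken from the paper), a disengagement cost can be expressed as the overlap-minus-gap difference in median saccade latency:

```python
import numpy as np

def disengagement_cost(latencies_overlap_ms, latencies_gap_ms):
    """Median saccade latency with a competing central stimulus still
    present (overlap) minus median latency when the central stimulus
    disappears before the target appears (gap); larger values indicate
    a higher cost of disengaging attention."""
    return np.median(latencies_overlap_ms) - np.median(latencies_gap_ms)
```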
Affiliation(s)
- Sebastián Moyano
- Department of Experimental Psychology, University of Granada, Granada, Spain; Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Josué Rico-Picó
- Department of Experimental Psychology, University of Granada, Granada, Spain; Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Ángela Conejero
- Department of Developmental & Educational Psychology, University of Granada, Granada, Spain; Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Ángela Hoyo
- Department of Experimental Psychology, University of Granada, Granada, Spain; Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- M Rosario Rueda
- Department of Experimental Psychology, University of Granada, Granada, Spain; Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
18
Hunt R, Mills C, Frost G, Blackmore T, Miller-Dicks M. The visual control of locomotion when stepping onto moving surfaces: A comparison of younger and older adults. Exp Gerontol 2023; 174:112117. [PMID: 36758648 DOI: 10.1016/j.exger.2023.112117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 01/27/2023] [Accepted: 02/04/2023] [Indexed: 02/11/2023]
Abstract
Stepping between static and moving surfaces presents a locomotor challenge associated with increased injury frequency and severity in older adults. The current study evaluates younger and older adults' behaviours when overcoming challenges representative of moving-walkway and escalator environments. Twelve younger adults (18-40 years, male = 8) and 15 older adults (60-81 years, male = 5) were examined using an integration of optoelectronic motion capture and mobile eye tracking. Participants were investigated while approaching and stepping onto a flat conveyor belt (static or moving; with or without surface (demarcation) lines). Specifically, the four conditions were: (i) static surface without demarcation lines; (ii) static surface with demarcation lines; (iii) moving surface without demarcation lines; and (iv) moving surface with demarcation lines. A 2 (age group) × 2 (surface condition) × 2 (demarcation condition) linear mixed model revealed no main or interaction effects (p > .05) for perturbation magnitude, indicating that participants maintained successful locomotion. However, different adaptive behaviours were identified between conditions with movement and accuracy demands (e.g., moving surfaces increased step length; demarcations reduced step length). Between-subjects effects identified differences between age groups: older adults used different behaviours, such as earlier gaze transfer from the final approach-walkway step location. Overall, the current study suggests that adaptive behaviours emerge relative to the environment's specific demands and the individual's action capabilities.
Affiliation(s)
- Rhys Hunt
- School of Sport, Health and Exercise Science, University of Portsmouth, United Kingdom
- Chris Mills
- School of Sport, Health and Exercise Science, University of Portsmouth, United Kingdom
- Gillian Frost
- Health and Safety Executive, Science Division, United Kingdom
- Tim Blackmore
- School of Sport, Health and Exercise Science, University of Portsmouth, United Kingdom
- Matt Miller-Dicks
- School of Sport, Health and Exercise Science, University of Portsmouth, United Kingdom
19
Nyström M, Niehorster DC, Andersson R, Hessels RS, Hooge ITC. The amplitude of small eye movements can be accurately estimated with video-based eye trackers. Behav Res Methods 2023; 55:657-669. [PMID: 35419703 PMCID: PMC10027793 DOI: 10.3758/s13428-021-01780-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/20/2021] [Indexed: 11/08/2022]
Abstract
Estimating gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging, partly because a video camera is limited in spatial and temporal resolution, and because the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (≪ 1 deg). Results highlight how inaccuracies in center localization are related to 1) how many pixels the pupil and CR span in the eye-camera image, 2) the method used to compute the center of the pupil and CRs, and 3) the level of image noise. Our results provide a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that eye movements with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation.
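The contrast the authors draw, binary versus intensity-based center localization, can be illustrated with two centroid estimators on a synthetic blob (a sketch; the image model and function names are illustrative, not the paper's simulation):

```python
import numpy as np

def centroid_binary(img, thresh):
    """Centroid of pixels above threshold: the position is quantized by
    the binary mask, which limits sub-pixel resolution."""
    ys, xs = np.nonzero(img > thresh)
    return xs.mean(), ys.mean()

def centroid_intensity(img):
    """Intensity-weighted centroid: gray values contribute continuously,
    giving sub-pixel estimates of the feature center."""
    ys, xs = np.indices(img.shape)
    w = img.astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

On a smooth Gaussian blob placed at a non-integer position, the intensity-weighted estimate recovers the sub-pixel center; the binary estimate is biased toward the pixel grid, consistent with the paper's conclusion about small-amplitude movements.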
Affiliation(s)
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Box 201, SE-221 00, Lund, Sweden
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Box 201, SE-221 00, Lund, Sweden
- Department of Psychology, Lund University, Box 201, SE-221 00, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584, CS, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584, CS, Utrecht, The Netherlands
20
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605 PMCID: PMC9535040 DOI: 10.3758/s13428-021-01762-8] [Citation(s) in RCA: 45] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/29/2021] [Indexed: 11/08/2022]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
21
Hooge ITC, Niehorster DC, Nyström M, Andersson R, Hessels RS. Fixation classification: how to merge and select fixation candidates. Behav Res Methods 2022; 54:2765-2776. [PMID: 35023066 PMCID: PMC9729319 DOI: 10.3758/s13428-021-01723-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/06/2021] [Indexed: 12/16/2022]
Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to eye-tracking data, researchers have a broad choice of classification methods for extracting various behaviors (e.g., saccades, blinks, fixations) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: what is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
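The two recommended selection rules can be stated compactly. A sketch assuming fixation candidates carry start/end times in seconds and mean positions in degrees (field names and the candidate data structure are illustrative, not the authors' implementation): merge consecutive candidates separated by less than the minimal saccade amplitude, then drop fixations shorter than the minimal duration.

```python
import numpy as np

def merge_and_select(candidates, min_sacc_amp=1.0, min_dur=0.060):
    """candidates: time-ordered list of dicts with 't0', 't1' (s) and
    'x', 'y' (deg). Merge consecutive candidates whose spatial separation
    is below min_sacc_amp deg, then keep fixations of at least min_dur s."""
    merged = []
    for c in candidates:
        if merged and np.hypot(c['x'] - merged[-1]['x'],
                               c['y'] - merged[-1]['y']) < min_sacc_amp:
            prev = merged[-1]
            # duration-weighted mean position for the merged fixation
            w_prev = prev['t1'] - prev['t0']
            w_c = c['t1'] - c['t0']
            prev['x'] = (prev['x'] * w_prev + c['x'] * w_c) / (w_prev + w_c)
            prev['y'] = (prev['y'] * w_prev + c['y'] * w_c) / (w_prev + w_c)
            prev['t1'] = c['t1']
        else:
            merged.append(dict(c))
    return [f for f in merged if f['t1'] - f['t0'] >= min_dur]
```

Merging before duration filtering matters: two short candidates interrupted by a sub-threshold displacement survive as one fixation, whereas filtering first would discard both.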
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
22
Tomalski P, López Pérez D, Radkowska A, Malinowska-Korczak A. Dyadic interactions during infant learning: Exploring infant-parent exchanges in experimental eye-tracking studies. Infant Behav Dev 2022; 69:101780. [DOI: 10.1016/j.infbeh.2022.101780] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 09/20/2022] [Accepted: 10/12/2022] [Indexed: 11/05/2022]
23
Hunt R, Blackmore T, Mills C, Dicks M. Evaluating the integration of eye-tracking and motion capture technologies: Quantifying the accuracy and precision of gaze measures. Iperception 2022; 13:20416695221116652. [PMID: 36186610 PMCID: PMC9516427 DOI: 10.1177/20416695221116652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 07/10/2022] [Indexed: 11/18/2022] Open
Abstract
Integrating mobile eye tracking and optoelectronic motion capture enables point of gaze to be expressed within the laboratory co-ordinate system and presents a method not commonly applied in research examining dynamic behaviors, such as locomotion. This paper examines the quality of gaze data collected through this integration. Because research suggests that increased viewing distances are associated with reduced data quality, we investigated the accuracy and precision of gaze data as participants (N = 11) viewed floor-based targets at distances of 1–6 m. A mean accuracy of 2.55 ± 1.12° was identified; however, accuracy and precision measures (relative to targets) were significantly (p < .05) reduced at greater viewing distances. We then consider whether signal-processing techniques may improve accuracy and precision and overcome issues associated with missing data. A 4th-order Butterworth lowpass filter with cut-off frequencies determined via autocorrelation did not significantly improve data quality; however, interpolation via quintic spline was sufficient to overcome gaps of up to 0.1 s. We conclude that the integration of gaze and motion capture presents a viable methodology for the study of human behavior and offers advantages for data collection, treatment, and analysis. We provide considerations for the collection, treatment, and analysis of gaze data that may help inform future methodological decisions.
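The two signal-processing steps evaluated above can be sketched with SciPy as follows. This is an illustrative reconstruction, not the authors' code: the sampling rate and the fixed 10 Hz cut-off are assumptions (the study derived cut-offs per signal via autocorrelation).

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.signal import butter, filtfilt

def smooth_gaze(x, fs, cutoff_hz=10.0):
    """Zero-lag 4th-order Butterworth low-pass filter of a gaze trace."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def fill_gaps(t, x, max_gap_s=0.1):
    """Quintic-spline interpolation across runs of missing samples (NaNs),
    but only for interior gaps no longer than max_gap_s seconds."""
    x = x.copy()
    valid = ~np.isnan(x)
    spline = InterpolatedUnivariateSpline(t[valid], x[valid], k=5)
    i, n = 0, len(x)
    while i < n:
        if np.isnan(x[i]):
            j = i
            while j < n and np.isnan(x[j]):
                j += 1
            # bridge the gap only if it is interior and short enough
            if 0 < i and j < n and t[j - 1] - t[i] <= max_gap_s:
                x[i:j] = spline(t[i:j])
            i = j
        else:
            i += 1
    return x
```

On a 100 Hz trace, a 50 ms gap is bridged while a 300 ms gap is left missing, matching the 0.1 s criterion reported above.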
Affiliation(s)
- Rhys Hunt
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
- Tim Blackmore
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
- Chris Mills
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
- Matt Dicks
- School of Sport, Health and Exercise Science, University of Portsmouth, Portsmouth, UK
24
van Rooijen R, Ward EK, de Jonge M, Kemner C, Junge C. Two-year-olds at elevated risk for ASD can learn novel words from their parents. J Child Lang 2022; 49:1052-1063. [PMID: 34227461 DOI: 10.1017/s0305000921000428] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Children diagnosed with autism spectrum disorder (ASD) often have smaller vocabularies in infancy compared to typically developing children. To understand whether their smaller vocabularies stem from problems in learning, our study compared a prospective risk sample of 18 elevated-risk and 11 lower-risk 24-month-olds on current vocabulary size and word-learning ability, using a paradigm in which parents teach their child words. Results revealed that both groups learned novel words, even though parents indicated that infants at elevated risk of ASD knew fewer words. This suggests that these early compromised vocabularies cannot be solely linked to difficulties in word learning.
Affiliation(s)
- Rianne van Rooijen
- Experimental Psychology, Helmholtz Institute, Utrecht University, Netherlands
- Developmental Psychology, Utrecht University, Netherlands
- Emma Kate Ward
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Maretha de Jonge
- Clinical Neurodevelopmental Sciences, Leiden University, Netherlands
- Brain Center Rudolf Magnus, University Medical Center Utrecht, Netherlands
- Chantal Kemner
- Experimental Psychology, Helmholtz Institute, Utrecht University, Netherlands
- Developmental Psychology, Utrecht University, Netherlands
- Brain Center Rudolf Magnus, University Medical Center Utrecht, Netherlands
- Caroline Junge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Netherlands
- Developmental Psychology, Utrecht University, Netherlands
25
Eye contact avoidance in crowds: A large wearable eye-tracking study. Atten Percept Psychophys 2022; 84:2623-2640. [PMID: 35996058 PMCID: PMC9630249 DOI: 10.3758/s13414-022-02541-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/08/2022] [Indexed: 11/30/2022]
Abstract
Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behavior. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.
26
Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022:10.3758/s13428-021-01763-7. [PMID: 35715615 DOI: 10.3758/s13428-021-01763-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/27/2021] [Indexed: 11/08/2022]
Abstract
Detecting eye movements in raw eye-tracking data is a well-established research area in itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can be achieved only given a shared understanding of the pursued goal. This is often formalised by defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable and impeding a common understanding. In this review, we systematically describe and analyse the evaluation methods and measures employed in the eye-movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye-movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from quantifying the discovered errors. In our analysis we separate these two steps where possible, enabling almost arbitrary combinations of them in an evaluation pipeline. We also present the first large-scale empirical analysis of event-matching strategies in the literature, examining these various combinations both in practice and in theory. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation.
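A common event-level evaluation strategy of the kind reviewed here first matches detected events to ground-truth events one-to-one by temporal overlap and then scores the correspondences. The sketch below is a generic illustration of that idea, not code from the linked library; the 0.5 IoU threshold is an assumption.

```python
def iou(a, b):
    """Temporal intersection-over-union of two (onset, offset) events."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def match_events(true_events, pred_events, min_iou=0.5):
    """Greedy one-to-one matching by decreasing IoU, then event-level
    precision/recall/F1 over the discovered correspondences."""
    pairs = sorted(
        ((iou(t, p), i, j)
         for i, t in enumerate(true_events)
         for j, p in enumerate(pred_events)),
        reverse=True)
    used_t, used_p, matches = set(), set(), []
    for score, i, j in pairs:
        if score >= min_iou and i not in used_t and j not in used_p:
            used_t.add(i); used_p.add(j); matches.append((i, j))
    tp = len(matches)
    precision = tp / len(pred_events) if pred_events else 0.0
    recall = tp / len(true_events) if true_events else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return matches, precision, recall, f1
```

Separating the matching step from the scoring step, as the review recommends, makes it easy to swap either component independently.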
27
Johansson R, Nyström M, Dewhurst R, Johansson M. Eye-movement replay supports episodic remembering. Proc Biol Sci 2022; 289:20220964. [PMID: 35703049 PMCID: PMC9198773 DOI: 10.1098/rspb.2022.0964] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
When we bring to mind something we have seen before, our eyes spontaneously unfold in a sequential pattern strikingly similar to that made during the original encounter, even in the absence of supporting visual input. Eye movements may then serve a purpose opposite to their usual one of acquiring new visual information: they may act as self-generated cues pointing to stored memories. Over 50 years ago, Donald Hebb, the forefather of cognitive neuroscience, posited that such sequential replay of eye movements supports our ability to mentally recreate visuospatial relations during episodic remembering. However, direct evidence for this influential claim has been lacking. Here we isolate the sequential properties of spontaneous eye movements during encoding and retrieval in a pure recall memory task and capture their encoding-retrieval overlap. Critically, we show that the fidelity with which a series of consecutive eye movements from initial encoding is sequentially retained during subsequent retrieval predicts the quality of the recalled memory. Our findings provide direct evidence that such scanpaths are replayed to assemble and reconstruct spatio-temporal relations as we remember, and they further suggest that distinct scanpath properties contribute differentially depending on the nature of the goal-relevant memory.
28
Cimminella F, D'Innocenzo G, Sala SD, Iavarone A, Musella C, Coco MI. Preserved Extra-Foveal Processing of Object Semantics in Alzheimer's Disease. J Geriatr Psychiatry Neurol 2022; 35:418-433. [PMID: 34044661 DOI: 10.1177/08919887211016056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Alzheimer's disease (AD) patients underperform on a range of tasks requiring semantic processing, but it is unclear whether this impairment is due to a generalised loss of semantic knowledge or to issues in accessing and selecting such information from memory. The objective of this eye-tracking visual search study was to determine whether semantic expectancy mechanisms known to support object recognition in healthy adults are preserved in AD patients. Furthermore, as AD patients are often reported to be impaired in accessing information in extra-foveal vision, we investigated whether that was also the case in our study. Twenty AD patients and 20 age-matched controls searched for a target object among an array of distractors presented extra-foveally. The distractors were either semantically related or unrelated to the target (e.g., a car in an array with other vehicles or kitchen items). Results showed that semantically related objects were detected with more difficulty than semantically unrelated objects by both groups, but more markedly by the AD group. Participants looked earlier and for longer at the critical objects when these were semantically unrelated to the distractors. Our findings show that AD patients can process the semantics of objects and access it in extra-foveal vision. This suggests that their impairments in semantic processing may reflect difficulties in accessing semantic information rather than a generalised loss of semantic memory.
Affiliation(s)
- Francesco Cimminella
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, United Kingdom; Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy
- Sergio Della Sala
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Caterina Musella
- Associazione Italiana Malattia d'Alzheimer (AIMA sezione Campania), Naples, Italy
- Moreno I Coco
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; School of Psychology, The University of East London, London, United Kingdom
29
Holmqvist K, Örbom SL, Zemblys R. Small head movements increase and colour noise in data from five video-based P-CR eye trackers. Behav Res Methods 2022; 54:845-863. [PMID: 34357538 PMCID: PMC8344338 DOI: 10.3758/s13428-021-01648-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/06/2021] [Indexed: 11/08/2022]
Abstract
We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase levels and colouring of the noise in data recorded from all five eye trackers in this study. This increase of noise levels is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil size artefact has no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al., (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of source, filters or head movements, coloured noise can be confused for oculomotor drift. We also find that usage of the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 result in noisier data compared to less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink. Data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers.
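The two noise properties at issue, level and colour, are often quantified as the RMS of sample-to-sample displacements and as the slope of the log-log power spectrum, respectively (white noise has slope near zero; low-pass filtered, "coloured" noise has a clearly negative slope). The sketch below is illustrative, not the authors' analysis code.

```python
import numpy as np

def rms_s2s(x, y):
    """Noise level: RMS of the sample-to-sample gaze displacement."""
    return np.sqrt(np.mean(np.diff(x) ** 2 + np.diff(y) ** 2))

def spectral_slope(x, fs):
    """Noise colour: slope of the periodogram in log-log space
    (~0 for white noise, increasingly negative for coloured noise)."""
    f = np.fft.rfftfreq(len(x), 1.0 / fs)[1:]
    p = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return slope
```

Applied to recordings with artificial versus human-worn eyes, a shift of the slope away from zero would indicate the colouring effect described above.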
Affiliation(s)
- Kenneth Holmqvist
- Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland
- Department of Psychology, Regensburg University, Regensburg, Germany
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
30
Quinn PD, López Pérez D, Kennedy DP, Bölte S, D'Onofrio B, Lichtenstein P, Falck‐Ytter T. Visual search: Heritability and association with general intelligence. Genes Brain Behav 2022; 21:e12779. [PMID: 35044053 PMCID: PMC9744476 DOI: 10.1111/gbb.12779] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 10/01/2021] [Accepted: 11/09/2021] [Indexed: 02/03/2023]
Abstract
Visual search guides goal-directed action in humans and many other species, and it has been studied extensively in the past. Yet, no study has investigated the relative contributions of genes and environments to individual differences in visual search performance, or to which extent etiologies are shared with broader cognitive phenotypes. To address this gap, we studied visual search and general intelligence in 156 monozygotic (MZ) and 158 same-sex dizygotic (DZ) twin pairs. We found that different indexes of visual search performance (response latency and visual search efficiency) were moderately heritable. Phenotypic correlations between visual search and intelligence were small-to-moderate, and only a small proportion of the genetic variance in visual search was shared with genetic variance in intelligence. We discuss these findings in the context of the "generalist genes hypothesis" stating that different cognitive functions have a common genetic basis.
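As a back-of-the-envelope illustration of how heritability is estimated from MZ and DZ twin correlations, Falconer's classic formula contrasts the two kinds of twin similarity. The study itself would use formal twin modelling; the sketch and the example correlations below are hypothetical.

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's approximation from twin correlations:
    h2 = 2 * (r_MZ - r_DZ)   additive genetic variance
    c2 = 2 * r_DZ - r_MZ     shared-environment variance
    e2 = 1 - r_MZ            unique-environment variance (+ error)
    """
    h2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return h2, c2, e2
```

For example, hypothetical correlations r_MZ = 0.6 and r_DZ = 0.4 would imply moderate heritability (h2 = 0.4), in the spirit of the "moderately heritable" result reported above.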
Affiliation(s)
- Patrick D. Quinn
- Department of Applied Health Science, School of Public Health, Indiana University, Bloomington, Indiana, USA
- David López Pérez
- Neurocognitive Development Unit, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Daniel P. Kennedy
- Department of Psychological and Brain Sciences, Cognitive Science Program, Program in Neuroscience, Indiana University, Bloomington, Indiana, USA
- Sven Bölte
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research; Department of Women's and Children's Health, Karolinska Institutet & Child and Adolescent Psychiatry, Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden; Curtin Autism Research Group, Curtin School of Allied Health, Curtin University, Perth, Western Australia, Australia
- Brian D'Onofrio
- Department of Applied Health Science, School of Public Health, Indiana University, Bloomington, Indiana, USA; Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Paul Lichtenstein
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Terje Falck‐Ytter
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research; Department of Women's and Children's Health, Karolinska Institutet & Child and Adolescent Psychiatry, Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden; Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden; Swedish Collegium for Advanced Study, Uppsala, Sweden
31
Dahl M, Tryding M, Heckler A, Nyström M. Quiet Eye and Computerized Precision Tasks in First-Person Shooter Perspective Esport Games. Front Psychol 2021; 12:676591. [PMID: 34819892 PMCID: PMC8606425 DOI: 10.3389/fpsyg.2021.676591] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 10/15/2021] [Indexed: 11/13/2022] Open
Abstract
The gaze behavior in sports and other applied settings has been studied for more than 20 years. A common finding is related to the “quiet eye” (QE), predicting that the duration of the last fixation before a critical event is associated with higher performance. Unlike previous studies conducted in applied settings with mobile eye trackers, we investigate the QE in a context similar to esport, in which participants click the mouse to hit targets presented on a computer screen under different levels of cognitive load. Simultaneously, eye and mouse movements were tracked using a high-end remote eye tracker at 300 Hz. Consistent with previous studies, we found that longer QE fixations were associated with higher performance. Increasing the cognitive load delayed the onset of the QE fixation, but had no significant influence on the QE duration. We discuss the implications of our results in the context of how the QE is defined, the quality of the eye-tracker data, and the type of analysis applied to QE data.
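Once fixations are classified, the QE measure itself, the duration of the final fixation before the critical event, is straightforward to compute. The sketch below is hypothetical (the study's exact onset and offset criteria are more involved), and it assumes fixations are sorted by onset.

```python
def quiet_eye_duration(fixations, event_time):
    """Duration of the last fixation whose onset precedes the critical
    event (here, the mouse click on the target).

    fixations: list of (onset_ms, offset_ms), sorted by onset
    Returns None if no fixation starts before the event.
    """
    before = [f for f in fixations if f[0] < event_time]
    if not before:
        return None
    onset, offset = before[-1]
    return offset - onset
```

In the study's framing, longer values of this quantity were associated with higher task performance.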
Affiliation(s)
- Mats Dahl
- Department of Psychology, Lund University, Lund, Sweden
32
Tomalski P, López Pérez D, Radkowska A, Malinowska-Korczak A. Selective Changes in Complexity of Visual Scanning for Social Stimuli in Infancy. Front Psychol 2021; 12:705600. [PMID: 34795610 PMCID: PMC8593402 DOI: 10.3389/fpsyg.2021.705600] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 10/08/2021] [Indexed: 11/17/2022] Open
Abstract
In the 1st year of life, infants gradually gain the ability to control their eye movements and explore visual scenes, which support their learning and emerging cognitive skills. These gains include domain-general skills such as rapid orienting or attention disengagement as well as domain-specific ones such as increased sensitivity to social stimuli. However, it remains unknown whether these developmental changes in what infants fixate and for how long in naturalistic scenes lead to the emergence of more complex, repeated sequences of fixations, especially when viewing human figures and faces, and whether these changes are related to improvements in domain-general attentional skills. Here we tested longitudinally the developmental changes in the complexity of fixation sequences at 5.5 and 11 months of age using Recurrence Quantification Analysis. We measured changes in how fixations recur in the same location and changes in the patterns (repeated sequences) of fixations in social and non-social scenes that were either static or dynamic. We found more complex patterns (i.e., repeated and longer sequences) of fixations in social than non-social scenes, both static and dynamic. There was also an age-related increase in the length of repeated fixation sequences only for social static scenes, which was independent of individual differences in orienting and attention disengagement. Our results can be interpreted as evidence for fine-tuning of infants' visual scanning skills. They selectively produce longer and more complex sequences of fixations on faces and bodies before reaching the end of the 1st year of life.
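The core quantity behind the Recurrence Quantification Analysis used above is how often fixations recur in (roughly) the same location. Below is a minimal sketch of that basic recurrence measure only, not the full RQA pipeline with determinism or laminarity measures; the input format and radius are assumptions.

```python
import numpy as np

def recurrence_rate(fix_xy, radius):
    """Proportion of fixation pairs closer together than `radius`,
    the basic recurrence measure underlying fixation-based RQA.

    fix_xy: (n, 2) array of fixation coordinates, in temporal order
    """
    d = np.linalg.norm(fix_xy[:, None, :] - fix_xy[None, :, :], axis=-1)
    n = len(fix_xy)
    rec = (d[np.triu_indices(n, k=1)] < radius).sum()
    return 2.0 * rec / (n * (n - 1))
```

Repeated fixation sequences, the paper's main measure, correspond to diagonal-line structures in the same recurrence matrix `d < radius`.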
Affiliation(s)
- Przemysław Tomalski
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland; Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Alicja Radkowska
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland; Faculty of Psychology, University of Warsaw, Warsaw, Poland
33
De Kloe YJR, Hooge ITC, Kemner C, Niehorster DC, Nyström M, Hessels RS. Replacing eye trackers in ongoing studies: A comparison of eye-tracking data quality between the Tobii Pro TX300 and the Tobii Pro Spectrum. Infancy 2021; 27:25-45. [PMID: 34687142 DOI: 10.1111/infa.12441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 09/06/2021] [Accepted: 09/20/2021] [Indexed: 11/26/2022]
Abstract
The Tobii Pro TX300 is a popular eye tracker in developmental eye-tracking research, yet it is no longer manufactured. If a TX300 breaks down, it may have to be replaced. The data quality of the replacement eye tracker may differ from that of the TX300, which may affect the experimental outcome measures. This is problematic for longitudinal and multi-site studies, and for researchers replacing eye trackers between studies. We therefore ask how the TX300 and its successor, the Tobii Pro Spectrum, compare in terms of eye-tracking data quality. Data quality, operationalized through precision, accuracy, and data loss, was compared between eye trackers for three age groups (around 5 months, 10 months, and 3 years). Precision was better for all gaze position signals obtained with the Spectrum in comparison to the TX300. Accuracy of the Spectrum was higher for the 5-month-old and 10-month-old children. For the three-year-old children, accuracy was similar across both eye trackers. Gaze position signals from the Spectrum exhibited lower proportions of data loss, and the duration of the data loss periods tended to be shorter. In conclusion, the Spectrum produces gaze position signals with higher data quality, especially for the younger infants. Implications for data analysis are discussed.
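The three data-quality measures compared here (accuracy, precision, and data loss) have fairly standard operationalisations. The sketch below illustrates one common set of conventions, e.g. RMS-S2S precision on a fixation-target recording; it is not the authors' code.

```python
import numpy as np

def data_quality(gaze_x, gaze_y, target_x, target_y):
    """Data quality of a recording where the participant fixates a target:
    accuracy  = mean Euclidean offset of gaze from the target,
    precision = RMS of sample-to-sample displacements (RMS-S2S),
    data loss = proportion of missing (NaN) samples.
    Note: invalid samples are simply dropped before the diff,
    which is a simplification around gaps."""
    valid = ~(np.isnan(gaze_x) | np.isnan(gaze_y))
    loss = 1.0 - valid.mean()
    x, y = gaze_x[valid], gaze_y[valid]
    accuracy = np.mean(np.hypot(x - target_x, y - target_y))
    precision = np.sqrt(np.mean(np.diff(x) ** 2 + np.diff(y) ** 2))
    return accuracy, precision, loss
```

Comparing these three numbers per age group across the TX300 and Spectrum recordings is, in essence, the comparison the study reports.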
Affiliation(s)
- Yentl J R De Kloe
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Chantal Kemner
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden; Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
34
Abstract
Eye-tracking and recording of physiological signals are increasingly used in research within cognitive science and human–computer interaction. For example, gaze position and measures of autonomic arousal, including pupil dilation, skin conductance (SC), and heart rate (HR), provide an indicator of cognitive and physiological processes. The growing popularity of these techniques is partially driven by the emergence of low-cost recording equipment and the proliferation of open-source software for data collection and analysis of such signals. However, the use of new technology requires investigation of its reliability and validation with respect to real-world usage and against established technologies. Accordingly, in two experiments (total N = 69), we assessed the Gazepoint GP3-HD eye-tracker and Gazepoint Biometrics (GPB) system from Gazepoint. We show that the accuracy, precision, and robustness of the eye-tracker are comparable to competing systems. While fixation and saccade events can be reliably extracted, the study of saccade kinematics is affected by the low sampling rate. The GP3-HD is also able to capture psychological effects on pupil dilation in addition to the well-defined pupillary light reflex. Finally, moderate-to-strong correlations between physiological recordings and derived metrics of SC and HR between the GPB and the well-established BIOPAC MP160 support its validity. However, low amplitude of the SC signal obtained from the GPB may reduce sensitivity when separating phasic and tonic components. Similarly, data loss in pulse monitoring may pose difficulties for certain HR variability analyses.
35
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. [PMID: 34125160 PMCID: PMC8212426 DOI: 10.1167/jov.21.6.8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
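The velocity-thresholding stage mentioned above can be sketched as a simple I-VT-style detector on a one-dimensional position trace. The threshold and input format are illustrative assumptions, and the optimization-based denoising step proposed in the paper is omitted.

```python
import numpy as np

def detect_saccades(gaze_deg, fs, vel_thresh=30.0):
    """Mark samples whose angular velocity exceeds vel_thresh (deg/s)
    and return (onset, offset) sample indices of each above-threshold run."""
    vel = np.abs(np.gradient(gaze_deg)) * fs
    above = vel > vel_thresh
    edges = np.diff(above.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    if above[0]:
        onsets = np.r_[0, onsets]
    if above[-1]:
        offsets = np.r_[offsets, len(above)]
    return list(zip(onsets, offsets))
```

A fixed threshold like this is exactly what fails for abnormally slow saccades, whose peak velocity may never clear it, which motivates the piecewise-polynomial approach proposed in the paper.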
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
36
Spiller M, Liu YH, Hossain MZ, Gedeon T, Geissler J, Nürnberger A. Predicting Visual Search Task Success from Eye Gaze Data as a Basis for User-Adaptive Information Visualization Systems. ACM Trans Interact Intell Syst 2021. [DOI: 10.1145/3446638] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Information visualizations are an efficient means to support the users in understanding large amounts of complex, interconnected data; user comprehension, however, depends on individual factors such as their cognitive abilities. The research literature provides evidence that user-adaptive information visualizations positively impact the users’ performance in visualization tasks. This study attempts to contribute toward the development of a computational model to predict the users’ success in visual search tasks from eye gaze data and thereby drive such user-adaptive systems. State-of-the-art deep learning models for time series classification have been trained on sequential eye gaze data obtained from 40 study participants’ interaction with a circular and an organizational graph. The results suggest that such models yield higher accuracy than a baseline classifier and previously used models for this purpose. In particular, a Multivariate Long Short Term Memory Fully Convolutional Network shows encouraging performance for its use in online user-adaptive systems. Given this finding, such a computational model can infer the users’ need for support during interaction with a graph and trigger appropriate interventions in user-adaptive information visualization systems. This facilitates the design of such systems since further interaction data like mouse clicks is not required.
Affiliation(s)
- Moritz Spiller
- INKA—Innovation Laboratory for Image Guided Therapy, Health Campus Immunology Infectiology and Inflammation (GC-I3), Otto-von-Guericke-University, Germany
- Tom Gedeon
- The Australian National University, Australia
37
Eye-movements reveal semantic interference effects during the encoding of naturalistic scenes in long-term memory. Psychon Bull Rev 2021; 28:1601-1614. [PMID: 34009623 DOI: 10.3758/s13423-021-01920-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/22/2021] [Indexed: 11/08/2022]
Abstract
Similarity-based semantic interference (SI) hinders memory recognition. Within long-term visual memory paradigms, the more scenes (or objects) from the same semantic category are viewed, the harder it is to recognize each individual instance. A growing body of evidence shows that overt attention is intimately linked to memory. However, it is yet to be understood whether SI mediates overt attention during scene encoding, and so explains its detrimental impact on recognition memory. In the current experiment, participants watched 372 photographs belonging to different semantic categories (e.g., a kitchen) at different frequencies (4, 20, 40 or 60 images) while being eye-tracked. After 10 minutes, they were presented with the same 372 photographs plus 372 new photographs and asked whether they recognized each photo (i.e., an old/new paradigm). We found that the greater the SI, the poorer the recognition performance, especially for old scenes, for which memory representations existed. Scenes explored more widely were better recognized, but with increasing SI participants focused on more local regions of the scene in search of its potentially distinctive details. Attending to the centre of the display or to scene regions rich in low-level saliency was detrimental to recognition accuracy, and as SI increased participants were more likely to rely on visual saliency. The complexity of maintaining faithful memory representations under increasing SI also manifested in longer fixation durations; in fact, more successful encoding was associated with shorter fixations. Our study highlights the interdependence between attention and memory during high-level processing of semantic information.
|
38
|
Alexithymia explains atypical spatiotemporal dynamics of eye gaze in autism. Cognition 2021; 212:104710. [PMID: 33862441 DOI: 10.1016/j.cognition.2021.104710] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 03/24/2021] [Accepted: 03/26/2021] [Indexed: 11/23/2022]
Abstract
Recognition of emotional facial expressions is considered to be atypical in autism. This difficulty is thought to be due to the way that facial expressions are visually explored. Evidence for atypical visual exploration of emotional faces in autism is, however, equivocal. We propose that, where observed, atypical visual exploration of emotional facial expressions is due to alexithymia, a distinct but frequently co-occurring condition. In this eye-tracking study we tested the alexithymia hypothesis using a number of recent methodological advances to study eye gaze during several emotion processing tasks (emotion recognition, intensity judgements, free gaze), in 25 adults with, and 45 without, autism. A multilevel polynomial modelling strategy was used to describe the spatiotemporal dynamics of eye gaze to emotional facial expressions. Converging evidence from traditional and novel analysis methods revealed that atypical gaze to the eyes is best predicted by alexithymia in both autistic and non-autistic individuals. Information theoretic analyses also revealed differential effects of task on gaze patterns as a function of alexithymia, but not autism. These findings highlight factors underlying atypical emotion processing in autistic individuals, with wide-ranging implications for emotion research.
|
39
|
Van der Donck S, Vettori S, Dzhelyova M, Mahdi SS, Claes P, Steyaert J, Boets B. Investigating automatic emotion processing in boys with autism via eye tracking and facial mimicry recordings. Autism Res 2021; 14:1404-1420. [PMID: 33704930 DOI: 10.1002/aur.2490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 02/08/2021] [Indexed: 11/08/2022]
Abstract
Difficulties in automatic emotion processing in individuals with autism spectrum disorder (ASD) might remain concealed in behavioral studies due to compensatory strategies. To gain more insight into the mechanisms underlying facial emotion recognition, we recorded eye tracking and facial mimicry data of 20 school-aged boys with ASD and 20 matched typically developing controls while performing an explicit emotion recognition task. Proportional looking times to specific face regions (eyes, nose, and mouth) and face exploration dynamics were analyzed. In addition, facial mimicry was assessed. Boys with ASD and controls were equally capable of recognizing expressions and did not differ in proportional looking times, or in number and duration of fixations. Yet, specific facial expressions elicited particular gaze patterns, especially within the control group. Both groups showed similar face scanning dynamics, although boys with ASD demonstrated smaller saccadic amplitudes. Regarding facial mimicry, we found no emotion-specific facial responses and no group differences in the responses to the displayed facial expressions. Our results indicate that boys with and without ASD employ similar eye gaze strategies to recognize facial expressions. Smaller saccadic amplitudes in boys with ASD might indicate a less exploratory face processing strategy. Yet, this slightly more persistent visual scanning behavior in boys with ASD does not imply less efficient emotion information processing, given the similar behavioral performance. Results on the facial mimicry data indicate similar facial responses to emotional faces in boys with and without ASD. LAY SUMMARY: We investigated (i) whether boys with and without autism apply different face exploration strategies when recognizing facial expressions and (ii) whether they mimic the displayed facial expression to a similar extent.
We found that boys with and without ASD recognize facial expressions equally well, and that both groups show similar facial reactions to the displayed facial emotions. Yet, boys with ASD visually explored the faces slightly less than the boys without ASD.
Affiliation(s)
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Milena Dzhelyova
- Institute of Research in Psychological Sciences, Institute of Neuroscience, Université de Louvain, Louvain-La-Neuve, Belgium
- Soha Sadat Mahdi
- Medical Imaging Research Center, MIRC, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Peter Claes
- Medical Imaging Research Center, MIRC, Leuven, Belgium; Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium; Department of Human Genetics, KU Leuven, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
|
40
|
Abstract
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
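The two precision measures this abstract builds on, RMS-S2S and STD, can be computed directly from a fixation's gaze samples. The sketch below is an illustrative Python implementation of the general definitions, not the authors' code; the function names and the synthetic noise parameters are assumptions for demonstration only.

```python
import numpy as np

def rms_s2s(x, y):
    # Root mean square of sample-to-sample displacements.
    dx, dy = np.diff(x), np.diff(y)
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))

def std_precision(x, y):
    # Combined standard deviation of gaze position around its mean.
    return np.sqrt(np.var(x) + np.var(y))

# Synthetic "fixation": white noise around a fixed gaze position (deg).
rng = np.random.default_rng(0)
x = 10.0 + rng.normal(0.0, 0.05, 500)
y = 5.0 + rng.normal(0.0, 0.05, 500)
ratio = rms_s2s(x, y) / std_precision(x, y)  # close to sqrt(2) for white noise
```

For temporally uncorrelated (white) noise the two measures are related by RMS-S2S ≈ √2 · STD, so their ratio gives a simple handle on signal type: values near √2 suggest white-noise-like signals, while much smaller values suggest slow drift.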
|
41
|
Aydarkhanov R, Uscumlic M, Chavarriaga R, Gheorghe L, Millan JDR. Closed-loop EEG study on visual recognition during driving. J Neural Eng 2021; 18. [PMID: 33494072 DOI: 10.1088/1741-2552/abdfb2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 01/25/2021] [Indexed: 11/12/2022]
Abstract
OBJECTIVE In contrast to the classical visual BCI paradigms, which adhere to a rigid trial structure and restricted user behavior, EEG-based visual recognition decoding during our daily activities remains challenging. The objective of this study is to explore the feasibility of decoding the EEG signature of visual recognition in experimental conditions promoting our natural ocular behavior when interacting with our dynamic environment. APPROACH In our experiment, subjects visually searched for a target object among suddenly appearing objects in the environment while driving a car simulator. Given that subjects exhibit unconstrained overt visual behavior, we based our study on eye fixation-related potentials (EFRP). We report on gaze behavior and single-trial EFRP decoding performance (fixations on visually similar target vs. non-target objects). In addition, we demonstrate the application of our approach in a closed-loop BCI setup. MAIN RESULTS To identify the target out of four symbol types along a road segment, the BCI system integrated decoding probabilities of multiple EFRP and achieved an average online accuracy of 0.37 ± 0.06 (12 subjects), statistically significantly above the chance level. Using the acquired data, we performed a comparative study of classification algorithms (discriminating target vs. non-target) and feature spaces in a simulated online scenario. The EEG approaches yielded similarly moderate performance of at most 0.6 AUC, yet statistically significantly above the chance level. In addition, gaze duration (dwell time) appears to be an additional informative feature in this context. SIGNIFICANCE These results show that visual recognition of sudden events can be decoded during active driving. Therefore, this study lays a foundation for assistive and recommender systems based on the driver's brain signals.
Affiliation(s)
- Ruslan Aydarkhanov
- EPFL, EPFL STI IBI-STI MIPLAB, Ch. des Mines 9, Geneva, 1202, Switzerland
- Marija Uscumlic
- Nissan International SA, La Pièce 12, Rolle, 1180, Switzerland
- Ricardo Chavarriaga
- Forschungsschwerpunkt Information Engineering, ZHAW, Obere Kirchgasse 2 / Steinberggasse 12/14, Winterthur, 8400, Switzerland
- Lucian Gheorghe
- Advanced Materials and Processing Laboratory, Nissan Research Center, Nissan Motors Co. Ltd., 1, Natsushima, Yokosuka-shi, 237-8523, Japan
- Jose Del R Millan
- ECE & Neurology, University of Texas at Austin, HDB 5.306, 1601 Trinity St Bldg B, Austin, Texas, 78712, United States
|
42
|
Stein N. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. Iperception 2021; 12:2041669520983338. [PMID: 33628410 PMCID: PMC7883159 DOI: 10.1177/2041669520983338] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 11/16/2020] [Indexed: 11/15/2022] Open
Abstract
A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and the latencies ranged from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
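A common way to estimate such delays from simultaneous recordings is to find the lag that best aligns the tracker's gaze trace with the reference signal (here, the EOG). The following is a minimal cross-correlation sketch under idealized assumptions (step-like saccades, a shared sampling clock), not the measurement procedure used in the study:

```python
import numpy as np

def estimate_delay(reference, tracker, fs):
    # Lag (in seconds) maximizing the cross-correlation between a
    # reference signal (e.g. EOG) and the tracker's gaze trace.
    # Positive values mean the tracker lags the reference.
    ref = reference - reference.mean()
    trk = tracker - tracker.mean()
    corr = np.correlate(trk, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / fs

fs = 1000.0  # shared sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
eog = np.where(t > 0.30, 1.0, 0.0)    # idealized saccade step in the EOG
gaze = np.where(t > 0.33, 1.0, 0.0)   # same step, 30 ms later in the tracker
delay = estimate_delay(eog, gaze, fs)  # ≈ 0.03 s
```

Real EOG traces are noisy and drift-prone, so in practice the alignment would be done per saccade on filtered signals rather than on whole raw recordings.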
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Muenster, Muenster, Germany
|
43
|
GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behav Res Methods 2020; 52:1244-1253. [PMID: 31898293 PMCID: PMC7280338 DOI: 10.3758/s13428-019-01314-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data of the Tobii Pro Glasses 2 head-mounted eye tracker, as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software provides the following functionality written in MATLAB: (1) a graphical interface for navigating the study and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2's gaze direction, pupil size, gyroscope and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
|
44
|
Vettori S, Van der Donck S, Nys J, Moors P, Van Wesemael T, Steyaert J, Rossion B, Dzhelyova M, Boets B. Combined frequency-tagging EEG and eye-tracking measures provide no support for the "excess mouth/diminished eye attention" hypothesis in autism. Mol Autism 2020; 11:94. [PMID: 33228763 PMCID: PMC7686749 DOI: 10.1186/s13229-020-00396-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 11/02/2020] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Scanning faces is important for social interactions. Difficulty with the social use of eye contact constitutes one of the clinical symptoms of autism spectrum disorder (ASD). It has been suggested that individuals with ASD look less at the eyes and more at the mouth than typically developing (TD) individuals, possibly due to gaze aversion or gaze indifference. However, eye-tracking evidence for this hypothesis is mixed. While gaze patterns convey information about overt orienting processes, it is unclear how this is manifested at the neural level and how relative covert attention to the eyes and mouth of faces might be affected in ASD. METHODS We used frequency-tagging EEG in combination with eye tracking, while participants watched fast-flickering faces in 1-min stimulation sequences. The upper and lower halves of the faces were presented at 6 Hz and 7.5 Hz or vice versa in different stimulation sequences, allowing us to objectively disentangle the neural saliency of the eyes versus the mouth region of a perceived face. We tested 21 boys with ASD (8-12 years old) and 21 TD control boys, matched for age and IQ. RESULTS Both groups looked longer at the eyes than the mouth, without any group difference in relative fixation duration to these features. TD boys looked significantly more at the nose, while the ASD boys looked more outside the face. EEG neural saliency data partly followed this pattern: neural responses to the upper or lower face half were not different between groups, but in the TD group, neural responses to the lower face halves were larger than responses to the upper part. Face exploration dynamics showed that TD individuals mostly maintained fixations within the same facial region, whereas individuals with ASD switched more often between the face parts. LIMITATIONS Replication in large and independent samples may be needed to validate these exploratory results.
CONCLUSIONS Combined eye-tracking and frequency-tagged neural responses show no support for the excess mouth/diminished eye gaze hypothesis in ASD. The more exploratory face scanning style observed in ASD might be related to their increased feature-based face processing style.
Affiliation(s)
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Jannes Nys
- Department of Physics and Astronomy, Ghent University, Ghent, Belgium
- IDLab - Department of Computer Science, University of Antwerp - IMEC, Antwerp, Belgium
- Pieter Moors
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Tim Van Wesemael
- Department of Electrical Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Bruno Rossion
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- CNRS, CRAN - UMR 7039, Université de Lorraine, 54000, Nancy, France
- CHRU-Nancy, Service de Neurologie, Université de Lorraine, 54000, Nancy, France
- Milena Dzhelyova
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
|
45
|
Poli F, Serino G, Mars RB, Hunnius S. Infants tailor their attention to maximize learning. SCIENCE ADVANCES 2020; 6:6/39/eabb5053. [PMID: 32967830 PMCID: PMC7531891 DOI: 10.1126/sciadv.abb5053] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Accepted: 08/06/2020] [Indexed: 06/11/2023]
Abstract
Infants' remarkable learning abilities allow them to rapidly acquire many complex skills. It has been suggested that infants achieve this learning by optimally allocating their attention to relevant stimuli in the environment, but the underlying mechanisms remain poorly understood. Here, we modeled infants' looking behavior during a learning task through an ideal learner that quantified the informational structure of environmental stimuli. We show that saccadic latencies, looking time, and time spent engaged with a stimulus sequence are explained by the properties of the learning environments, including the level of surprise of the stimulus, overall predictability of the environment, and progress in learning the environmental structure. These findings reveal the factors that shape infants' advanced learning, emphasizing their predisposition to seek out stimuli that maximize learning.
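The "level of surprise" that such an ideal learner assigns to each stimulus can be illustrated with a simple Dirichlet-categorical model, in which surprise is the negative log predictive probability of the observed stimulus. This is a simplified sketch of the general idea, not the authors' model; the function name and the example sequence are illustrative assumptions.

```python
import numpy as np

def surprise_trace(sequence, n_outcomes, alpha=1.0):
    # Surprise (-log2 predictive probability) of each stimulus under a
    # Dirichlet-categorical learner that updates its counts trial by trial.
    counts = np.full(n_outcomes, alpha)
    surprises = []
    for s in sequence:
        surprises.append(-np.log2(counts[s] / counts.sum()))
        counts[s] += 1.0
    return np.array(surprises)

# Mostly predictable sequence of two stimulus locations with one deviant.
seq = [0, 0, 0, 0, 1, 0, 0, 0]
s = surprise_trace(seq, n_outcomes=2)
```

Under this model, the deviant trial yields a spike in surprise relative to the surrounding predictable trials, which is the kind of quantity the study relates to saccadic latencies and looking time.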
Affiliation(s)
- F Poli
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- G Serino
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- R B Mars
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- S Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
|
46
|
Abstract
Tracking of eye movements is an established measurement for many types of experimental paradigms. More complex and more prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms are lackluster when it comes to data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm—built on an existing velocity-based approach—that is suitable for both static and dynamic stimulation, and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par or better compared to state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented using the Python programming language, and readily available as free and open-source software from public sources.
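The velocity-based core that such classifiers build on can be illustrated in a few lines: compute gaze speed and threshold it to separate saccade from fixation samples. This is only the basic velocity-threshold (I-VT) idea in Python with illustrative parameter values; the algorithm described in the abstract is adaptive and also handles post-saccadic oscillations and smooth pursuit, which this sketch does not.

```python
import numpy as np

def classify_ivt(x, y, fs, vel_threshold=100.0):
    # Label each gaze sample as saccade ("sac") or fixation ("fix")
    # by thresholding point-to-point velocity (deg/s).
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    return np.where(speed > vel_threshold, "sac", "fix")

fs = 500.0                         # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.where(t < 0.5, 0.0, 10.0)   # idealized 10-deg saccade at t = 0.5 s
y = np.zeros_like(t)
labels = classify_ivt(x, y, fs)    # "fix" everywhere except around the step
```

A fixed threshold like this is exactly what breaks down on dynamic stimuli, where smooth pursuit produces intermediate velocities; that limitation motivates the adaptive approach the abstract describes.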
|
47
|
Evaluating three approaches to binary event-level agreement scoring. A reply to Friedman (2020). Behav Res Methods 2020; 53:325-334. [PMID: 32705657 PMCID: PMC7880951 DOI: 10.3758/s13428-020-01425-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
48
|
Auditory stimuli degrade visual performance in virtual reality. Sci Rep 2020; 10:12363. [PMID: 32703981 PMCID: PMC7378072 DOI: 10.1038/s41598-020-69135-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 07/07/2020] [Indexed: 12/01/2022] Open
Abstract
We report an auditory degradation effect on visual performance in a virtual reality (VR) setting, where the viewing conditions differ significantly from those of previous studies. By presenting temporally congruent but spatially incongruent sound, we can significantly degrade visual performance at both the detection and recognition levels. We further show that this effect is robust to different types and locations of both auditory and visual stimuli. We also analyze participants' behavior with an eye tracker to study the underlying cause of the degradation effect. We find that the performance degradation occurs even in the absence of saccades towards the sound source, during normal gaze behavior. This suggests that the effect is not caused by oculomotor phenomena, but rather by neural interactions or attentional shifts.
|
49
|
Hessels RS, van Doorn AJ, Benjamins JS, Holleman GA, Hooge ITC. Task-related gaze control in human crowd navigation. Atten Percept Psychophys 2020; 82:2482-2501. [PMID: 31993979 PMCID: PMC7343766 DOI: 10.3758/s13414-019-01952-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Human crowds provide an interesting case for research on the perception of people. In this study, we investigate how visual information is acquired for (1) navigating human crowds and (2) seeking out social affordances in crowds, by studying gaze behavior during human crowd navigation under different task instructions. Observers (n = 11) wore head-mounted eye-tracking glasses and walked two rounds through hallways containing walking crowds (n = 38) and static objects. For round one, observers were instructed to avoid collisions. For round two, observers furthermore had to indicate with a button press whether oncoming people made eye contact. Task performance (walking speed, absence of collisions) was similar across rounds. Fixation durations indicated that heads, bodies, objects, and walls held gaze for comparably long durations; only crowds in the distance held gaze relatively longer. We find no compelling evidence that human bodies and heads hold one's gaze more than objects while navigating crowds. When eye contact was assessed, heads were fixated more often and for a longer total duration, which came at the cost of looking at bodies. We conclude that gaze behavior in crowd navigation is task-dependent, and that not every fixation is strictly necessary for navigating crowds. When explicitly tasked with seeking out potential social affordances, gaze is modulated as a result. We discuss our findings in the light of current theories and models of gaze behavior. Furthermore, we show that in a head-mounted eye-tracking study, a large degree of experimental control can be maintained while many degrees of freedom on the side of the observer remain.
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Andrea J van Doorn
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands
- Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Gijs A Holleman
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands
|
50
|
Zhao D. A detection method for DC power disturbance data of charging pile based on linear algebra. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-179828] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Di Zhao
- Electronic Information Engineering, College of Information Science and Engineering, Hunan First Normal University, Changsha, China
|