1
Qian K, Arichi T, Edwards AD, Hajnal JV. Instant interaction driven adaptive gaze control interface. Sci Rep 2024; 14:11661. PMID: 38778122; PMCID: PMC11111737; DOI: 10.1038/s41598-024-62365-9.
Abstract
Gaze estimation has long been recognised as having potential as the basis for human-computer interaction (HCI) systems, but usability and robustness of performance remain challenging. This work focuses on systems in which there is a live video stream showing enough of the subject's face to track eye movements, and some means to infer gaze location from detected eye features. Currently, systems generally require some form of calibration or set-up procedure at the start of each user session. Here we explore some simple strategies for enabling gaze-based HCI to operate immediately and robustly without any explicit set-up tasks. We explore different choices of coordinate origin for combining extracted features from multiple subjects, and the replacement of subject-specific calibration by system initiation based on prior models. Results show that referencing all extracted features to local coordinate origins determined by subject start position enables robust immediate operation. Combining this approach with an adaptive gaze estimation model using an interactive user interface enables continuous operation, with 75th percentile gaze errors of 0.7° and maximum gaze errors of 1.7° during prospective testing. These constitute state-of-the-art results and have the potential to enable a new generation of reliable gaze-based HCI systems.
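The abstract's central idea, referencing each subject's eye features to a local origin captured at their start position while an adaptive model refines the gaze mapping online, can be illustrated with a minimal sketch. Everything here is an assumption for illustration only (the class name, the affine feature-to-screen model, and the ridge regulariser), not the authors' implementation:

```python
import numpy as np

class LocalOriginGazeModel:
    """Illustrative sketch: eye features are referenced to a local origin
    captured when the subject first appears, and an affine gaze model is
    refit as interaction samples accumulate (ridge-regularised least squares)."""

    def __init__(self, n_features=2, ridge=1e-3):
        self.origin = None                      # per-subject feature origin
        self.W = np.zeros((n_features + 1, 2))  # affine map: features -> screen (x, y)
        self.ridge = ridge
        self.X, self.Y = [], []                 # accumulated training samples

    def observe(self, features):
        features = np.asarray(features, dtype=float)
        if self.origin is None:
            self.origin = features.copy()       # subject start position defines the origin
        return features - self.origin           # local-coordinate features

    def update(self, features, screen_xy):
        """Add a (feature, known gaze target) pair and refit the model."""
        f = self.observe(features)
        self.X.append(np.append(f, 1.0))        # homogeneous coordinate for the offset term
        self.Y.append(np.asarray(screen_xy, dtype=float))
        X, Y = np.array(self.X), np.array(self.Y)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)

    def predict(self, features):
        f = self.observe(features)
        return np.append(f, 1.0) @ self.W
```

Because all features are expressed relative to the start-position origin, a prior model trained on other subjects can seed `W` and be adapted continuously as the interface gathers new samples.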
Affiliation(s)
- Kun Qian
- King's College London, Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, London, SE1 7EH, UK.
- Tomoki Arichi
- King's College London, Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, London, SE1 7EH, UK.
- A David Edwards
- King's College London, Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, London, SE1 7EH, UK.
- Joseph V Hajnal
- King's College London, Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, London, SE1 7EH, UK.
2
Prystauka Y, Altmann GTM, Rothman J. Online eye tracking and real-time sentence processing: On opportunities and efficacy for capturing psycholinguistic effects of different magnitudes and diversity. Behav Res Methods 2024; 56:3504-3522. PMID: 37528290; PMCID: PMC11133053; DOI: 10.3758/s13428-023-02176-4.
Abstract
Online research methods have the potential to facilitate equitable accessibility to otherwise-expensive research resources, as well as to more diverse populations and language combinations than currently populate our studies. In psycholinguistics specifically, webcam-based eye tracking is emerging as a powerful online tool capable of capturing sentence processing effects in real time. The present paper asks whether webcam-based eye tracking provides the necessary granularity to replicate effects, crucially both large and small, that tracker-based eye tracking has shown. Using the Gorilla Experiment Builder platform, this study set out to replicate two psycholinguistic effects: a robust one, the verb semantic constraint effect, first reported in Altmann and Kamide, Cognition 73(3), 247-264 (1999), and a smaller one, the lexical interference effect, first examined by Kukona et al., Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 326 (2014). Webcam-based eye tracking was able to replicate both effects, thus showing that its functionality is not limited to large effects. Moreover, the paper reports two approaches to computing statistical power and discusses the differences in their outputs. Beyond discussing several important methodological, theoretical, and practical implications, we offer further technical details and advice on how to implement webcam-based eye-tracking studies. We believe that the advent of webcam-based eye tracking, at least in respect of the visual world paradigm, will kickstart a new wave of more diverse studies with more diverse populations.
Affiliation(s)
- Yanina Prystauka
- Department of Language and Culture, UiT The Arctic University of Norway, Tromsø, Norway.
- Gerry T M Altmann
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA.
- Jason Rothman
- Department of Language and Culture, UiT The Arctic University of Norway, Tromsø, Norway.
- Centro de Investigación Nebrija en Cognición (CINC), University Nebrija, Madrid, Spain.
3
Le Cunff AL, Dommett E, Giampietro V. Neurophysiological measures and correlates of cognitive load in attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia: A scoping review and research recommendations. Eur J Neurosci 2024; 59:256-282. PMID: 38109476; DOI: 10.1111/ejn.16201.
Abstract
Working memory is integral to a range of critical cognitive functions such as reasoning and decision-making. Although alterations in working memory have been observed in neurodivergent populations, there has been no review mapping how cognitive load is measured in common neurodevelopmental conditions such as attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia. This scoping review explores the neurophysiological measures used to study cognitive load in these specific populations. Our findings highlight that electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are the most frequently used methods, with a limited number of studies employing functional near-infrared spectroscopy (fNIRS), magnetoencephalography (MEG) or eye tracking. Notably, eye-related measures are less commonly used, despite their prominence in cognitive load research among neurotypical individuals. The review also highlights potential correlates of cognitive load, such as neural oscillations in the theta and alpha ranges for EEG studies, blood oxygenation level-dependent (BOLD) responses in lateral and medial frontal brain regions for fMRI and fNIRS studies, and eye-related measures such as pupil dilation and blink rate. Finally, critical issues for future studies are discussed, including the technical challenges associated with multimodal approaches, the possible impact of atypical features on cognitive load measures, and balancing data richness with participant well-being. These insights contribute to a more nuanced understanding of cognitive load measurement in neurodivergent populations and point to important methodological considerations for future neuroscientific research in this area.
Affiliation(s)
- Anne-Laure Le Cunff
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK.
- Eleanor Dommett
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK.
- Vincent Giampietro
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK.
4
Adhanom IB, MacNeilage P, Folmer E. Eye Tracking in Virtual Reality: a Broad Review of Applications and Challenges. Virtual Reality 2023; 27:1481-1505. PMID: 37621305; PMCID: PMC10449001; DOI: 10.1007/s10055-022-00738-z.
Abstract
Eye tracking is becoming increasingly available in head-mounted virtual reality displays, with several headsets with integrated eye trackers already commercially available. The applications of eye tracking in virtual reality are highly diversified and span multiple disciplines. As a result, the number of peer-reviewed publications that study eye tracking applications has surged in recent years. We performed a broad review, comprehensively searching academic literature databases, with the aim of assessing the extent of published research dealing with applications of eye tracking in virtual reality and highlighting challenges, limitations and areas for future research.
Affiliation(s)
- Paul MacNeilage
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA.
- Eelke Folmer
- University of Nevada Reno, 1664 N Virginia St, Reno, NV 89557, USA.
5
High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods. Vision (Basel) 2021; 5:41. PMID: 34564339; PMCID: PMC8482219; DOI: 10.3390/vision5030041.
Abstract
This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation methods when building a commercial or off-the-shelf eye tracker device, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye feature distribution when the eye-camera is far from the eye's optical axis. This paper proposes geometric transformation methods to reshape the eye feature distribution based on the virtual alignment of the eye-camera with the center of the eye's optical axis. The data analysis uses eye-tracking data from a simulated environment and an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
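For context, the polynomial-based interpolation methods this paper compares against typically fit a low-order polynomial mapping from pupil-centre coordinates to screen coordinates using calibration targets. A minimal sketch follows; the function names and the specific second-order feature expansion are illustrative assumptions, not the paper's method:

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial expansion of pupil-centre coordinates,
    a common choice for interpolation-based gaze mapping."""
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_gaze_map(pupil_xy, screen_xy):
    """Least-squares fit of polynomial coefficients from calibration pairs
    (pupil_xy: N x 2 pupil centres, screen_xy: N x 2 known target positions)."""
    X = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)
    return coeffs

def estimate_gaze(coeffs, pupil_xy):
    """Map new pupil centres to estimated on-screen gaze points."""
    X = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    return X @ coeffs
```

The paper's point is that when the eye-camera sits far off the optical axis, the pupil-feature distribution fed into such a fit is deformed, so reshaping it geometrically before fitting improves the fraction of estimates landing in the high-accuracy range.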
6
Sipatchin A, Wahl S, Rifai K. Eye-Tracking for Clinical Ophthalmology with Virtual Reality (VR): A Case Study of the HTC Vive Pro Eye's Usability. Healthcare (Basel) 2021; 9:180. PMID: 33572072; PMCID: PMC7914806; DOI: 10.3390/healthcare9020180.
Abstract
BACKGROUND: A case study is proposed to empirically test and discuss the status-quo eye-tracking hardware capabilities and limitations of an off-the-shelf virtual reality (VR) headset with embedded eye tracking, for at-home, ready-to-go online usability in ophthalmology applications.
METHODS: The eye-tracking data quality of the HTC Vive Pro Eye is investigated with novel testing specific to objective online VR perimetry. Testing was done across a wide visual field of the head-mounted display's (HMD) screen and in two different movement conditions. A new automatic and low-cost Raspberry Pi system is introduced for VR temporal precision testing, assessing the usability of the HTC Vive Pro Eye as an online assistance tool for visual loss.
RESULTS: The target position on the screen and head movement revealed limitations of the eye-tracker's capabilities as a perimetry assessment tool. Temporal precision testing showed a system latency of 58.1 milliseconds (ms), evidencing good potential for use as a ready-to-go online assistance tool for visual loss.
CONCLUSIONS: The test of eye-tracking data quality provides novel analysis useful for testing upcoming VR headsets with embedded eye tracking, and opens discussion regarding the future introduction of these HMDs into patients' homes for low-vision clinical usability.
Affiliation(s)
- Alexandra Sipatchin
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany.
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany.
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany.
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany.
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany.
7
Gavas RD, Roy S, Chatterjee D, Tripathy SR, Chakravarty K, Sinha A. Enhancing the usability of low-cost eye trackers for rehabilitation applications. PLoS One 2018; 13:e0196348. PMID: 29856798; PMCID: PMC5983534; DOI: 10.1371/journal.pone.0196348.
Abstract
Eye tracking is one of the most widely used techniques for assessment, screening and human-machine interaction related applications. Certain issues limit the usage of eye trackers in practical scenarios, namely (i) the need to perform multiple calibrations and (ii) the presence of inherent noise in the recorded data. To address these issues, we have proposed a protocol for one-time calibration as an alternative to the "regular" or "multiple" calibration phases. Although it is always desirable to perform multiple calibrations, one-time calibration produces comparable results and may be better for individuals who are not able to perform multiple calibrations. One-time calibration can also be performed by a single participant and the calibration results reused for the remaining participants, provided the chin rest and eye tracker positions are unaltered. The second major issue is the presence of inherent noise in the raw gaze data, leading to systematic and variable errors. We have proposed a signal processing chain to remove these two types of errors. Two different psychological stimuli-based tasks, namely a recall-recognition test and a number gazing task, are used as a case study. The proposed approach gives satisfactory results even with one-time calibration. The study is also extended to test the effect of long-duration tasks on the performance of the proposed algorithm, and the results confirm that the proposed methods work well in such scenarios too.
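The two error types the abstract names are commonly handled as follows: systematic error is a roughly constant offset between recorded gaze and known target positions, and variable error is sample-to-sample jitter. A minimal sketch of such a correction chain is shown below; the function names and the specific choices (median offset subtraction, sliding-median smoothing) are illustrative assumptions, not the authors' published pipeline:

```python
import numpy as np

def correct_systematic_error(gaze, target):
    """Systematic error: a roughly constant offset between recorded gaze
    samples and a known fixation target; estimate it robustly with the
    per-axis median and subtract it."""
    offset = np.median(gaze - target, axis=0)
    return gaze - offset

def reduce_variable_error(gaze, window=5):
    """Variable error: sample-to-sample jitter; attenuate it with a sliding
    median over each coordinate (window must be odd)."""
    half = window // 2
    padded = np.pad(gaze, ((half, half), (0, 0)), mode="edge")
    out = np.empty_like(gaze, dtype=float)
    for i in range(len(gaze)):
        out[i] = np.median(padded[i:i + window], axis=0)
    return out
```

A sliding median rather than a mean is a typical choice here because it suppresses isolated spikes (e.g. blink artifacts) without smearing them into neighbouring samples.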
Affiliation(s)
- Rahul Dasharath Gavas
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.
- Sangheeta Roy
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.
- Debatri Chatterjee
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.
- Soumya Ranjan Tripathy
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.
- Kingshuk Chakravarty
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.
- Aniruddha Sinha
- Embedded Systems & Robotics, TCS Research and Innovation, Tata Consultancy Services, Kolkata, India.