1. Banks A, Eldin Abdelaal A, Salcudean S. Head motion-corrected eye gaze tracking with the da Vinci surgical system. Int J Comput Assist Radiol Surg 2024. PMID: 38888820. DOI: 10.1007/s11548-024-03173-4.
Abstract
PURPOSE: To facilitate the integration of point of gaze (POG) as an input modality for robot-assisted surgery, we introduce a robust head-movement-compensated gaze tracking system for the da Vinci Surgical System. Previous surgical eye gaze trackers require multiple recalibrations and lose accuracy when users move from the calibrated position. We investigate whether eye corner detection can reduce gaze estimation error in a robotic surgery context. METHODS: A polynomial regressor first estimates POG after an 8-point calibration; a second regressor then estimates the POG error caused by head movement from the shift in 2D eye corner location. Eye corners are computed by first detecting regions of interest with the You Only Look Once (YOLO) object detector, trained on 1600 annotated eye images (open dataset included). Contours are then extracted from the bounding boxes, and a derivative-based curvature detector refines the eye corner location. RESULTS: In a user study (n = 24), our corner-contingent head compensation algorithm reduced error, in degrees of visual angle, by 1.20° (p = 0.037) for the left eye and 1.26° (p = 0.079) for the right eye compared with the previous gold-standard POG error correction method. In addition, the eye corner pipeline showed a root-mean-squared error of 3.57 (SD = 1.92) pixels in detecting eye corners over 201 annotated frames. CONCLUSION: We introduce an effective method of using eye corners to correct eye gaze estimation, enabling the practical acquisition of POG in robotic surgery.
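As an illustration only (this is not the authors' code), the interpolation-style gaze mapping described in METHODS, a second-order polynomial from 2D eye features to screen coordinates fitted over an 8-point calibration, can be sketched in Python:

```python
import numpy as np

def fit_polynomial_gaze(features, targets):
    """Least-squares fit of a second-order polynomial mapping
    2D eye features (x, y) to 2D screen coordinates."""
    x, y = features[:, 0], features[:, 1]
    # Design matrix with terms [1, x, y, xy, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def predict_pog(coeffs, features):
    """Apply fitted coefficients to new eye features to estimate POG."""
    x, y = features[:, 0], features[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs
```

With eight calibration targets and six polynomial coefficients per screen axis, the fit is overdetermined; the head-movement correction stage described in the abstract would then apply a second regressor to the eye-corner shift, which is omitted here.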
Affiliation(s)
- Alexandre Banks
- Electrical and Computer Engineering Department, University of British Columbia, Main Mall, Vancouver, BC, V6T 1Z4, Canada.
- Alaa Eldin Abdelaal
- Electrical and Computer Engineering Department, University of British Columbia, Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Mechanical Engineering Department, Stanford University, Escondido Mall, Stanford, CA, 94305, USA
- Septimiu Salcudean
- Electrical and Computer Engineering Department, University of British Columbia, Main Mall, Vancouver, BC, V6T 1Z4, Canada
2. Scharinger C. Task-irrelevant decorative pictures increase cognitive load during text processing but have no effects on learning or working memory performance: an EEG and eye-tracking study. Psychol Res 2024; 88:1362-1388. PMID: 38502229. PMCID: PMC11142986. DOI: 10.1007/s00426-024-01939-8.
Abstract
Decorative pictures (DP) are often used in multimedia task materials and are commonly considered seductive details, as they are usually not task-relevant. Typically, DP yield mixed effects on behavioral performance measures. The current study focused on the effects of DP on cognitive load during text reading and working memory task performance. Theta and alpha frequency band power of the electroencephalogram (EEG) and pupil dilation served as proxies of cognitive load; the number of fixations, mean fixation durations, and the number of transitions served as proxies of attentional focus. For both text reading and n-back working memory tasks, the presence and congruency of DP were manipulated in four task conditions. DP affected neither behavioral performance nor subjective ratings of emotional-motivational factors. However, in both tasks, DP increased cognitive load as revealed by EEG alpha frequency band power and, at least to some extent, by subjective effort ratings. Notably, the EEG alpha frequency band power was a reliable and sensitive proxy of cognitive load. Analyzing the EEG data both stimulus-locked and fixation-related, the alpha frequency band power revealed a difference between global and local cognitive load. In sum, the current study underlines the feasibility and usefulness of EEG for multimedia research, especially when combined with eye tracking.
Affiliation(s)
- Christian Scharinger
- Leibniz-Institut für Wissensmedien Tübingen, Schleichstr. 6, 72076, Tübingen, Germany.
3. Alben N, Arthur C. Pupil dilation as a function of pitch discrimination difficulty: A replication of Kahneman and Beatty, 1967. Atten Percept Psychophys 2024; 86:1435-1444. PMID: 37684499. DOI: 10.3758/s13414-023-02765-7.
Abstract
In the present paper, we carry out a replication of a seminal paper by Kahneman and Beatty (1967, Perception & Psychophysics, 2(3), 101-105) on using pupillometry as an implicit measure of auditory processing load, specifically non-verbal auditory processing. While numerous papers since have supported the notion that pupillometry is a fairly reliable index of processing load in general (Zekveld, Koelewijn, & Kramer, 2018, Trends in Hearing, 22, 1-25; Winn, Wendt, Koelewijn, & Kuchinsky, 2018, Trends in Hearing, 22, 1-32), they have typically relied on memory recall and/or more sophisticated cognitive tasks such as language comprehension or split attention. Kahneman and Beatty's paper, despite being published more than 50 years ago, continues to be the primary citation for the claim that pupillometry is a reliable index of task difficulty in a simple non-verbal pitch discrimination task, thereby giving us an implicit measure of listening effort (e.g., Kramer et al., 2013, Language and Cognitive Processes, 28(4), 426-442; Schlemmer et al., 2005, Psychophysiology, 42(4), 465-472; Lisi, Bonato, & Zorzi, 2015, Biological Psychology, 112, 39-45). This type of task requires very little explicit memory, is non-verbal, and relies heavily on low-level, automatic perceptual processing. Using two replication studies, one exact and one modified, we replicated the main result only in the modified replication; the exact replication failed on all nine statistical tests. Overall, our findings suggest that pupil dilation can be used as an implicit measure of task difficulty for a simple, non-semantic auditory task; however, the effect appears considerably weaker than in the original study, and the variation across participants much greater.
Affiliation(s)
- Noel Alben
- Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Claire Arthur
- Georgia Institute of Technology, Atlanta, GA, 30332, USA.
4. Gundler C, Temmen M, Gulberti A, Pötter-Nerger M, Ückert F. Improving Eye-Tracking Data Quality: A Framework for Reproducible Evaluation of Detection Algorithms. Sensors (Basel) 2024; 24:2688. PMID: 38732794. PMCID: PMC11085612. DOI: 10.3390/s24092688.
Abstract
High-quality eye-tracking data are crucial in behavioral sciences and medicine. Even with a solid understanding of the literature, selecting the most suitable algorithm for a specific research project poses a challenge. Empowering applied researchers to choose the best-fitting detector for their research needs is the primary contribution of this paper. We developed a framework to systematically assess and compare the effectiveness of 13 state-of-the-art algorithms through a unified application interface. Hence, we more than double the number of algorithms that are currently usable within a single software package and allow researchers to identify the best-suited algorithm for a given scientific setup. Our framework validation on retrospective data underscores its suitability for algorithm selection. Through a detailed and reproducible step-by-step workflow, we hope to contribute towards significantly improved data quality in scientific experiments.
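A unified application interface of this kind is easy to picture in code. The names below are hypothetical, not the framework's actual API; they merely sketch how heterogeneous event detectors can be made interchangeable behind a single `detect` method, here with a toy velocity-threshold (I-VT-style) detector:

```python
from dataclasses import dataclass
from typing import Protocol
import numpy as np

@dataclass
class GazeEvent:
    kind: str      # e.g. "fixation" or "saccade"
    onset: int     # sample index where the event starts
    offset: int    # sample index where the event ends (exclusive)

class EventDetector(Protocol):
    """Common interface: any detector maps (t, x, y) samples to events."""
    def detect(self, t, x, y): ...

class VelocityThresholdDetector:
    """I-VT-style detector: samples whose gaze velocity exceeds a
    threshold are labelled saccade, the rest fixation."""
    def __init__(self, threshold):
        self.threshold = threshold  # velocity threshold, units of x,y per unit t

    def detect(self, t, x, y):
        v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        is_sacc = np.concatenate([[False], v > self.threshold])
        events, start = [], 0
        # Group consecutive samples with the same label into events
        for i in range(1, len(is_sacc) + 1):
            if i == len(is_sacc) or is_sacc[i] != is_sacc[start]:
                kind = "saccade" if is_sacc[start] else "fixation"
                events.append(GazeEvent(kind, start, i))
                start = i
        return events
```

Because every detector returns the same event records, downstream evaluation code can score all thirteen algorithms without caring which one produced the output.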
Affiliation(s)
- Christopher Gundler
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Alessandro Gulberti
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Monika Pötter-Nerger
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
5. Wimmer M, Weidinger N, Veas E, Müller-Putz GR. Multimodal decoding of error processing in a virtual reality flight simulation. Sci Rep 2024; 14:9221. PMID: 38649681. PMCID: PMC11035577. DOI: 10.1038/s41598-024-59278-y.
Abstract
Technological advances in head-mounted displays (HMDs) facilitate the acquisition of physiological data of the user, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated if multimodal physiological data can be used to decode error processing, which has been studied, to date, with brain signals only. We examined the feasibility of decoding errors solely with pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed if hybrid approaches can improve existing EEG-based classification approaches and focused on setups that offer increased usability for practical applications, such as the presented game-like virtual reality flight simulation. Our results indicate that classifiers trained with pupil size data can decode errors above chance. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
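As a schematic of the fusion idea only (this is not the study's actual pipeline, and the names are illustrative), a hybrid decoder can concatenate per-trial EEG and pupil features before classification:

```python
import numpy as np

def hybrid_features(eeg_epochs, pupil_epochs):
    """Concatenate flattened EEG epochs (trials x channels x samples)
    with pupil-size epochs (trials x samples) into one feature matrix."""
    eeg_flat = eeg_epochs.reshape(len(eeg_epochs), -1)
    return np.hstack([eeg_flat, pupil_epochs])

class NearestCentroidDecoder:
    """Toy stand-in for a trained classifier: z-score features, then
    assign each trial to the nearest class mean."""
    def fit(self, X, y):
        self.mu, self.sd = X.mean(axis=0), X.std(axis=0) + 1e-12
        Z = (X - self.mu) / self.sd
        self.classes = np.unique(y)
        self.centroids = np.stack([Z[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        Z = (X - self.mu) / self.sd
        # Squared Euclidean distance of each trial to each class centroid
        d = ((Z[:, None, :] - self.centroids[None]) ** 2).sum(axis=2)
        return self.classes[np.argmin(d, axis=1)]
```

The point of the sketch is the feature-level fusion: when EEG channels are scarce, the pupil columns can carry discriminative information the EEG columns lack, which is the regime in which the abstract reports hybrid gains.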
Affiliation(s)
- Michael Wimmer
- Know-Center GmbH, Graz, Austria
- Institute of Neural Engineering, Graz University of Technology, Graz, Austria
- Eduardo Veas
- Know-Center GmbH, Graz, Austria
- Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria
- Gernot R Müller-Putz
- Institute of Neural Engineering, Graz University of Technology, Graz, Austria.
- BioTechMed-Graz, Graz, Austria.
6. Le Cunff AL, Dommett E, Giampietro V. Neurophysiological measures and correlates of cognitive load in attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia: A scoping review and research recommendations. Eur J Neurosci 2024; 59:256-282. PMID: 38109476. DOI: 10.1111/ejn.16201.
Abstract
Working memory is integral to a range of critical cognitive functions such as reasoning and decision-making. Although alterations in working memory have been observed in neurodivergent populations, there has been no review mapping how cognitive load is measured in common neurodevelopmental conditions such as attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and dyslexia. This scoping review explores the neurophysiological measures used to study cognitive load in these specific populations. Our findings highlight that electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are the most frequently used methods, with a limited number of studies employing functional near-infrared spectroscopy (fNIRS), magnetoencephalography (MEG) or eye-tracking. Notably, eye-related measures are less commonly used, despite their prominence in cognitive load research among neurotypical individuals. The review also highlights potential correlates of cognitive load, such as neural oscillations in the theta and alpha ranges for EEG studies, blood oxygenation level-dependent (BOLD) responses in lateral and medial frontal brain regions for fMRI and fNIRS studies, and eye-related measures such as pupil dilation and blink rate. Finally, critical issues for future studies are discussed, including the technical challenges associated with multimodal approaches, the possible impact of atypical features on cognitive load measures, and balancing data richness with participant well-being. These insights contribute to a more nuanced understanding of cognitive load measurement in neurodivergent populations and point to important methodological considerations for future neuroscientific research in this area.
Affiliation(s)
- Anne-Laure Le Cunff
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Eleanor Dommett
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Vincent Giampietro
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
7. Velisar A, Shanidze NM. Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles. Behav Res Methods 2024; 56:53-79. PMID: 37369939. PMCID: PMC11062346. DOI: 10.3758/s13428-023-02150-0.
Abstract
Head-mounted, video-based eye tracking is becoming increasingly common and has promise in a range of applications. Here, we provide a practical and systematic assessment of the sources of measurement uncertainty for one such device - the Pupil Core - in three eye-tracking domains: (1) the 2D scene camera image; (2) the physical rotation of the eye relative to the scene camera 3D space; and (3) the external projection of the estimated gaze point location onto the target plane or in relation to world coordinates. We also assess eye camera motion during active tasks relative to the eye and the scene camera, an important consideration as the rigid arrangement of eye and scene camera is essential for proper alignment of the detected gaze. We find that eye camera motion, improper gaze point depth estimation, and erroneous eye models can all lead to added noise that must be considered in the experimental design. Further, while calibration accuracy and precision estimates can help assess data quality in the scene camera image, they may not be reflective of errors and variability in gaze point estimation. These findings support the importance of eye model constancy for comparisons across experimental conditions and suggest additional assessments of data reliability may be warranted for experiments that require the gaze point or measure eye movements relative to the external world.
Affiliation(s)
- Anca Velisar
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA.
- Natela M Shanidze
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
8. Lotze A, Love K, Velisar A, Shanidze NM. A low-cost robotic oculomotor simulator for assessing eye tracking accuracy in health and disease. Behav Res Methods 2024; 56:80-92. PMID: 35948762. PMCID: PMC9911554. DOI: 10.3758/s13428-022-01938-w.
Abstract
Eye tracking accuracy is affected in individuals with vision and oculomotor deficits, impeding our ability to answer important scientific and clinical questions about these disorders. It is difficult to disambiguate decreases in eye movement accuracy from changes in the accuracy of the eye tracking itself. We propose the EyeRobot, a low-cost robotic oculomotor simulator capable of emulating healthy and compromised eye movements, to provide ground-truth assessment of eye tracker performance and of how different aspects of oculomotor deficits might affect tracking accuracy and performance. The device can operate with eccentric optical axes or large deviations between the eyes, and can simulate oculomotor pathologies such as large fixational instabilities. We find that our design provides accurate eye movements for both central and eccentric viewing conditions, which can be tracked using a head-mounted eye tracker (Pupil Core). As proof of concept, we examine the effects of eccentric fixation on calibration accuracy and find that Pupil Core's existing eye tracking algorithm is robust to large fixation offsets. In addition, we demonstrate that the EyeRobot can simulate realistic eye movements, such as saccades and smooth pursuit, that can be tracked using video-based eye tracking. These tests suggest that the EyeRobot, an easy-to-build and flexible tool, can aid eye tracking validation and future algorithm development in healthy and compromised vision.
Affiliation(s)
- Al Lotze
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Anca Velisar
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Natela M Shanidze
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA.
9. Fischer J, van der Merwe J, Vandenheever D. The influence of eye model parameter variations on simulated eye-tracking outcomes. J Eye Mov Res 2023. PMID: 38116296. PMCID: PMC10730094. DOI: 10.16910/jemr.16.3.1.
Abstract
The simulated data used in eye-tracking research have been largely generated using normative eye models, with little consideration of how the variations in eye biometry found in the population may influence eye-tracking outcomes. This study investigated the influence that variations in eye model parameters have on the ability of simulated data to predict real-world eye-tracking outcomes. The real-world experiments performed by two pertinent comparative studies were replicated in a simulated environment using a high-complexity stochastic eye model that includes anatomically accurate distributions of eye biometry parameters. The outcomes showed that variations in anterior corneal asphericity significantly influence simulated eye-tracking outcomes of both interpolation and model-based gaze estimation algorithms. Other, more commonly varied parameters, such as the corneal radius of curvature and foveal offset angle, had little influence on simulated outcomes.
10. Servais A, Préa N, Hurter C, Barbeau EJ. Why and when do you look away when trying to remember? Gaze aversion as a marker of the attentional switch to the internal world during memory retrieval. Acta Psychol (Amst) 2023; 240:104041. PMID: 37774488. DOI: 10.1016/j.actpsy.2023.104041.
Abstract
It is common to look away while trying to remember specific information, for example during autobiographical memory retrieval, a behavior referred to as gaze aversion. Given the competition between internal and external attention, gaze aversion is assumed to play a role in visual decoupling, i.e., suppressing environmental distractors during internal tasks. This suggests a link between gaze aversion and the attentional switch from the outside world to a temporary internal mental space that takes place during the initial stage of memory retrieval, but this assumption had never been verified. We designed a protocol in which 33 participants answered 48 autobiographical questions while their eye movements were recorded with an eye-tracker and a camcorder. Results indicated that gaze aversion occurred early (median 1.09 s) and predominantly during the access phase of memory retrieval, i.e., the moment when the attentional switch is assumed to take place. In addition, gaze aversion lasted a relatively long time (6 s on average) and was notably decoupled from concurrent head movements. These results support a role for gaze aversion in perceptual decoupling. Gaze aversion was also related to higher retrieval effort and was rare for memories that came spontaneously to mind. This suggests that gaze aversion might be required only when cognitive effort is needed to switch attention toward the internal world to help retrieve hard-to-access memories. Compared with eye vergence, another visual decoupling strategy, the association with the attentional switch seemed specific to gaze aversion. Our results provide, for the first time, several arguments supporting the hypothesis that gaze aversion is related to the attentional switch from the outside world to memory.
Affiliation(s)
- Anaïs Servais
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France; National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France.
- Noémie Préa
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France
- Christophe Hurter
- National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France.
- Emmanuel J Barbeau
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France.
11. Mathôt S, Vilotijević A. Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis. Behav Res Methods 2023; 55:3055-3077. PMID: 36028608. PMCID: PMC10556184. DOI: 10.3758/s13428-022-01957-7.
Abstract
Cognitive pupillometry is the measurement of pupil size to investigate cognitive processes such as attention, mental effort, working memory, and many others. Currently, there is no commonly agreed-upon methodology for conducting cognitive-pupillometry experiments, and approaches vary widely between research groups and even between different experiments from the same group. This lack of consensus makes it difficult to know which factors to consider when conducting a cognitive-pupillometry experiment. Here we provide a comprehensive, hands-on guide to methods in cognitive pupillometry, with a focus on trial-based experiments in which the measure of interest is the task-evoked pupil response to a stimulus. We cover all methodological aspects of cognitive pupillometry: experimental design, preprocessing of pupil-size data, and statistical techniques to deal with multiple comparisons when testing pupil-size data. In addition, we provide code and toolboxes (in Python) for preprocessing and statistical analysis, and we illustrate all aspects of the proposed workflow through an example experiment and example scripts.
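Two of the preprocessing steps such a workflow covers, blink interpolation and subtractive baseline correction, can be sketched as follows (a minimal illustration, not the authors' toolbox):

```python
import numpy as np

def interpolate_blinks(pupil, invalid=0.0):
    """Replace samples flagged as invalid (e.g., recorded as 0 during
    blinks) by linear interpolation from surrounding valid samples."""
    pupil = np.asarray(pupil, dtype=float).copy()
    bad = pupil == invalid
    idx = np.arange(len(pupil))
    pupil[bad] = np.interp(idx[bad], idx[~bad], pupil[~bad])
    return pupil

def baseline_correct(pupil, n_baseline):
    """Subtractive baseline correction: subtract the mean pupil size
    over the first n_baseline (pre-stimulus) samples of the trial."""
    return pupil - np.mean(pupil[:n_baseline])
```

Real pipelines additionally need blink-edge padding, trial exclusion criteria, and downsampling before statistical testing; choices like these are exactly what a standardized workflow pins down.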
Affiliation(s)
- Sebastiaan Mathôt
- Department of Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712TS, Groningen, The Netherlands.
- Ana Vilotijević
- Department of Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712TS, Groningen, The Netherlands
12. Wimmer M, Weidinger N, Veas E, Müller-Putz GR. Neural and Pupillometric Correlates of Error Perception in an Immersive VR Flight Simulation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083691. DOI: 10.1109/embc40787.2023.10340376.
Abstract
Algorithms detecting erroneous events, as used in brain-computer interfaces, usually rely solely on neural correlates of error perception. The increasing availability of wearable displays with built-in pupillometric sensors enables access to additional physiological data, potentially improving error detection. Hence, we measured both electroencephalographic (EEG) and pupillometric signals of 19 participants while performing a navigation task in an immersive virtual reality (VR) setting. We found EEG and pupillometric correlates of error perception and significant differences between distinct error types. Further, we found that actively performing tasks delays error perception. We believe that the results of this work could contribute to improving error detection, which has rarely been studied in the context of immersive VR.
13. Cheng KY, Rehani M, Hebert JS. A scoping review of eye tracking metrics used to assess visuomotor behaviours of upper limb prosthesis users. J Neuroeng Rehabil 2023; 20:49. PMID: 37095489. PMCID: PMC10127019. DOI: 10.1186/s12984-023-01180-1.
Abstract
Advanced upper limb prostheses aim to restore coordinated hand and arm function. However, this objective can be difficult to quantify as coordinated movements require an intact visuomotor system. Eye tracking has recently been applied to study the visuomotor behaviours of upper limb prosthesis users by enabling the calculation of eye movement metrics. This scoping review aims to characterize the visuomotor behaviours of upper limb prosthesis users as described by eye tracking metrics, to summarize the eye tracking metrics used to describe prosthetic behaviour, and to identify gaps in the literature and potential areas for future research. A review of the literature was performed to identify articles that reported eye tracking metrics to evaluate the visual behaviours of individuals using an upper limb prosthesis. Data on the level of amputation, type of prosthetic device, type of eye tracker, primary eye metrics, secondary outcome metrics, experimental task, aims, and key findings were extracted. Seventeen studies were included in this scoping review. A consistently reported finding is that prosthesis users have a characteristic visuomotor behaviour that differs from that of individuals with intact arm function. Visual attention has been reported to be directed more towards the hand and less towards the target during object manipulation tasks. A gaze switching strategy and delay to disengage gaze from the current target has also been reported. Differences in the type of prosthetic device and experimental task have revealed some distinct gaze behaviours. Control factors have been shown to be related to gaze behaviour, while sensory feedback and training interventions have been demonstrated to reduce the visual attention associated with prosthesis use. Eye tracking metrics have also been used to assess the cognitive load and sense of agency of prosthesis users. 
Overall, there is evidence that eye tracking is an effective tool to quantitatively assess the visuomotor behaviour of prosthesis users and the recorded eye metrics are sensitive to change in response to various factors. Additional studies are needed to validate the eye metrics used to assess cognitive load and sense of agency in upper limb prosthesis users.
Affiliation(s)
- Kodi Y Cheng
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Mayank Rehani
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada.
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada.
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada.
14. Martin JT, Pinto J, Bulte D, Spitschan M. PyPlr: A versatile, integrated system of hardware and software for researching the human pupillary light reflex. Behav Res Methods 2022; 54:2720-2739. PMID: 34918229. PMCID: PMC9729317. DOI: 10.3758/s13428-021-01759-3.
Abstract
We introduce PyPlr, a versatile, integrated system of hardware and software to support a broad spectrum of research applications concerning the human pupillary light reflex (PLR). PyPlr is a custom Python library for integrating a research-grade video-based eye-tracker system with a light source and for streamlining stimulus design, optimisation and delivery, device synchronisation, and the extraction, cleaning, and analysis of pupil data. We additionally describe how full-field, homogeneous stimulation of the retina can be realised with a low-cost integrating sphere that serves as an alternative to a more complex Maxwellian-view setup. Users can integrate their own light source, but we provide full native software support for a high-end, commercial research-grade 10-primary light engine that offers advanced control over the temporal and spectral properties of light stimuli, as well as spectral calibration utilities. Here, we describe the hardware and software in detail and demonstrate their capabilities with two example applications: (1) pupillometer-style measurement and parametrisation of the PLR in response to flashes of white light, and (2) comparison of the post-illumination pupil response (PIPR) to flashes of long- and short-wavelength light. The system holds promise for researchers who would favour a flexible approach to studying the PLR and the ability to employ a wide range of temporally and spectrally varying stimuli, including simple narrowband stimuli.
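As a rough illustration of what parametrisation of the PLR involves (a hypothetical function, not PyPlr's actual API), simple metrics can be extracted from a pupil trace around light onset:

```python
import numpy as np

def plr_parameters(t, pupil, onset):
    """Extract simple PLR metrics from a pupil-size trace:
    pre-stimulus baseline, peak constriction amplitude, and
    time from light onset to peak constriction."""
    t, pupil = np.asarray(t), np.asarray(pupil)
    baseline = pupil[t < onset].mean()
    post_t, post_p = t[t >= onset], pupil[t >= onset]
    trough = np.argmin(post_p)  # index of maximum constriction
    return {
        "baseline": baseline,
        "constriction_amplitude": baseline - post_p[trough],
        "time_to_peak": post_t[trough] - onset,
    }
```

Fuller parametrisations also report constriction latency, velocity, and redilation measures such as the PIPR, but each reduces to simple arithmetic on the cleaned trace in the same way.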
Affiliation(s)
- Joel T Martin: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, OX3 7DQ, UK
- Joana Pinto: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, OX3 7DQ, UK
- Daniel Bulte: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, OX3 7DQ, UK
- Manuel Spitschan: Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
15
Ban S, Lee YJ, Kim KR, Kim JH, Yeo WH. Advances in Materials, Sensors, and Integrated Systems for Monitoring Eye Movements. Biosensors 2022;12:1039. PMID: 36421157; PMCID: PMC9688058; DOI: 10.3390/bios12111039.
Abstract
Eye movements are primary responses that reflect a person's voluntary intention and conscious selection. Because visual perception is one of the brain's fundamental sensory interactions, eye movements carry critical information about physical and psychological health, perception, intention, and preference. Advances in wearable device technologies have significantly improved the performance of eye-movement monitoring, enabling myriad applications that assist and augment human activities. Among these approaches, electrooculograms (EOG), measured with skin-mounted electrodes, are widely used to track eye motion accurately. Eye trackers that detect reflected optical signals offer an alternative that requires no wearable sensors. This paper presents a systematic summary of the latest research on materials, sensors, and integrated systems for monitoring eye movements and enabling human-machine interfaces. Specifically, we summarize recent developments in soft materials, biocompatible materials, manufacturing methods, sensor functions, system performance, and their applications in eye tracking. Finally, we discuss the remaining challenges and suggest directions for future research.
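The EOG-based tracking the abstract describes rests on the fact that the corneo-retinal potential picked up by skin electrodes varies roughly linearly with horizontal eye rotation over about ±30°. A minimal sketch of that calibration step, assuming a simple least-squares line fit (the function names, the 14 µV/deg gain, and the synthetic fixation data are illustrative assumptions, not from any specific system in the review):

```python
import numpy as np

def fit_eog_calibration(voltages_uV, angles_deg):
    """Least-squares linear fit mapping EOG voltage to horizontal gaze angle.

    EOG amplitude is roughly proportional to horizontal eye rotation
    (on the order of 10-20 uV per degree) over about +/-30 degrees,
    so a straight line is an adequate model in that range.
    """
    slope, offset = np.polyfit(voltages_uV, angles_deg, 1)
    return slope, offset

def eog_to_angle(voltages_uV, slope, offset):
    """Convert measured EOG voltages to gaze angles in degrees."""
    return slope * np.asarray(voltages_uV) + offset

# Calibration: the subject fixates targets at known angles while EOG is recorded.
cal_angles = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
cal_volts = 14.0 * cal_angles + 3.0          # synthetic: 14 uV/deg gain, 3 uV drift
slope, offset = fit_eog_calibration(cal_volts, cal_angles)
gaze = eog_to_angle([213.0], slope, offset)  # recovers the 15-degree fixation
```

In practice the offset drifts over time (electrode polarisation, skin potential), which is why EOG systems recalibrate or high-pass filter the signal; the linear model itself stays the same.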
Affiliation(s)
- Seunghyeb Ban: School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA; IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Yoon Jae Lee: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ka Ram Kim: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Jong-Hoon Kim: School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Woon-Hong Yeo: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30332, USA; Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA