1. Bischof WF, Anderson NC, Kingstone A. A tutorial: Analyzing eye and head movements in virtual reality. Behav Res Methods 2024;56:8396-8421. [PMID: 39117987] [DOI: 10.3758/s13428-024-02482-5]
Abstract
This tutorial provides instruction on how to use the eye tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated in the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research, as well as identifying some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data both in general and with regard to the specific data that we collected to illustrate different analytical approaches. We continue with an introduction of the foundational ideas regarding data analysis in VR, including frames of reference, how to map eye and head position, and event detection. In the next part, we introduce core head and eye data analyses focusing on determining where the head and eyes are directed. We then expand on what has been presented, introducing several novel spatial, spatio-temporal, and temporal head-eye data analysis techniques. We conclude with a reflection on what has been presented, and how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.
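As an illustration of the frames-of-reference step this tutorial covers, the sketch below combines a head orientation with an eye-in-head gaze direction to obtain a world-referenced gaze direction. It is a minimal sketch, not code from the tutorial, and it assumes a y-up, z-forward coordinate frame and a headset that reports head orientation as an (x, y, z, w) quaternion.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_to_world(head_quat_xyzw, gaze_dir_head):
    """Rotate an eye-in-head gaze direction into world coordinates and return
    the world direction plus spherical angles for omnidirectional gaze maps."""
    d = np.asarray(gaze_dir_head, dtype=float)
    d /= np.linalg.norm(d)
    world_dir = R.from_quat(head_quat_xyzw).apply(d)
    azimuth = np.degrees(np.arctan2(world_dir[0], world_dir[2]))   # left/right
    elevation = np.degrees(np.arcsin(world_dir[1]))                # up/down
    return world_dir, azimuth, elevation

# Head turned 90 deg to the left, eyes straight ahead in the head frame:
head_quat = R.from_euler("y", 90, degrees=True).as_quat()
print(gaze_to_world(head_quat, [0.0, 0.0, 1.0]))
```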
Affiliation(s)
- Walter F Bischof
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada.
- Nicola C Anderson
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada

2. Park H, Oh T, Kim I. Effects of driver's braking behavior by the real-time pedestrian scale warning system. Accid Anal Prev 2024;205:107685. [PMID: 38897140] [DOI: 10.1016/j.aap.2024.107685]
Abstract
A driver warning system can improve pedestrian safety by providing drivers with alerts about potential hazards. Most driver warning systems have primarily focused on detecting the presence of pedestrians, without considering other factors that can affect driver braking behavior, such as the pedestrian's gender and speed, and whether pedestrians are carrying luggage. Therefore, this study aims to investigate how driver braking behavior changes based on information about the number of pedestrians in a crowd and to examine whether a warning system built on this information can induce safe braking behavior. For this purpose, an experiment was conducted using a virtual reality-based driving simulator and an eye tracker. The collected driver data were analyzed using mixed ANOVA to derive meaningful conclusions. The research findings indicate that providing information about the number of pedestrians in a crowd has a positive impact on driver braking behavior, including deceleration, yielding intention, and attention. In particular, it was found that in scenarios with a larger number of pedestrians, the Time to Collision (TTC) and distance to the crosswalk were increased by 12%, and the pupil diameter was increased by 9%. This research also verified the applicability of the proposed warning system in complex road environments, especially under conditions with poor visibility such as nighttime. The system was able to induce safe braking behavior even at night and exhibited consistent performance regardless of gender. In conclusion, considering various factors that influence driver behavior, this research provides a comprehensive understanding of the potential and effectiveness of a driver warning system based on information about the number of pedestrians in a crowd in complex road environments.
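As a reading aid for the 12% figure above, Time to Collision in this kind of simulator analysis is typically the remaining gap divided by the closing speed; the sketch below is a hypothetical illustration, not the authors' analysis code.

```python
def time_to_collision(distance_to_crosswalk_m, approach_speed_mps):
    """Time to Collision (s): remaining distance divided by current closing speed."""
    return distance_to_crosswalk_m / approach_speed_mps

# Example: 30 m from the crosswalk at 12 m/s gives TTC = 2.5 s; a 12% longer TTC
# at the same speed corresponds to braking roughly 3.6 m earlier.
print(time_to_collision(30.0, 12.0))
```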
Affiliation(s)
- Hyunchul Park
- The Cho Chun Shik Graduate School, Korea Advanced Institute of Science and Technology, South Korea.
- Taeho Oh
- The Cho Chun Shik Graduate School, Korea Advanced Institute of Science and Technology, South Korea; School of Transportation, Southeast University, Nanjing, Jiangsu 210096, China.
- Inhi Kim
- The Cho Chun Shik Graduate School, Korea Advanced Institute of Science and Technology, South Korea.

3. Upasani S, Srinivasan D, Zhu Q, Du J, Leonessa A. Eye-Tracking in Physical Human-Robot Interaction: Mental Workload and Performance Prediction. Hum Factors 2024;66:2104-2119. [PMID: 37793896] [DOI: 10.1177/00187208231204704]
Abstract
BACKGROUND In Physical Human-Robot Interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning. OBJECTIVE The aim of this study was to test eye-tracking measures' sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation. METHODS Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated. RESULTS Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy. CONCLUSION The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations, and should be explored further for the pHRI domain where motor-control and internal-model formation will likely be strong contributors to workload. APPLICATION Future collaborative robots can adapt to human cognitive state and skill level, measured using eye-tracking measures of workload and visual attention.
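Stationary Gaze Entropy, highlighted above as one of the most sensitive workload measures, is commonly computed as the Shannon entropy of the fixation-location distribution over spatial bins. The sketch below follows that common definition; the bin count and normalization are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def stationary_gaze_entropy(fix_x, fix_y, n_bins=8):
    """Shannon entropy (bits) of the fixation-location distribution.

    fix_x, fix_y : fixation coordinates, here assumed normalized to [0, 1]
    n_bins       : number of bins per axis used to discretize the display
    """
    hist, _, _ = np.histogram2d(fix_x, fix_y, bins=n_bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

# Tightly clustered gaze -> low entropy; gaze spread over the scene -> high entropy.
rng = np.random.default_rng(0)
print(stationary_gaze_entropy(rng.uniform(0.4, 0.6, 200), rng.uniform(0.4, 0.6, 200)))
print(stationary_gaze_entropy(rng.uniform(0.0, 1.0, 200), rng.uniform(0.0, 1.0, 200)))
```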
Affiliation(s)
- Qi Zhu
- National Institute of Standards and Technology, Boulder, CO, USA
- Jing Du
- University of Florida, Gainesville, FL, USA

4. Zhu L, Chen J, Yang H, Zhou X, Gao Q, Loureiro R, Gao S, Zhao H. Wearable Near-Eye Tracking Technologies for Health: A Review. Bioengineering (Basel) 2024;11:738. [PMID: 39061820] [PMCID: PMC11273595] [DOI: 10.3390/bioengineering11070738]
Abstract
With the rapid advancement of computer vision, machine learning, and consumer electronics, eye tracking has emerged as a topic of increasing interest in recent years. It plays a key role across diverse domains including human-computer interaction, virtual reality, and clinical and healthcare applications. Near-eye tracking (NET) has recently been developed to possess encouraging features such as wearability, affordability, and interactivity. These features have drawn considerable attention in the health domain, as NET provides accessible solutions for long-term and continuous health monitoring and a comfortable and interactive user interface. This work offers an inaugural concise review of NET for health, encompassing approximately 70 related articles published over the past two decades and supplemented by an in-depth examination of 30 publications from the preceding five years. This paper provides a concise analysis of health-related NET technologies in terms of technical specifications, data-processing workflows, and practical advantages and limitations. In addition, the specific applications of NET are introduced and compared, revealing that NET is increasingly influencing our lives and providing significant convenience in daily routines. Lastly, we summarize the current outcomes of NET and highlight the limitations.
Affiliation(s)
- Lisen Zhu
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)
- Jianan Chen
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)
- Huixin Yang
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China;
- Xinkai Zhou
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)
- Qihang Gao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China;
- Rui Loureiro
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)
- Shuo Gao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China;
- Hubin Zhao
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK; (L.Z.); (J.C.); (H.Y.); (X.Z.); (R.L.)

5. Hanke LI, Vradelis L, Boedecker C, Griesinger J, Demare T, Lindemann NR, Huettl F, Chheang V, Saalfeld P, Wachter N, Wollstädter J, Spranz M, Lang H, Hansen C, Huber T. Immersive virtual reality for interdisciplinary trauma management - initial evaluation of a training tool prototype. BMC Med Educ 2024;24:769. [PMID: 39026193] [PMCID: PMC11264734] [DOI: 10.1186/s12909-024-05764-w]
Abstract
INTRODUCTION Emergency care of critically ill patients in the trauma room is an integral part of interdisciplinary work in hospitals. Life-threatening injuries require swift diagnosis, prioritization, and treatment; thus, different medical specialties need to work together closely for optimal patient care. Training is essential to facilitate smooth performance. This study presents a training tool for familiarization with trauma room algorithms in immersive virtual reality (VR), and a first qualitative assessment. MATERIALS AND METHODS An interdisciplinary team conceptualized two scenarios and filmed these in the trauma room of the University Medical Center Mainz, Germany, in 3D-360°. This video content was used to create an immersive VR experience. Participants from the Department of Anesthesiology were included in the study; questionnaires were completed and eye movements were recorded. RESULTS 31 volunteers participated in the study, of whom 10 (32.2%) had completed specialist training in anesthesiology. Participants reported a high rate of immersion (immersion(mean) = 6 out of 7) and low Visually Induced Motion Sickness (VIMS(mean) = 1.74 out of 20). Participants agreed that VR is a useful tool for medical education (mean = 1.26; 1 = very useful, 7 = not useful at all). Residents felt significantly more secure in the matter after training (p < 0.05), whereas specialists showed no significant difference. DISCUSSION This study presents a novel tool for familiarization with trauma room procedures, which is especially helpful for less experienced residents. Training in VR was well accepted and may be a solution to enhance training in times of limited resources for in-person training.
Affiliation(s)
- Laura Isabel Hanke
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Lukas Vradelis
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Christian Boedecker
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Jan Griesinger
- Department of Anesthesiology, University Medical Center Mainz, Johannes Gutenberg-University, Mainz, Germany
- Tim Demare
- Department of Anesthesiology, University Medical Center Mainz, Johannes Gutenberg-University, Mainz, Germany
- Nicola Raphaele Lindemann
- Department of Anesthesiology, University Medical Center Mainz, Johannes Gutenberg-University, Mainz, Germany
- Florentine Huettl
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Vuthea Chheang
- Virtual and Augmented Reality Group, Faculty of Computer Science, Otto-von-Guericke-University, Magdeburg, Germany
- Patrick Saalfeld
- Virtual and Augmented Reality Group, Faculty of Computer Science, Otto-von-Guericke-University, Magdeburg, Germany
- Nicolas Wachter
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Jochen Wollstädter
- Department of Orthopedics and Trauma Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz, Germany
- Marike Spranz
- Department of Diagnostic and Interventional Radiology, University Medical Center Johannes Gutenberg-University, Mainz, Germany
- Hauke Lang
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany
- Christian Hansen
- Virtual and Augmented Reality Group, Faculty of Computer Science, Otto-von-Guericke-University, Magdeburg, Germany
- Tobias Huber
- Department of General, Visceral and Transplant Surgery, University Medical Center Mainz, Johannes Gutenberg-University, Mainz Langenbeckstraße 1, 55131, Mainz, Germany.

6. Jia Y, Zhou X, Yang J, Fu Q. Animated VR and 360-degree VR to assess and train team sports decision-making: a scoping review. Front Psychol 2024;15:1410132. [PMID: 39077210] [PMCID: PMC11284098] [DOI: 10.3389/fpsyg.2024.1410132]
Abstract
Introduction In team sports, athletes' ability to make quick decisions plays a crucial role. Decision-making proficiency relies on the intricate balance of athletes' perceptual and cognitive abilities, enabling them to assess the competitive environment swiftly and select the most appropriate actions from various options. Virtual reality (VR) technology is emerging as a valuable tool for evaluating and refining athletes' decision-making skills. This study systematically examined the integration of VR technology into decision-making processes in team sports, aiming to identify more effective methods for presenting and interacting with virtual decision-making systems, thus enhancing the evaluation and refinement of athletes' decision-making abilities. Methods Following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines, a thorough search of respected research databases, including Web of Science, PubMed, SPORTDiscus, ScienceDirect, PsycINFO, and IEEE, was conducted using carefully selected keywords. Results Twenty research papers meeting predefined inclusion criteria were included after careful evaluation. These papers were systematically analyzed to delineate the attributes of virtual decision-making task environments, the interactive dynamics inherent in motor decision-making tasks, and the significant findings. Discussion This review indicates (1) the effectiveness of VR technology in assessing and improving athletes' decision-making skills in team sports; (2) the construction of virtual environments using the Head-Mounted Display (HMD) system, characterized by enhanced ease and efficiency; (3) the potential for future investigations to explore computer simulations to create more expansive virtual motion scenarios, thus efficiently generating substantial task scenario material, diverging from the constraints posed by 360-degree panoramic videos; and (4) the integration of motion capture technology for identifying and monitoring athletes' decision-making behaviors, which not only enhances ecological validity but also augments the transfer validity of virtual sports decision-making systems. Future research endeavors could explore integrating eye-tracking technology with virtual reality to gain insights into the intrinsic cognitive-action associations exhibited by athletes.
Affiliation(s)
- Quan Fu
- Capital University of Physical Education and Sports, Beijing, China

7. Nolte D, Vidal De Palol M, Keshava A, Madrid-Carvajal J, Gert AL, von Butler EM, Kömürlüoğlu P, König P. Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations. Atten Percept Psychophys 2024. [PMID: 38977612] [DOI: 10.3758/s13414-024-02917-3]
Abstract
Extensive research conducted in controlled laboratory settings has prompted an inquiry into how results can be generalized to real-world situations influenced by the subjects' actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combining it with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, and we cut the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNav algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and timing of the onset of events. Finally, investigating the correlation between single trials and the average ERP and ERSP identified that fixation-onset ERSPs are less time sensitive, require fewer repetitions of the same behavior, and are potentially better suited to study EEG signatures in naturalistic settings. We modified, designed, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
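The velocity-based classification mentioned above can be illustrated with a deliberately simplified fixed-threshold sketch: angular velocity between successive gaze samples is computed, and samples above the threshold are labelled saccades. The sampling rate and threshold below are assumptions, and the sketch omits the adaptive noise handling and segmentation the authors adopted from the REMoDNav approach.

```python
import numpy as np

def classify_saccades(gaze_dirs, fs=90.0, velocity_threshold=75.0):
    """Label each gaze sample as saccade (True) or gaze/fixation (False).

    gaze_dirs          : (N, 3) array of unit gaze direction vectors
    fs                 : sampling rate in Hz (assumed)
    velocity_threshold : angular velocity threshold in deg/s (assumed)
    """
    gaze_dirs = np.asarray(gaze_dirs, dtype=float)
    # Angle between consecutive samples, converted to angular velocity.
    dots = np.clip(np.sum(gaze_dirs[1:] * gaze_dirs[:-1], axis=1), -1.0, 1.0)
    velocity = np.degrees(np.arccos(dots)) * fs          # deg/s
    is_saccade = np.concatenate([[False], velocity > velocity_threshold])
    return is_saccade, velocity

# Example: the second sample jumps ~20 deg within one frame (a saccade-like step).
step = [np.sin(np.radians(20)), 0.0, np.cos(np.radians(20))]
print(classify_saccades(np.array([[0.0, 0.0, 1.0], step, step])))
```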
Affiliation(s)
- Debora Nolte
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany.
- Marc Vidal De Palol
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Ashima Keshava
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- John Madrid-Carvajal
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Anna L Gert
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Eva-Marie von Butler
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Pelin Kömürlüoğlu
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090, Osnabrueck, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

8. Xie Z, Zou L, Wang Q, Chen Y, Li L, Liao Y, Chen F. Application of immersive virtual reality (IVR) for the care of common diseases of older adults course: A mixed methods study. Geriatr Nurs 2024;58:399-409. [PMID: 38889574] [DOI: 10.1016/j.gerinurse.2024.06.003]
Abstract
OBJECTIVE This study aimed to enhance understanding, engagement, and learning efficiency in the course "The Care of Common Diseases of Older Adults" using a newly developed immersive virtual reality (IVR) system. METHODS A mixed-methods study with 32 students was conducted. The quantitative part involved a randomized controlled trial, and the qualitative part included thematic interviews with students and teachers. RESULTS The intervention group using the IVR system showed significant improvements in positivity and performance evaluation scores (P < 0.05) compared to the control group. Negative affect scores also decreased significantly (P < 0.05). Qualitative data from interviews supported the quantitative findings, highlighting increased curiosity, learning enthusiasm, and academic performance. CONCLUSION IVR significantly enhances learning by stimulating curiosity and active participation, making education more accessible and improving student performance. Future IVR enhancements should focus on user-friendliness and empathetic feedback in adult care.
Affiliation(s)
- Zhiquan Xie
- School of Public Health, Zhaoqing Medical College, Mainly Engaged in Geriatric Medicine Education, China
- Liqin Zou
- School of Public Health, Zhaoqing Medical College, China
- Quan Wang
- School of Public Health, Peking University, China
- Yufang Chen
- School of Public Health, Zhaoqing Medical College, China
- Limei Li
- School of Public Health, Zhaoqing Medical College, China
- Yuping Liao
- School of Public Health, Zhaoqing Medical College, China
- Fangjun Chen
- School of Public Health, Zhaoqing Medical College, China.

9. Hasler BS, Groner R. Introduction to the Special Thematic Issue "Virtual Reality and Eye Tracking". J Eye Mov Res 2024;15:10.16910/jemr.15.3.1. [PMID: 38863778] [PMCID: PMC11165939] [DOI: 10.16910/jemr.15.3.1]
Abstract
Technological advancements have made it possible to integrate eye tracking in virtual reality (VR) and augmented reality (AR). Many new VR/AR headsets already include eye tracking as a standard feature. While its application previously has been mostly limited to research, we now see installations of eye tracking into consumer level VR products in entertainment, training, and therapy.
The combination of eye tracking and VR creates new opportunities for end users, creators, and researchers alike: The high level of immersion – while shielded from visual distractions of the physical environment – leads to natural behavior inside the virtual environment. This enables researchers to study how humans perceive and interact with three-dimensional environments in experimentally controlled, ecologically valid settings. Simultaneously, eye tracking in VR poses new challenges to gaze analyses and requires the establishment of new tools and best practices in gaze interaction and psychological research from controlling influence factors, such as simulator sickness, to adaptations of algorithms in various situations.
This thematic special issue introduces and discusses novel applications, challenges and possibilities of eye tracking and gaze interaction in VR from an interdisciplinary perspective, including contributions from the fields of psychology, human-computer interaction, human factors, engineering, neuroscience, and education. It addresses a variety of issues and topics, such as practical guidelines for VR-based eye tracking technologies, exploring new research avenues, evaluation of gaze-based assessments, and training interventions.
Affiliation(s)
- Hasler Béatrice S
- University of Applied Sciences and Arts Northwestern Switzerland, School of Applied Psychology, Institute Humans in Complex Systems

10. Stark P, Hasenbein L, Kasneci E, Göllner R. Gaze-based attention network analysis in a virtual reality classroom. MethodsX 2024;12:102662. [PMID: 38577409] [PMCID: PMC10993185] [DOI: 10.1016/j.mex.2024.102662]
Abstract
This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
• Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
• Creating gaze-based attention networks and analyzing their network structure.
• Implementation tutorials and the Open Source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198.
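The attention-network idea summarized above (nodes are gazed entities, edges are gaze transitions between them) can be sketched in a few lines. The example below assumes a per-sample sequence of gazed-object labels, i.e. the output of gaze-ray casting, and uses networkx for the graph measures; it is an illustration, not the OSF code referenced in the abstract.

```python
import networkx as nx

def attention_network(gazed_objects):
    """Build a directed, weighted gaze-transition network from a label sequence."""
    g = nx.DiGraph()
    # Collapse consecutive samples on the same object into single visits.
    visits = [obj for i, obj in enumerate(gazed_objects)
              if obj is not None and (i == 0 or obj != gazed_objects[i - 1])]
    for src, dst in zip(visits, visits[1:]):
        w = g.get_edge_data(src, dst, {"weight": 0})["weight"]
        g.add_edge(src, dst, weight=w + 1)
    return g

g = attention_network(["teacher", "teacher", "board", "peer", "board", "teacher"])
print(nx.density(g))                 # interconnectedness of the network
print(nx.degree_centrality(g))       # centrality of each gazed entity
```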
Affiliation(s)
- Philipp Stark
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- Lisa Hasenbein
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- Enkelejda Kasneci
- Technical University of Munich, Chair for Human-Centered Technologies for Learning, Arcisstraße 21, 80333 München, Germany
- Richard Göllner
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- University of Regensburg, Institute of Educational Science, Universitätsstraße 31, 93053 Regensburg, Germany

11. Stark P, Bozkir E, Sójka W, Huff M, Kasneci E, Göllner R. The impact of presentation modes on mental rotation processing: a comparative analysis of eye movements and performance. Sci Rep 2024;14:12329. [PMID: 38811593] [DOI: 10.1038/s41598-024-60370-6]
Abstract
Mental rotation is the ability to rotate mental representations of objects in space. Shepard and Metzler's shape-matching tasks, frequently used to test mental rotation, involve presenting pictorial representations of 3D objects. This stimulus material has raised questions regarding the ecological validity of the test for mental rotation with actual visual 3D objects. To systematically investigate differences in mental rotation with pictorial and visual stimuli, we compared data of N = 54 university students from a virtual reality experiment. Comparing both conditions within subjects, we found higher accuracy and faster reaction times for 3D visual figures. We expected eye tracking to reveal differences in participants' stimulus processing and mental rotation strategies induced by the visual differences. We statistically compared fixations (locations), saccades (directions), pupil changes, and head movements. Supplementary Shapley values of a Gradient Boosting Decision Tree algorithm were analyzed, which correctly classified the two conditions using eye and head movements. The results indicated that with visual 3D figures, the encoding of spatial information was less demanding, and participants may have used egocentric transformations and perspective changes. Moreover, participants showed eye movements associated with more holistic processing for visual 3D figures and more piecemeal processing for pictorial 2D figures.
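The classification-plus-Shapley-values analysis mentioned above can be sketched with scikit-learn and the shap package. The features and data below are placeholders standing in for the eye- and head-movement descriptors used in the study; this illustrates the analysis type rather than the authors' pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Placeholder features per trial: e.g., fixation spread, saccade direction bias,
# pupil change, head rotation (the real study used richer eye/head descriptors).
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # 0 = 2D, 1 = 3D condition

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))

# Shapley values quantify each feature's contribution to the condition prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```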
Affiliation(s)
- Philipp Stark
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Europastraße 6, 72072, Tübingen, Germany.
- Efe Bozkir
- Human-Computer Interaction, University of Tübingen, Sand 14, 72076, Tübingen, Germany
- Human-Centered Technologies for Learning, Technical University of Munich, Arcisstraße 21, 80333, Munich, Germany
- Weronika Sójka
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Europastraße 6, 72072, Tübingen, Germany
- Markus Huff
- Department of Psychology, University of Tübingen, Schleichstraße 4, 72076, Tübingen, Germany
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Schleichstraße 6, 72076, Tübingen, Germany
- Enkelejda Kasneci
- Human-Centered Technologies for Learning, Technical University of Munich, Arcisstraße 21, 80333, Munich, Germany
- Richard Göllner
- Hector Research Institute of Education Sciences and Psychology, University of Tübingen, Europastraße 6, 72072, Tübingen, Germany
- Institute of Educational Science, Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053, Regensburg, Germany

12. Lafortune D, Dubé S, Lapointe V, Bonneau J, Champoux C, Sigouin N. Virtual Reality Could Help Assess Sexual Aversion Disorder. J Sex Res 2024;61:588-602. [PMID: 37556729] [DOI: 10.1080/00224499.2023.2241860]
Abstract
Virtual reality (VR) may improve our understanding of sexual dysfunctions' manifestations, although research in this area remains limited. This study assessed the potential use of a VR Behavior Avoidance Test (VR-BAT) as a tool for examining the clinical features of Sexual Aversion Disorder (SAD): the experience of fear, disgust, and avoidance when facing sexual cues/contexts. A sample of 55 adults (≥ 18y) with (n = 27) and without SAD (n = 28) completed a self-report measure of sexual avoidance. Their anxiety, disgust, electrodermal activity, heart rate, and visual and behavioral avoidance were then examined during two VR-BATs involving sexual or non-sexual stimuli. Mixed repeated measures ANOVAs, t-tests, and correlational analyses were performed. Results showed that individuals in the SAD group reported greater anxiety and disgust compared to their non-SAD counterparts during the sexual stimuli condition. Sexual avoidance scores were largely positively related to anxiety and disgust during the VR sexual condition, and moderately negatively related to the time spent touching the virtual character's genitals. This study is important given the prevalence of sexual difficulties, such as SAD, and the new research avenues offered by emerging technologies, like VR.
Affiliation(s)
- D Lafortune
- Department of Sexology, Université du Québec à Montréal
- S Dubé
- Department of Psychology, Concordia University
- V Lapointe
- Department of Psychology, Université du Québec à Montréal
- J Bonneau
- School of Media, Université du Québec à Montréal
- C Champoux
- School of Media, Université du Québec à Montréal
- N Sigouin
- School of Media, Université du Québec à Montréal

13. Aizenman AM, Gegenfurtner KR, Goettker A. Oculomotor routines for perceptual judgments. J Vis 2024;24:3. [PMID: 38709511] [PMCID: PMC11078167] [DOI: 10.1167/jov.24.5.3]
Abstract
In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide is a certain object? Our goal is to test whether there are also task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we can compare spatial characteristics of exploratory gaze behavior to quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with larger vertical spread. These results suggest specific strategies in gaze behavior that presumably are used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study, observers could freely gaze at the object or we introduced a gaze-contingent setup forcing observers to fixate specific positions on the object. Discrimination performance was similar between free-gaze and the gaze-contingent conditions for width and height judgments. These results suggest that although gaze is adapted for different tasks, performance seems to be based on a perceptual strategy, independent of potential cues that can be provided by the oculomotor system.
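One simple way to quantify the task-specific fixation patterns described above is to express fixation positions relative to the object's center and size and then compare the horizontal and vertical spread between tasks. The sketch below is our illustrative formalization, not the authors' analysis code.

```python
import numpy as np

def fixation_spread(fix_x, fix_y, obj_center, obj_size):
    """Mean offset and spread of fixations in object-normalized coordinates.

    Returns (mean_x, mean_y, std_x, std_y); e.g., height judgments would be
    expected to show mean_y > 0 (gaze above center) and std_y > std_x.
    """
    rel_x = (np.asarray(fix_x, float) - obj_center[0]) / obj_size[0]
    rel_y = (np.asarray(fix_y, float) - obj_center[1]) / obj_size[1]
    return rel_x.mean(), rel_y.mean(), rel_x.std(), rel_y.std()
```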
Affiliation(s)
- Avi M Aizenman
- Psychology Department Giessen University, Giessen, Germany
- http://aviaizenman.com/
- Karl R Gegenfurtner
- Psychology Department Giessen University, Giessen, Germany
- https://www.allpsych.uni-giessen.de/karl/
- Alexander Goettker
- Psychology Department Giessen University, Giessen, Germany
- https://alexgoettker.com/

14. Bawa Z, McCartney D, Bedoya-Pérez M, Lau NS, Fox R, MacDougall H, McGregor IS. Effects of cannabidiol on psychosocial stress, situational anxiety and nausea in a virtual reality environment: a protocol for a single-centre randomised clinical trial. BMJ Open 2024;14:e082927. [PMID: 38531572] [DOI: 10.1136/bmjopen-2023-082927]
Abstract
INTRODUCTION The non-intoxicating plant-derived cannabinoid, cannabidiol (CBD), has demonstrated therapeutic potential in a number of clinical conditions. Most successful clinical trials have used relatively high (≥300 mg) oral doses of CBD. Relatively few studies have investigated the efficacy of lower (<300 mg) oral doses, typical of those available in over-the-counter CBD products. METHODS We present a protocol for a randomised, double-blind, placebo-controlled, parallel-group clinical trial investigating the effects of a low oral dose (150 mg) of CBD on acute psychosocial stress, situational anxiety, motion sickness and cybersickness in healthy individuals. Participants (n=74) will receive 150 mg of CBD or a matched placebo 90 min before completing three virtual reality (VR) challenges (tasks) designed to induce transient stress and motion sickness: (a) a 15 min 'Public Speaking' task; (b) a 5 min 'Walk the Plank' task (above a sheer drop); and (c) a 5 min 'Rollercoaster Ride' task. The primary outcomes will be self-reported stress and nausea measured on 100 mm Visual Analogue Scales. Secondary outcomes will include salivary cortisol concentrations, skin conductance, heart rate and vomiting episodes (if any). Statistical analyses will test the hypothesis that CBD reduces nausea and attenuates subjective, endocrine and physiological responses to stress compared with placebo. This study will indicate whether low-dose oral CBD has positive effects in reducing acute psychosocial stress, situational anxiety, motion sickness and cybersickness. ETHICS AND DISSEMINATION The University of Sydney Human Research Ethics Committee has granted approval (2023/307, version 1.6, 16 February 2024). Study findings will be disseminated in a peer-reviewed journal and at academic conferences. TRIAL REGISTRATION NUMBER Australian New Zealand Clinical Trials Registry (ACTRN12623000872639).
Affiliation(s)
- Zeeta Bawa
- The Lambert Initiative for Cannabinoid Therapeutics, The University of Sydney, Sydney, New South Wales, Australia
- The Brain and Mind Centre, The University of Sydney, Sydney, New South Wales, Australia
- Sydney Pharmacy School, The University of Sydney, Sydney, New South Wales, Australia
- Danielle McCartney
- The Lambert Initiative for Cannabinoid Therapeutics, The University of Sydney, Sydney, New South Wales, Australia
- The Brain and Mind Centre, The University of Sydney, Sydney, New South Wales, Australia
- School of Psychology, The University of Sydney, Sydney, New South Wales, Australia
- Miguel Bedoya-Pérez
- The Lambert Initiative for Cannabinoid Therapeutics, The University of Sydney, Sydney, New South Wales, Australia
- The Brain and Mind Centre, The University of Sydney, Sydney, New South Wales, Australia
- School of Psychology, The University of Sydney, Sydney, New South Wales, Australia
- Namson S Lau
- The Boden Initiative, Charles Perkins Centre, The University of Sydney, Sydney, New South Wales, Australia
- Richard Fox
- Yellow Dog Man Studios s.r.o, Ostrava-jih-Zábřeh, Czechia
- Hamish MacDougall
- RPA Institute of Academic Surgery, Sydney Local Health District, Sydney, New South Wales, Australia
- Iain S McGregor
- The Lambert Initiative for Cannabinoid Therapeutics, The University of Sydney, Sydney, New South Wales, Australia
- The Brain and Mind Centre, The University of Sydney, Sydney, New South Wales, Australia
- School of Psychology, The University of Sydney, Sydney, New South Wales, Australia

15. Prummer F, Sidenmark L, Gellersen H. Dynamics of Eye Dominance Behavior in Virtual Reality. J Eye Mov Res 2024;17:10.16910/jemr.17.3.2. [PMID: 38826772] [PMCID: PMC11139049] [DOI: 10.16910/jemr.17.3.2]
Abstract
Prior research has shown that sighting eye dominance is a dynamic behavior and dependent on horizontal viewing angle. Virtual reality (VR) offers high flexibility and control for studying eye movement and human behavior, yet eye dominance has not been given significant attention within this domain. In this work, we replicate Khan and Crawford's (2001) original study in VR to confirm their findings within this specific context. Additionally, this study extends its scope to study alignment with objects presented at greater depth in the visual field. Our results align with previous results, remaining consistent when targets are presented at greater distances in the virtual scene. Using greater target distances presents opportunities to investigate alignment with objects at varying depths, providing greater flexibility for the design of methods that infer eye dominance from interaction in VR.

16. Uimonen J, Villarreal S, Laari S, Arola A, Ijäs P, Salmi J, Hietanen M. Virtual reality tasks with eye tracking for mild spatial neglect assessment: a pilot study with acute stroke patients. Front Psychol 2024;15:1319944. [PMID: 38348259] [PMCID: PMC10860750] [DOI: 10.3389/fpsyg.2024.1319944]
Abstract
Objective Increasing evidence shows that traditional neuropsychological tests are insensitive for detecting mild unilateral spatial neglect (USN), lack ecological validity, and are unable to clarify USN in all different spatial domains. Here we present a new, fully immersive virtual reality (VR) task battery with integrated eye tracking for mild visual USN and extinction assessment in the acute state of stroke to overcome these limitations. Methods We included 11 right-sided stroke patients and 10 healthy controls aged 18-75 years. Three VR tasks, named the Extinction, the Storage and the Shoot the Target tasks, were developed to assess USN. Furthermore, neuropsychological assessment examining various parts of cognitive functioning was conducted to measure general abilities. We compared VR and neuropsychological task performance in stroke patients - those with (USN+, n = 5) and without USN (USN-, n = 6) - to healthy controls (n = 10) and tentatively reported the usability of the VR system in the acute state of stroke. Results Patients had mostly mild neurological and USN symptoms. Nonetheless, we found several differences between the USN+ and healthy control groups in VR task performance. Compared to controls, USN+ patients showed visual extinction and asymmetry in gaze behavior and detection times in distinct spatial locations. Extinction was most evident in the extrapersonal space, and detection times were delayed on the extreme left and in the left upper parts. Also, USN+ patients needed more time to complete TMT A compared with USN- patients and TMT B compared with controls. VR system usability and acceptance were rated high; no relevant adverse effects occurred. Conclusion New VR technology with eye tracking enables ecologically valid and objective assessment methods with various exact measures for mild USN and thus could potentially improve future clinical assessments.
Affiliation(s)
- Jenni Uimonen
- Department of Neuropsychology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Sanna Villarreal
- Department of Neuropsychology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Siiri Laari
- Department of Neuropsychology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Anne Arola
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Petra Ijäs
- Department of Neurology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Juha Salmi
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Marja Hietanen
- Department of Neuropsychology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland

17. Liu W, Andrade G, Schulze J, Courtney KE. Evaluation of Gaze-to-Object Mapping Algorithms for Use in "Real-World" Translatable Neuropsychological Paradigms. Psychol Neurosci 2023;16:339-348. [PMID: 38298524] [PMCID: PMC10827330] [DOI: 10.1037/pne0000324]
Abstract
Objective Eye-tracking technology is commonly used for identifying objects of visual attention. However, applying this technology to virtual reality (VR) applications is challenging. This report analyzes the performance of two gaze-to-object mapping (GTOM) algorithms applied to eye-gaze data acquired during a "real-world" VR cue-reactivity paradigm. Methods Two groups of participants completed a VR paradigm using an HTC Vive Pro Eye. The gazed objects were determined by the reported gaze rays and one of two GTOM algorithms - naïve ray-casting (n=18) or a combination of ray-casting and Tobii's G2OM algorithm (n=18). Percent gaze duration was calculated from 1-second intervals before each object interaction to estimate gaze accuracy. The object volume of maximal divergence between algorithms was determined by maximizing the difference in Hedge's G effect sizes between small and large percent gaze duration distributions. Differences in percent gaze duration based on algorithm and target object size were tested with a mixed ANOVA. Results The maximum Hedge's G effect size differentiating large and small target objects was observed at an 800cm3 threshold. The combination algorithm performed better than the naïve ray-casting algorithm (p=.003, ηp2=.23), and large objects (>800cm3) were associated with a higher gaze duration percentage than small objects (≤800cm3; p<.001, ηp2=.76). No significant interaction between algorithm and size was observed. Conclusions Results demonstrated that Tobii's G2OM method outperformed naïve ray-casting in this "real-world" paradigm. As both algorithms show a clear decrease in performance for detecting objects with volumes <800cm3, we recommend using gaze-interactable objects >800cm3 for future HTC Vive Pro Eye applications.
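The percent-gaze-duration measure used above can be read as: within the one-second window before an object interaction, the share of gaze samples that the GTOM algorithm maps to that object. A minimal sketch under that reading (variable names are illustrative, not from the paper):

```python
import numpy as np

def percent_gaze_duration(gazed_ids, timestamps, interaction_time, target_id, window_s=1.0):
    """Percentage of pre-interaction samples mapped to the interacted object.

    gazed_ids        : per-sample object IDs returned by the GTOM algorithm
    timestamps       : per-sample times in seconds
    interaction_time : time of the object interaction in seconds
    """
    timestamps = np.asarray(timestamps)
    in_window = (timestamps >= interaction_time - window_s) & (timestamps < interaction_time)
    if not in_window.any():
        return np.nan
    hits = np.asarray(gazed_ids)[in_window] == target_id
    return 100.0 * hits.mean()
```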
Affiliation(s)
- Weichen Liu
- Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA, USA
- Gianna Andrade
- Department of Psychiatry, University of California San Diego, La Jolla, CA, USA
- Jurgen Schulze
- Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA, USA
- Kelly E. Courtney
- Department of Psychiatry, University of California San Diego, La Jolla, CA, USA

18. Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023;55:4128-4142. [PMID: 36326998] [PMCID: PMC10700439] [DOI: 10.3758/s13428-022-02010-3]
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
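Accuracy in this kind of validation is the angular difference between the recorded gaze direction and the direction from the eye to the known fixation target. A minimal sketch of that computation (not the authors' code):

```python
import numpy as np

def angular_error_deg(gaze_dir, eye_pos, target_pos):
    """Angle (degrees) between the measured gaze direction and the eye-to-target direction."""
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    cosang = np.dot(gaze_dir, to_target) / (np.linalg.norm(gaze_dir) * np.linalg.norm(to_target))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Example: a gaze sample pointing 3 degrees off a target 2 m straight ahead.
print(angular_error_deg([np.sin(np.radians(3)), 0, np.cos(np.radians(3))], [0, 0, 0], [0, 0, 2]))
```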
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden

19. Müller MM, Scherer J, Unterbrink P, Bertrand OJN, Egelhaaf M, Boeddeker N. The Virtual Navigation Toolbox: Providing tools for virtual navigation experiments. PLoS One 2023;18:e0293536. [PMID: 37943845] [PMCID: PMC10635524] [DOI: 10.1371/journal.pone.0293536]
Abstract
Spatial navigation research in humans increasingly relies on experiments using virtual reality (VR) tools, which allow for the creation of highly flexible and immersive study environments that can react to participant interaction in real time. Despite the popularity of VR, tools simplifying the creation and data management of such experiments are rare and often restricted to a specific scope, limiting usability and comparability. To overcome those limitations, we introduce the Virtual Navigation Toolbox (VNT), a collection of interchangeable and independent tools for the development of spatial navigation VR experiments using the popular Unity game engine. The VNT's features are packaged in loosely coupled and reusable modules, facilitating convenient implementation of diverse experimental designs. Here, we depict how the VNT fulfils feature requirements of different VR environments and experiments, guiding the reader through the implementation and execution of a showcase study using the toolbox. The presented showcase study reveals that homing performance in a classic triangle completion task is invariant to translation velocity of the participant's avatar, but highly sensitive to the number of landmarks. The VNT is freely available under a Creative Commons license, and we invite researchers to contribute, extending and improving the tools using the provided repository.
Affiliation(s)
- Martin M. Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Patrick Unterbrink
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, NRW, Germany

20. Soans RS, Renken RJ, Saxena R, Tandon R, Cornelissen FW, Gandhi TK. A Framework for the Continuous Evaluation of 3D Motion Perception in Virtual Reality. IEEE Trans Biomed Eng 2023;70:2933-2942. [PMID: 37104106] [DOI: 10.1109/tbme.2023.3271288]
Abstract
OBJECTIVE We present a novel framework for the detection and continuous evaluation of 3D motion perception by deploying a virtual reality environment with built-in eye tracking. METHODS We created a biologically-motivated virtual scene that involved a ball moving in a restricted Gaussian random walk against a background of 1/f noise. Sixteen visually healthy participants were asked to follow the moving ball while their eye movements were monitored binocularly using the eye tracker. We calculated the convergence positions of their gaze in 3D using their fronto-parallel coordinates and linear least-squares optimization. Subsequently, to quantify 3D pursuit performance, we employed a first-order linear kernel analysis known as the Eye Movement Correlogram technique to separately analyze the horizontal, vertical and depth components of the eye movements. Finally, we checked the robustness of our method by adding systematic and variable noise to the gaze directions and re-evaluating 3D pursuit performance. RESULTS We found that the pursuit performance in the motion-through-depth component was reduced significantly compared to that for fronto-parallel motion components. We found that our technique was robust in evaluating 3D motion perception, even when systematic and variable noise was added to the gaze directions. CONCLUSION The proposed framework enables the assessment of 3D motion perception by evaluating continuous pursuit performance through eye-tracking. SIGNIFICANCE Our framework paves the way for a rapid, standardized and intuitive assessment of 3D motion perception in patients with various eye disorders.
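The convergence-position step described above, locating the 3D point where the two eyes' gaze rays (nearly) intersect, can be posed as a small linear least-squares problem. The sketch below solves for the closest approach between the two gaze rays and returns its midpoint; the formulation and names are our assumptions, not the authors' implementation.

```python
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Estimate the 3D point of regard from two gaze rays (origin p, direction d).

    Solves min ||(p_l + t_l d_l) - (p_r + t_r d_r)|| by linear least squares and
    returns the midpoint of the closest approach between the two rays."""
    d_left = np.asarray(d_left, float) / np.linalg.norm(d_left)
    d_right = np.asarray(d_right, float) / np.linalg.norm(d_right)
    A = np.column_stack([d_left, -d_right])               # 3 x 2 system
    b = np.asarray(p_right, float) - np.asarray(p_left, float)
    (t_l, t_r), *_ = np.linalg.lstsq(A, b, rcond=None)
    closest_l = np.asarray(p_left, float) + t_l * d_left
    closest_r = np.asarray(p_right, float) + t_r * d_right
    return 0.5 * (closest_l + closest_r)

# Eyes 6 cm apart, both aimed at a point 1 m straight ahead.
print(convergence_point([-0.03, 0, 0], [0.03, 0, 1.0], [0.03, 0, 0], [-0.03, 0, 1.0]))
```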

21. Zeka F, Clemmensen L, Arnfred BT, Nordentoft M, Glenthøj LB. Examination of gaze behaviour in social anxiety disorder using a virtual reality eye-tracking paradigm: protocol for a case-control study. BMJ Open 2023;13:e071927. [PMID: 37620268] [PMCID: PMC10450086] [DOI: 10.1136/bmjopen-2023-071927]
Abstract
INTRODUCTION Social anxiety disorder (SAD) has an early onset, a high lifetime prevalence, and may be a risk factor for developing other mental disorders. Aberrant gaze behaviour is considered a feature of SAD. Eye-tracking, a novel technology, enables recording eye movements in real time, making it a direct and objective measure of gaze behaviour. Virtual reality (VR) is a promising tool for assessment and diagnostic purposes. Developing an objective screening tool based on examination of gaze behaviour in SAD may potentially aid early detection. The objective of the current study is, therefore, to examine gaze behaviour in SAD utilising VR. METHODS AND ANALYSIS A case-control study design is employed in which a clinical sample of 29 individuals with SAD will be compared with a matched healthy control group of 29 individuals. In the VR-based eye-tracking paradigm, participants will be presented with stimuli consisting of high-resolution 360° 3D stereoscopic videos of three social-evaluative tasks designed to elicit social anxiety. The study will investigate between-group gaze behaviour differences during stimuli presentation. ETHICS AND DISSEMINATION The study has been approved by the National Committee on Health Research Ethics for the Capital Region of Denmark (H-22041443). The study has been preregistered on OSF registries: https://doi.org/10.17605/OSF.IO/XCTAK. All participants will be provided with written and oral information. Informed consent is required for all participants. Participation is voluntary, and participants can terminate their participation at any time without any consequences. Study results, whether positive, negative or inconclusive, will be published in relevant scientific journals.
Collapse
Affiliation(s)
- Fatime Zeka
- VIRTU Research Group, Copenhagen Research Centre on Mental Health (CORE), Hellerup, Denmark
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
| | - Lars Clemmensen
- VIRTU Research Group, Copenhagen Research Centre on Mental Health (CORE), Hellerup, Denmark
| | | | - Merete Nordentoft
- Copenhagen Research Centre on Mental Health (CORE), Mental Health Center Copenhagen, Copenhagen University Hospital, Copenhagen, Denmark
| | - Louise Birkedal Glenthøj
- VIRTU Research Group, Copenhagen Research Centre on Mental Health (CORE), Hellerup, Denmark
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
22
|
Mehringer W, Stoeve M, Krauss D, Ring M, Steussloff F, Güttes M, Zott J, Hohberger B, Michelson G, Eskofier B. Virtual reality for assessing stereopsis performance and eye characteristics in Post-COVID. Sci Rep 2023; 13:13167. [PMID: 37574496 PMCID: PMC10423723 DOI: 10.1038/s41598-023-40263-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Accepted: 08/08/2023] [Indexed: 08/15/2023] Open
Abstract
In 2019, we faced a pandemic due to the coronavirus disease (COVID-19), with millions of confirmed cases and reported deaths. Even in recovered patients, symptoms can persist over weeks, termed Post-COVID. In addition to common symptoms of fatigue, muscle weakness, and cognitive impairments, visual impairments have been reported. Automatic classification of COVID and Post-COVID has been researched based on, among others, blood samples and radiation-based procedures. However, a symptom-oriented assessment for visual impairments is still missing. Thus, we propose a Virtual Reality environment in which stereoscopic stimuli are displayed to test the patient's stereopsis performance. While performing the visual tasks, the eyes' gaze and pupil diameter are recorded. We collected data from 15 controls and 20 Post-COVID patients in a study. From these data, we extracted features in three main groups (stereopsis performance, pupil diameter, and gaze behavior) and trained various classifiers. The Random Forest classifier achieved the best result with 71% accuracy. The recorded data support the classification result, showing worse stereopsis performance and eye movement alterations in Post-COVID. The study design has limitations, including the small sample size and the use of an eye tracking system.
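As a rough illustration of the classification step mentioned in the abstract (extracted features fed to a Random Forest with cross-validated accuracy), the sketch below shows the general pattern; the feature matrix is a random placeholder and the hyperparameters are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: a Random Forest trained on extracted features.
# Data are placeholders; group sizes follow the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 12))        # 35 participants x 12 hypothetical features
y = np.array([0] * 15 + [1] * 20)    # 0 = control, 1 = Post-COVID

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # mean cross-validated accuracy
```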
Collapse
Affiliation(s)
- Wolfgang Mehringer
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany.
| | - Maike Stoeve
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
| | - Daniel Krauss
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
| | - Matthias Ring
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
| | - Fritz Steussloff
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
| | - Moritz Güttes
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
| | - Julia Zott
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
| | - Bettina Hohberger
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
| | - Georg Michelson
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Talkingeyes & More GmbH, 91052, Erlangen, Bavaria, Germany
| | - Bjoern Eskofier
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
| |
Collapse
|
23
|
Lei Y, Deng Y, Dong L, Li X, Li X, Su Z. A Novel Sensor Fusion Approach for Precise Hand Tracking in Virtual Reality-Based Human-Computer Interaction. Biomimetics (Basel) 2023; 8:326. [PMID: 37504214 PMCID: PMC10807483 DOI: 10.3390/biomimetics8030326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Revised: 07/06/2023] [Accepted: 07/08/2023] [Indexed: 07/29/2023] Open
Abstract
The rapidly evolving field of Virtual Reality (VR)-based Human-Computer Interaction (HCI) presents a significant demand for robust and accurate hand tracking solutions. Current technologies, predominantly based on single-sensing modalities, fall short in providing comprehensive information capture due to susceptibility to occlusions and environmental factors. In this paper, we introduce a novel sensor fusion approach combined with a Long Short-Term Memory (LSTM)-based algorithm for enhanced hand tracking in VR-based HCI. Our system employs six Leap Motion controllers, two RealSense depth cameras, and two Myo armbands to yield a multi-modal data capture. This rich data set is then processed using LSTM, ensuring the accurate real-time tracking of complex hand movements. The proposed system provides a powerful tool for intuitive and immersive interactions in VR environments.
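A minimal sketch of the kind of LSTM-based fusion described above follows; the layer sizes, joint count, and feature dimensionality are assumptions for illustration and do not reproduce the authors' architecture.

```python
# Sketch: per-frame features from multiple sensors are concatenated and passed through
# an LSTM that regresses 3D hand joint positions (dimensions are assumed, not the paper's).
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    def __init__(self, in_dim=128, hidden=256, n_joints=21):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * 3)   # 3D position per joint

    def forward(self, x):          # x: (batch, time, in_dim) fused sensor features
        out, _ = self.lstm(x)
        return self.head(out)      # (batch, time, n_joints * 3)

model = FusionLSTM()
frames = torch.randn(4, 60, 128)   # 4 sequences, 60 frames, 128 fused features each
print(model(frames).shape)         # torch.Size([4, 60, 63])
```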
Collapse
Affiliation(s)
- Yu Lei
- College of Humanities and Arts, Hunan International Economics University, Changsha 410012, China;
| | - Yi Deng
- College of Physical Education, Hunan International Economics University, Changsha 410012, China
| | - Lin Dong
- Institute of Sports Artificial Intelligence, Capital University of Physical Education and Sports, Beijing 100091, China
| | - Xiaohui Li
- Department of Wushu, Songshan Shaolin Wushu College, Zhengzhou 452470, China
- Department of History, University of the Punjab, Lahore 54000, Pakistan
| | - Xiangnan Li
- Yantai Science and Technology Innovation Promotion Center, Yantai 264005, China
| | - Zhi Su
- Department of Information, School of Design and Art, Changsha University of Science and Technology, Changsha 410076, China;
| |
Collapse
|
24
|
Josupeit J. Let's get it started: Eye Tracking in VR with the Pupil Labs Eye Tracking Add-On for the HTC Vive. J Eye Mov Res 2023; 15:10.16910/jemr.15.3.10. [PMID: 39139654 PMCID: PMC11321899 DOI: 10.16910/jemr.15.3.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/15/2024] Open
Abstract
Combining eye tracking and virtual reality (VR) is a promising approach to tackle various applied research questions. As this approach is relatively new, routines are not yet established and the first steps can be full of potential pitfalls. The present paper gives a practical example to lower the barriers to getting started. More specifically, I focus on an affordable add-on technology, the Pupil Labs eye tracking add-on for the HTC Vive. As an add-on technology with all relevant source code available on GitHub, it allows a high degree of freedom in preprocessing, visualizing, and analyzing eye tracking data in VR. At the same time, some extra preparatory steps for the setup of hardware and software are necessary. Therefore, specifics of eye tracking in VR, from unboxing, software integration, and procedures to analyzing the data and maintaining the hardware, will be addressed. The Pupil Labs eye tracking add-on for the HTC Vive represents a highly transparent approach compared with existing alternatives. Characteristics of eye tracking in VR, in contrast to other head-mounted and remote eye trackers applied in the physical world, will be discussed. In conclusion, the paper contributes to the idea of open science in two ways: first, by making the necessary routines transparent and therefore reproducible; second, by stressing the benefits of using open source software.
Collapse
|
25
|
Souchet AD, Lourdeaux D, Burkhardt JM, Hancock PA. Design guidelines for limiting and eliminating virtual reality-induced symptoms and effects at work: a comprehensive, factor-oriented review. Front Psychol 2023; 14:1161932. [PMID: 37359863 PMCID: PMC10288216 DOI: 10.3389/fpsyg.2023.1161932] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Accepted: 05/16/2023] [Indexed: 06/28/2023] Open
Abstract
Virtual reality (VR) can induce side effects known as virtual reality-induced symptoms and effects (VRISE). To address this concern, we compile a literature-based listing of the factors thought to influence VRISE, with a focus on office work use. Using those, we recommend guidelines for VRISE amelioration intended for virtual environment creators and users. We identify five VRISE risks, focusing on short-term symptoms and their short-term effects. Three overall factor categories are considered: individual, hardware, and software. Over 90 factors may influence VRISE frequency and severity. We identify guidelines for each factor to help reduce VR side effects. To better reflect our confidence in those guidelines, we graded each with a level-of-evidence rating. Common factors occasionally influence different forms of VRISE, which can lead to confusion in the literature. General guidelines for using VR at work involve worker adaptation, such as limiting immersion times to between 20 and 30 min. These regimens involve taking regular breaks. Extra care is required for workers with special needs, neurodiversity, and gerontechnological concerns. In addition to following our guidelines, stakeholders should be aware that current head-mounted displays and virtual environments can continue to induce VRISE. While no single existing method fully alleviates VRISE, workers' health and safety must be monitored and safeguarded when VR is used at work.
Collapse
Affiliation(s)
- Alexis D. Souchet
- Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
- Institute for Creative Technologies, University of Southern California, Los Angeles, CA, United States
| | - Domitile Lourdeaux
- Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
| | | | - Peter A. Hancock
- Department of Psychology, University of Central Florida, Orlando, FL, United States
| |
Collapse
|
26
|
Freytag SC, Zechner R, Kamps M. A systematic performance comparison of two Smooth Pursuit detection algorithms in Virtual Reality depending on target number, distance, and movement patterns. J Eye Mov Res 2023; 15:10.16910/jemr.15.3.9. [PMID: 39139653 PMCID: PMC11321744 DOI: 10.16910/jemr.15.3.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/15/2024] Open
Abstract
We compared the performance of two smooth-pursuit-based object selection algorithms in Virtual Reality (VR). To assess the best algorithm for a range of configurations, we systematically varied the number of targets to choose from, their distance, and their movement pattern (linear and circular). Performance was operationalized as the ratio of hits, misses and non-detections. Averaged over all distances, the correlation-based algorithm performed better for circular movement patterns compared to linear ones (F(1,11) = 24.27, p < .001, η² = .29). This was not found for the difference-based algorithm (F(1,11) = 0.98, p = .344, η² = .01). Both algorithms performed better at close distances compared to larger ones (F(1,11) = 190.77, p < .001, η² = .75 correlation-based, and F(1,11) = 148.20, p < .001, η² = .42, difference-based). An interaction effect for distance x movement emerged. When the number of targets was varied systematically, these results were replicated, with a slightly smaller effect. Based on performance levels, we introduce the concept of an optimal threshold algorithm, suggesting the best detection algorithm for the individual target configuration. Lessons learned from adding the third dimension to the detection algorithms and the role of distractors are discussed, and suggestions for future research are added.
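For readers unfamiliar with correlation-based smooth-pursuit selection, the sketch below shows the basic idea: correlate the gaze trajectory with each candidate target's trajectory and pick the best match. The window handling and threshold are illustrative assumptions, not the parameters compared in the paper.

```python
# Sketch of correlation-based target selection from smooth pursuit (assumed threshold).
import numpy as np

def select_target(gaze_xy, targets_xy, threshold=0.8):
    """gaze_xy: (T, 2) gaze samples; targets_xy: (N, T, 2) candidate target trajectories.
    Returns the index of the best-matching target, or None for a non-detection."""
    scores = []
    for target in targets_xy:
        rx = np.corrcoef(gaze_xy[:, 0], target[:, 0])[0, 1]   # horizontal correlation
        ry = np.corrcoef(gaze_xy[:, 1], target[:, 1])[0, 1]   # vertical correlation
        scores.append(min(rx, ry))     # require the gaze to follow the target on both axes
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```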
Collapse
|
27
|
Robert FM, Abiven B, Sinou M, Heggarty K, Adam L, Nourrit V, de Bougrenet de la Tocnaye JL. Contact lens embedded holographic pointer. Sci Rep 2023; 13:6919. [PMID: 37106122 PMCID: PMC10140282 DOI: 10.1038/s41598-023-33420-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Accepted: 04/12/2023] [Indexed: 04/29/2023] Open
Abstract
In this paper we present an infrared laser pointer, consisting of a vertical-cavity surface-emitting laser (VCSEL) and a diffractive optical element (DOE), encapsulated into a scleral contact lens (SCL). The VCSEL is powered remotely by inductive coupling from a primary antenna embedded into an eyewear frame. The DOE is used either to collimate the laser beam or to project a pattern image at a chosen distance in front of the eye. We detail the different SCL constitutive blocks and how they are manufactured and assembled. We particularly emphasize the various technological challenges related to their encapsulation in the reduced volume of the SCL, while keeping the pupil free. Finally, we describe how the laser pointer operates, what its performance is (e.g. collimation, image formation), and how it can be used efficiently in various application fields such as visual assistance and augmented reality.
Collapse
Affiliation(s)
- François-Maël Robert
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
| | - Bernard Abiven
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
| | - Maïna Sinou
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
| | - Kevin Heggarty
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France
| | - Laure Adam
- LCS, 14 Place Gardin, 14000, Caen, France
| | - Vincent Nourrit
- Département Optique, IMT Atlantique, Technopôle Brest-Iroise, 655 Avenue du Technopôle, CS 83818 - 29238, Brest Cedex 3, France.
| | | |
Collapse
|
28
|
Chuang HC, Tseng HY, Tang DL. An eye tracking study of the application of gestalt theory in photography. J Eye Mov Res 2023; 16:10.16910/jemr.16.1.5. [PMID: 38022898 PMCID: PMC10644408 DOI: 10.16910/jemr.16.1.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2023] Open
Abstract
Photography is an art form in which the integration of human visual perception and psychological experience results in aesthetic pleasure. This research utilizes eye tracking to explore the impact of Gestalt properties in photography on people's visual cognitive processes in order to understand the psychological processes and patterns of photography appreciation. This study found that images with Gestalt qualities can significantly affect fixation, sightline distribution, and subjective evaluation of aesthetics and complexity. Images with closure composition seem to make cognition simpler, resulting in the fewest fixations and saccades, longer fixation durations, and a more concentrated sightline, indicating a stronger feeling of beauty, whereas images portraying similarity result in the most fixations and saccades, the longest saccade durations, and greater scattering of the sightline, indicating feelings of complexity and unsightliness. The results of this research are closely related to the theories of art and design, and have reference value for photography theory and application.
Collapse
|
29
|
Tang Z, Liu X, Huo H, Tang M, Qiao X, Chen D, Dong Y, Fan L, Wang J, Du X, Guo J, Tian S, Fan Y. Eye movement characteristics in a mental rotation task presented in virtual reality. Front Neurosci 2023; 17:1143006. [PMID: 37051147 PMCID: PMC10083294 DOI: 10.3389/fnins.2023.1143006] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 03/13/2023] [Indexed: 03/28/2023] Open
Abstract
INTRODUCTION Eye-tracking technology provides a reliable and cost-effective approach to characterize mental representation according to specific patterns. Mental rotation tasks, referring to the mental representation and transformation of visual information, have been widely used to examine visuospatial ability. In these tasks, participants visually perceive three-dimensional (3D) objects and mentally rotate them until they identify whether the paired objects are identical or mirrored. In most studies, 3D objects are presented using two-dimensional (2D) images on a computer screen. Currently, visual neuroscience tends to investigate visual behavior responding to naturalistic stimuli rather than image stimuli. Virtual reality (VR) is an emerging technology used to provide naturalistic stimuli, allowing the investigation of behavioral features in an immersive environment similar to the real world. However, mental rotation tasks using 3D objects in immersive VR have been rarely reported. METHODS Here, we designed a VR mental rotation task using 3D stimuli presented in a head-mounted display (HMD). An eye tracker incorporated into the HMD was used to examine eye movement characteristics during the task synchronically. The stimuli were virtual paired objects oriented at specific angular disparities (0, 60, 120, and 180°). We recruited thirty-three participants who were required to determine whether the paired 3D objects were identical or mirrored. RESULTS Behavioral results demonstrated that response times when comparing mirrored objects were longer than for identical objects. Eye-movement results showed that the percent fixation time, the number of within-object fixations, and the number of saccades for the mirrored objects were significantly lower than those for the identical objects, providing further explanations for the behavioral results. DISCUSSION In the present work, we examined behavioral and eye movement characteristics during a VR mental rotation task using 3D stimuli. Significant differences were observed in response times and eye movement metrics between identical and mirrored objects. The eye movement data provided further explanation for the behavioral results in the VR mental rotation task.
Collapse
Affiliation(s)
- Zhili Tang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xiaoyu Liu
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- *Correspondence: Xiaoyu Liu,
| | - Hongqiang Huo
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Min Tang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xiaofeng Qiao
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Duo Chen
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Ying Dong
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Linyuan Fan
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Jinghui Wang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xin Du
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Jieyi Guo
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Shan Tian
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Yubo Fan
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
| |
Collapse
|
30
|
Yuan J, Hassan SS, Wu J, Koger CR, Packard RRS, Shi F, Fei B, Ding Y. Extended reality for biomedicine. NATURE REVIEWS. METHODS PRIMERS 2023; 3:15. [PMID: 37051227 PMCID: PMC10088349 DOI: 10.1038/s43586-023-00208-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
Extended reality (XR) refers to an umbrella of methods that allows users to be immersed in a three-dimensional (3D) or a 4D (spatial + temporal) virtual environment to different extents, including virtual reality (VR), augmented reality (AR), and mixed reality (MR). While VR allows a user to be fully immersed in a virtual environment, AR and MR overlay virtual objects over the real physical world. The immersion and interaction of XR provide unparalleled opportunities to extend our world beyond conventional lifestyles. While XR has extensive applications in fields such as entertainment and education, its numerous applications in biomedicine create transformative opportunities in both fundamental research and healthcare. This Primer outlines XR technology from instrumentation to software computation methods, delineating the biomedical applications that have been advanced by state-of-the-art techniques. We further describe the technical advances overcoming current limitations in XR and its applications, providing an entry point for professionals and trainees to thrive in this emerging field.
Collapse
Affiliation(s)
- Jie Yuan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
| | - Sohail S. Hassan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
| | - Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Casey R. Koger
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
| | - René R. Sevag Packard
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Ronald Reagan UCLA Medical Center, Los Angeles, CA United States
- Veterans Affairs West Los Angeles Medical Center, Los Angeles, CA, United States
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, United States
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX, United States
| | - Yichen Ding
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX, United States
- Hamon Center for Regenerative Science and Medicine, UT Southwestern Medical Center, Dallas, TX, United States
| |
Collapse
|
31
|
Speidel R, Schneider A, Walter S, Grab-Kroll C, Oechsner W. Immersive medium for early clinical exposure - knowledge acquisition, spatial orientation and the unexpected role of annotation in 360° VR photos. GMS JOURNAL FOR MEDICAL EDUCATION 2023; 40:Doc8. [PMID: 36923314 PMCID: PMC10010766 DOI: 10.3205/zma001590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 10/28/2022] [Accepted: 11/08/2022] [Indexed: 06/18/2023]
Abstract
AIM 360° VR photos could be a low-threshold possibility to increase early clinical exposure. Apart from granting insights into local routines and premises, the medium should facilitate knowledge acquisition and spatial orientation depending on its design. This assumption, however, is not yet substantiated empirically. Thus, three hypotheses were tested in consideration of Mayer's modality principle: 1) Providing 360° VR photos as visual reference improves retention and comprehension of information. 2) The annotation of text boxes in 360° VR photos compromises spatial orientation and presence. 3) Annotated audio commentary is superior to annotated text boxes in terms of cognitive load and knowledge acquisition. METHODS Using head-mounted displays, students of human (N=53) and dental medicine (N=8) completed one of three virtual tours through a surgical unit, which were created with 360° VR photos. In the first two variants, information about the facilities, medical devices and clinical procedures was annotated either as text boxes or audio commentary comprising 67 words on average (SD=6.67). In the third variant, the same information was given separately on a printed handout before the virtual tour. Taking user experience and individual learner characteristics into account, differences between conditions were measured regarding retention, comprehension, spatial orientation, cognitive load, and presence. RESULTS Concerning retention and comprehension of information, annotated text boxes outperformed annotated audio commentary and the handout condition. Although annotated audio commentary exhibited the lowest knowledge test scores, students preferred listening over reading. Students with an interest in VR and 360° media reported higher levels of enjoyment and presence. Regarding spatial orientation and presence, no significant group differences were found. CONCLUSIONS 360° VR photos can convey information and a sense of spatial orientation effectively in the same learning scenario. For students, their use is both enjoyable and instructive. Unexpectedly, the ideal mode of annotation is not dictated by Mayer's modality principle. For information like in this study, annotated text boxes are better for knowledge acquisition than the subjectively preferred audio commentary. This finding is probably contingent on the length and the quality of the annotated text. To identify boundary conditions and to validate the findings, more research is required on the design and educational use of 360° VR photos.
Collapse
Affiliation(s)
- Robert Speidel
- Ulm University, Medical Faculty, Division of Learning and Teaching, Competence Center eEducation in Medicine, Ulm, Germany
| | - Achim Schneider
- Ulm University, Medical Faculty, Division of Learning and Teaching, Ulm, Germany
| | - Steffen Walter
- University Hospital Ulm, Department of Medical Psychology, Ulm, Germany
| | - Claudia Grab-Kroll
- Ulm University, Medical Faculty, Division of Learning and Teaching, Ulm, Germany
| | - Wolfgang Oechsner
- University Hospital Ulm, Clinic for Anesthesiology and Intensive-Care Medicine, Ulm, Germany
| |
Collapse
|
32
|
Schuetz I, Karimpur H, Fiehler K. vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behav Res Methods 2023; 55:570-582. [PMID: 35322350 PMCID: PMC10027796 DOI: 10.3758/s13428-022-01831-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/09/2022] [Indexed: 11/08/2022]
Abstract
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox, designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks like creating, randomizing, and presenting trial-based experimental designs or saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
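The trial-based workflow the toolbox is described as supporting (build a design, randomize it, record per-trial results, write a standardized file) can be pictured with the generic Python sketch below; this is not the vexptoolbox or Vizard API, and all names in it are placeholders.

```python
# Generic sketch of a randomized, trial-based design with standardized CSV output.
# Placeholder factors and fields; not the vexptoolbox API.
import csv
import random
from itertools import product

conditions = list(product(["left", "right"], [0.2, 0.4, 0.6]))   # assumed factors: side x distance
trials = conditions * 10                                          # 10 repetitions per condition
random.shuffle(trials)                                            # randomized presentation order

results = []
for i, (side, dist) in enumerate(trials):
    # In a real experiment, the VR engine would present the stimulus and collect the response here.
    response_time = 0.0                                           # placeholder for recorded data
    results.append({"trial": i, "side": side, "distance": dist, "rt": response_time})

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["trial", "side", "distance", "rt"])
    writer.writeheader()
    writer.writerows(results)
```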
Collapse
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany.
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany.
| | - Harun Karimpur
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
| | - Katja Fiehler
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
| |
Collapse
|
33
|
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605 PMCID: PMC9535040 DOI: 10.3758/s13428-021-01762-8] [Citation(s) in RCA: 55] [Impact Index Per Article: 55.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/29/2021] [Indexed: 11/08/2022]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Collapse
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland.
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa.
- Department of Psychology, Regensburg University, Regensburg, Germany.
| | - Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
| | - Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
| | - Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | | | - Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
| | - Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
| | | | - Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
| | | | - Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
| | - Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
| | | | - Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
| | - Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
| | | | - Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
| | | | - Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
| | | | - Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
| | - Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
| | - Helena Lee
- University of Southampton, Southampton, UK
| | - Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
| | - Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
| | - Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
| | - Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
| | - Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
| | - Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
| | - Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
| | - Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
| | - Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
| | - Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
| | - Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
| | | | - Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
| | - Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
| | - Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
| | - Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
| | - Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
| | | | - Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| |
Collapse
|
34
|
Chirico A, Gaggioli A. Virtual Reality for Awe and Imagination. Curr Top Behav Neurosci 2023; 65:233-254. [PMID: 36802035 DOI: 10.1007/7854_2023_417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/19/2023]
Abstract
Empirical research has explored the potential of the emotion of awe to shape creativity, while theoretical work has sought to understand the link between this emotion and transformation in terms of imagining new possible worlds. This branch of study relies on the transformative potential of virtual reality (VR) to examine and invite cognitive and emotional components of transformative experiences (TEs) within the interdisciplinary model of Transformative Experience Design (TED) and the Appraisal-Tendency Framework (ATF). TED suggests using the epistemic and emotional affordances of interactive technologies, such as VR, to invite TEs. The ATF can provide insight into the nature of these affordances and their relationship. This line of research draws on empirical evidence of the awe-creativity link to broaden the discourse and consider the potential impact of this emotion on core beliefs about the world. The combination of VR with these theoretical and design-oriented approaches may enable a new generation of potentially transformative experiences that remind people that they can aspire to more and inspire them to work toward imagining and creating a new possible world.
Collapse
Affiliation(s)
- Alice Chirico
- Department of Psychology, Research Center in Communication Psychology, Universitá Cattolica del Sacro Cuore, Milan, Italy.
| | - Andrea Gaggioli
- Department of Psychology, Research Center in Communication Psychology, Universitá Cattolica del Sacro Cuore, Milan, Italy
- Applied Technology for Neuropsychology Lab (ATNP-Lab), Italian Auxologico Institute of Milan, Milan, Italy
| |
Collapse
|
35
|
Jeong D, Jeong M, Yang U, Han K. Eyes on me: Investigating the role and influence of eye-tracking data on user modeling in virtual reality. PLoS One 2022; 17:e0278970. [PMID: 36580442 PMCID: PMC9799296 DOI: 10.1371/journal.pone.0278970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Accepted: 11/24/2022] [Indexed: 12/30/2022] Open
Abstract
Research has shown that sensor data generated by a user during a VR experience is closely related to the user's behavior or state, meaning that the VR user can be quantitatively understood and modeled. Eye-tracking as a sensor signal has been studied in prior research, but its usefulness in a VR context has been less examined, and most extant studies have dealt with eye-tracking within a single environment. Our goal is to expand the understanding of the relationship between eye-tracking data and user modeling in VR. In this paper, we examined the role and influence of eye-tracking data in predicting a level of cybersickness and types of locomotion. We developed and applied the same structure of a deep learning model to the multi-sensory data collected from two different studies (cybersickness and locomotion) with a total of 50 participants. The experiment results highlight not only a high applicability of our model to sensor data in a VR context, but also a significant relevance of eye-tracking data as a potential supplement to improving the model's performance and the importance of eye-tracking data in learning processes overall. We conclude by discussing the relevance of these results to potential future studies on this topic.
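The role of eye-tracking data as a supplement to other VR sensor streams, as discussed above, can be probed with a simple ablation of the kind sketched below; the classifier, features, and labels are placeholders and not the deep learning model used in the study.

```python
# Illustrative ablation sketch: compare a classifier with and without eye-tracking features.
# All data here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
motion_feats = rng.normal(size=(50, 20))   # head/controller features (placeholder)
eye_feats = rng.normal(size=(50, 10))      # eye-tracking features (placeholder)
labels = rng.integers(0, 2, size=50)       # e.g., high vs. low cybersickness

base = cross_val_score(LogisticRegression(max_iter=1000), motion_feats, labels, cv=5).mean()
fused = cross_val_score(LogisticRegression(max_iter=1000),
                        np.hstack([motion_feats, eye_feats]), labels, cv=5).mean()
print(base, fused)   # with informative data, the fused model would be expected to score higher
```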
Collapse
Affiliation(s)
- Dayoung Jeong
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
| | - Mingon Jeong
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
| | - Ungyeon Yang
- Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea
| | - Kyungsik Han
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
| |
Collapse
|
36
|
Lencastre P, Bhurtel S, Yazidi A, E Mello GBM, Denysov S, Lind PG. EyeT4Empathy: Dataset of foraging for visual information, gaze typing and empathy assessment. Sci Data 2022; 9:752. [PMID: 36463232 PMCID: PMC9719458 DOI: 10.1038/s41597-022-01862-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 11/23/2022] [Indexed: 12/05/2022] Open
Abstract
We present a dataset of eye-movement recordings collected from 60 participants, along with assessments of their empathy towards people with movement impairments. During each round of gaze recording, participants were divided into two groups, each one completing one task. One group performed a task of free exploration of structureless images, and a second group performed a task consisting of gaze typing, i.e. writing sentences using eye-gaze movements on a card board. The eye-tracking data recorded from both tasks are stored in two datasets, which, besides gaze position, also include pupil diameter measurements. The empathy levels of participants towards non-verbal movement-impaired people were assessed twice through a questionnaire, before and after each task. The questionnaire is composed of forty questions, extending an established questionnaire of cognitive and affective empathy. Finally, our dataset presents an opportunity for analysing and evaluating, among other things, the statistical features of eye-gaze trajectories in free viewing, as well as how empathy is reflected in eye features.
Collapse
Affiliation(s)
- Pedro Lencastre
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway.
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway.
- NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166, Oslo, Norway.
| | - Samip Bhurtel
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway
| | - Anis Yazidi
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway
- NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166, Oslo, Norway
| | - Gustavo B M E Mello
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway
- NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166, Oslo, Norway
| | - Sergiy Denysov
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway
- NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166, Oslo, Norway
| | - Pedro G Lind
- Dep. Computer Science, OsloMet - Oslo Metropolitan University, P.O. Box 4 St. Olavs plass, N-0130, Oslo, Norway
- OsloMet Artificial Intelligence lab, OsloMet, Pilestredet 52, N-0166, Oslo, Norway
- NordSTAR - Nordic Center for Sustainable and Trustworthy AI Research, Pilestredet 52, N-0166, Oslo, Norway
| |
Collapse
|
37
|
Ban S, Lee YJ, Kim KR, Kim JH, Yeo WH. Advances in Materials, Sensors, and Integrated Systems for Monitoring Eye Movements. BIOSENSORS 2022; 12:1039. [PMID: 36421157 PMCID: PMC9688058 DOI: 10.3390/bios12111039] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 11/11/2022] [Accepted: 11/13/2022] [Indexed: 06/16/2023]
Abstract
Eye movements show primary responses that reflect humans' voluntary intention and conscious selection. Because visual perception is one of the fundamental sensory interactions in the brain, eye movements contain critical information regarding physical/psychological health, perception, intention, and preference. With the advancement of wearable device technologies, the performance of eye-movement monitoring has improved significantly. This has also led to myriad applications for assisting and augmenting human activities. Among them, electrooculograms, measured by skin-mounted electrodes, have been widely used to track eye motions accurately. In addition, eye trackers that detect reflected optical signals offer alternative ways of tracking without using wearable sensors. This paper outlines a systematic summary of the latest research on various materials, sensors, and integrated systems for monitoring eye movements and enabling human-machine interfaces. Specifically, we summarize recent developments in soft materials, biocompatible materials, manufacturing methods, sensor functions, systems' performances, and their applications in eye tracking. Finally, we discuss the remaining challenges and suggest research directions for future studies.
Collapse
Affiliation(s)
- Seunghyeb Ban
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Yoon Jae Lee
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Ka Ram Kim
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Jong-Hoon Kim
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
| | - Woon-Hong Yeo
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30332, USA
- Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
| |
Collapse
|
38
|
Schuetz I, Fiehler K. Eye Tracking in Virtual Reality: Vive Pro Eye Spatial Accuracy, Precision, and Calibration Reliability. J Eye Mov Res 2022; 15:10.16910/jemr.15.3.3. [PMID: 37125009 PMCID: PMC10136368 DOI: 10.16910/jemr.15.3.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications require different levels of tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distances. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments.
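Spatial accuracy and precision as used above are commonly computed as the mean angular offset from the target and the RMS of sample-to-sample angular differences, respectively; the short sketch below uses these standard definitions and is not code from the paper.

```python
# Standard accuracy/precision metrics for gaze data given as unit direction vectors.
import numpy as np

def angle_deg(a, b):
    """Angular difference in degrees between unit vectors (supports arrays of vectors)."""
    dot = np.clip(np.sum(a * b, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dot))

def accuracy(gaze_dirs, target_dir):
    """Mean angular offset of gaze samples from the fixated target direction."""
    return angle_deg(gaze_dirs, target_dir).mean()

def precision_rms(gaze_dirs):
    """RMS of sample-to-sample angular differences (spatial precision)."""
    d = angle_deg(gaze_dirs[1:], gaze_dirs[:-1])
    return np.sqrt(np.mean(d ** 2))
```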
Collapse
|
39
|
Ong J, Tavakkoli A, Zaman N, Kamran SA, Waisberg E, Gautam N, Lee AG. Terrestrial health applications of visual assessment technology and machine learning in spaceflight associated neuro-ocular syndrome. NPJ Microgravity 2022; 8:37. [PMID: 36008494 PMCID: PMC9411571 DOI: 10.1038/s41526-022-00222-7] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 08/01/2022] [Indexed: 02/05/2023] Open
Abstract
The neuro-ocular effects of long-duration spaceflight have been termed Spaceflight Associated Neuro-Ocular Syndrome (SANS) and are a potential challenge for future, human space exploration. The underlying pathogenesis of SANS remains ill-defined, but several emerging translational applications of terrestrial head-mounted, visual assessment technology and machine learning frameworks are being studied for potential use in SANS. To develop such technology requires close consideration of the spaceflight environment which is limited in medical resources and imaging modalities. This austere environment necessitates the utilization of low mass, low footprint technology to build a visual assessment system that is comprehensive, accessible, and efficient. In this paper, we discuss the unique considerations for developing this technology for SANS and translational applications on Earth. Several key limitations observed in the austere spaceflight environment share similarities to barriers to care for underserved areas on Earth. We discuss common terrestrial ophthalmic diseases and how machine learning and visual assessment technology for SANS can help increase screening for early intervention. The foundational developments with this novel system may help protect the visual health of both astronauts and individuals on Earth.
Collapse
Affiliation(s)
- Joshua Ong
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
| | - Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
| | - Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
| | - Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
| | - Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
| | - Nikhil Gautam
- Department of Computer Science, Rice University, Houston, TX, USA
| | - Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA. .,Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA. .,The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA. .,Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA. .,Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA. .,University of Texas MD Anderson Cancer Center, Houston, TX, USA. .,Texas A&M College of Medicine, Bryan, TX, USA. .,Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA.
| |
Collapse
|
40
|
Hollett RC, Rogers SL, Florido P, Mosdell B. Body Gaze as a Marker of Sexual Objectification: A New Scale for Pervasive Gaze and Gaze Provocation Behaviors in Heterosexual Women and Men. ARCHIVES OF SEXUAL BEHAVIOR 2022; 51:2759-2780. [PMID: 35348918 PMCID: PMC9363378 DOI: 10.1007/s10508-022-02290-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 01/09/2022] [Accepted: 01/10/2022] [Indexed: 05/10/2023]
Abstract
Body gaze behavior is assumed to be a key feature of sexual objectification. However, there are few self-report gaze measures available and none capturing behavior which seeks to invite body gaze from others. Across two studies, we used existing self-report instruments and measurement of eye movements to validate a new self-report scale to measure pervasive body gaze behavior and body gaze provocation behavior in heterosexual women and men. In Study 1, participants (N = 1021) completed a survey with newly created items related to pervasive body gaze and body gaze provocation behavior. Participants also completed preexisting measures of body attitudes, sexual assault attitudes, pornography use, and relationship status. Exploratory and confirmatory factor analyses across independent samples suggested a 12-item scale for men and women to separately measure pervasive body gaze (5 items) and body gaze provocation (7 items) toward the opposite sex. The two scales yielded excellent internal consistency estimates (.86-.89) and promising convergent validity via positive correlations with body and sexual attitudes. In Study 2, a subsample (N = 167) of participants from Study 1 completed an eye-tracking task to capture their gaze behavior toward matched images of partially and fully dressed female and male subjects. Men exhibited body-biased gaze behavior toward all the female imagery, whereas women exhibited head-biased gaze behavior toward fully clothed male imagery. Importantly, self-reported body gaze correlated positively with some aspects of objectively measured body gaze behavior. Both scales showed good test-retest reliability and were positively correlated with sexual assault attitudes.
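As a small illustration of the internal-consistency estimates reported for these subscales, the sketch below computes Cronbach's alpha from an item-response matrix. The function is generic and assumed for illustration; it is not taken from the study's analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """
    items: (n_respondents, n_items) matrix of Likert-type item scores,
           e.g. responses to a 5-item pervasive body gaze subscale.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale score
    return k / (k - 1) * (1 - item_var / total_var)
```

Values in the .86-.89 range, as reported above, are conventionally interpreted as good to excellent internal consistency.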
Collapse
Affiliation(s)
- Ross C Hollett
- Cognition Research Group, Psychology and Criminology, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia.
| | - Shane L Rogers
- Cognition Research Group, Psychology and Criminology, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia
| | - Prudence Florido
- Cognition Research Group, Psychology and Criminology, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia
| | - Belinda Mosdell
- Cognition Research Group, Psychology and Criminology, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia
| |
Collapse
|
41
|
A Study on Attention Attracting Elements of 360-Degree Videos Based on VR Eye-Tracking System. MULTIMODAL TECHNOLOGIES AND INTERACTION 2022. [DOI: 10.3390/mti6070054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
In 360-degree virtual reality (VR) videos, users have greater freedom of gaze movement. As a result, users' attention may not follow the narrative intended by the director, and they may miss important parts of the 360-degree video's narrative. It is therefore necessary to study directing techniques that can attract user attention in 360-degree VR videos. In this study, we analyzed the directing elements that can attract users' attention in a 360-degree VR video and developed a 360 VR eye-tracking system to investigate the effect of these attention-attracting elements on the user. Elements that can attract user attention were classified into five categories: object movement, hand gesture, GUI insertion, camera movement, and gaze angle variation. Using the eye-tracking system, an experiment was conducted to analyze whether users' attention shifts in response to the five attention-attracting elements. The experimental results show that ‘hand gesture’ attracted the second-largest attention shift among the subjects, and ‘GUI insertion’ induced the smallest shift of attention.
Collapse
|
42
|
Employing Eye Tracking to Study Visual Attention to Live Streaming: A Case Study of Facebook Live. SUSTAINABILITY 2022. [DOI: 10.3390/su14127494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In recent years, the COVID-19 pandemic has led to the development of a new business model, “Live Streaming + Ecommerce”, a new method of commercial sales that shares the goal of sustainable economic growth (SDG 8). As information technology finds its way into the digital lives of internet users, the real-time and interactive nature of live streaming has overturned the traditional entertainment experience of audio and video content, moving towards a more nuanced division of labor with multiple applications. This study used a portable eye tracker to collect eye movement information from 31 participants watching Facebook Live, all of whom had experience using the live streaming platform. Four eye movement indicators, namely latency of first fixation (LFF), duration of first fixation (DFF), total fixation duration (TFD), and number of fixations (NOF), were used to analyze the distribution of visual attention in each region of interest (ROI) and to explore the study questions based on the ROIs. The findings of this study were as follows: (1) the fixation order of the ROIs on the live ecommerce platform differed between participants of different sexes; (2) the DFF of the ROIs on the live ecommerce platform differed between participants of different sexes; and (3) according to the TFD and NOF indicators, participants of different sexes paid the same attention to the live products. This study explored the visual search behaviors of existing consumers watching live ecommerce and provides the results as a reference for operators and researchers of live streaming platforms.
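As a rough illustration of how these four indicators can be derived from fixation data, the sketch below computes LFF, DFF, TFD, and NOF per region of interest. The record structure (roi, onset, duration) is an assumption for illustration, not the authors' data format.

```python
from collections import defaultdict

def roi_indicators(fixations):
    """
    fixations: chronologically ordered list of dicts, e.g.
               {"roi": "product", "onset": 1.20, "duration": 0.35}  (seconds)
    Returns per-ROI metrics:
      LFF - latency of first fixation (onset of the first fixation on the ROI)
      DFF - duration of that first fixation
      TFD - total fixation duration on the ROI
      NOF - number of fixations on the ROI
    """
    metrics = defaultdict(lambda: {"LFF": None, "DFF": None, "TFD": 0.0, "NOF": 0})
    for fix in fixations:
        m = metrics[fix["roi"]]
        if m["LFF"] is None:                  # first visit to this ROI
            m["LFF"] = fix["onset"]
            m["DFF"] = fix["duration"]
        m["TFD"] += fix["duration"]
        m["NOF"] += 1
    return dict(metrics)
```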
Collapse
|
43
|
Cavedoni S, Cipresso P, Mancuso V, Bruni F, Pedroli E. Virtual reality for the assessment and rehabilitation of neglect: where are we now? A 6-year review update. VIRTUAL REALITY 2022; 26:1663-1704. [PMID: 35669614 PMCID: PMC9148943 DOI: 10.1007/s10055-022-00648-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 03/24/2022] [Indexed: 06/13/2023]
Abstract
Unilateral spatial neglect (USN) is a frequent repercussion of a cerebrovascular accident, typically a stroke. USN patients fail to orient their attention to the contralesional side to detect auditory, visual, and somatosensory stimuli, as well as to collect and purposely use this information. Traditional methods for USN assessment and rehabilitation include paper-and-pencil procedures, which address cognitive functions as isolated from other aspects of patients' functioning within a real-life context. This might compromise the ecological validity of these procedures and limit their generalizability; moreover, USN evaluation and treatment currently lack a gold standard. The field of technology has provided several promising tools that have been integrated into clinical practice: over the years, a "first wave" promoted computerized methods, which cannot provide an ecological and realistic environment and tasks. Thus, a "second wave" has fostered the implementation of virtual reality (VR) devices that, with different degrees of immersiveness, induce a sense of presence and allow patients to actively interact within a life-like setting. The present paper provides an updated, comprehensive picture of VR devices in the assessment and rehabilitation of USN, building on the review of Pedroli et al. (2015). It analyzes the methodological and technological aspects of the selected studies, considering the usability and ecological validity of virtual environments and tasks. Despite the technological advancement, studies in this field lack methodological rigor as well as a proper evaluation of VR usability, and should improve the ecological validity of VR-based assessment and rehabilitation of USN.
Collapse
Affiliation(s)
- S. Cavedoni
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - P. Cipresso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Via Verdi, 10, 10124 Turin, TO Italy
| | - V. Mancuso
- Faculty of Psychology, eCampus University, Novedrate, Italy
| | - F. Bruni
- Faculty of Psychology, eCampus University, Novedrate, Italy
| | - E. Pedroli
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Faculty of Psychology, eCampus University, Novedrate, Italy
| |
Collapse
|
44
|
Robles M, Namdarian N, Otto J, Wassiljew E, Navab N, Falter-Wagner C, Roth D. A Virtual Reality Based System for the Screening and Classification of Autism. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:2168-2178. [PMID: 35171773 DOI: 10.1109/tvcg.2022.3150489] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Autism - also known as Autism Spectrum Disorders or Autism Spectrum Conditions - is a neurodevelopmental condition characterized by repetitive behaviours and differences in communication and social interaction. As a consequence, many autistic individuals may struggle in everyday life, which sometimes manifests in depression, unemployment, or addiction. One crucial problem in patient support and treatment is the long waiting time to diagnosis, estimated at thirteen months on average. Yet early intervention has been identified as a crucial factor: the earlier an intervention can take place, the better the patient can be supported. We propose a system to support the screening of Autism Spectrum Disorders based on a virtual reality social interaction, namely a shopping experience, with an embodied agent. During this everyday interaction, behavioral responses are tracked and recorded. We analyze this behavior with machine learning approaches to classify participants from an autistic sample against a typically developed control sample with high accuracy, demonstrating the feasibility of the approach. We believe that such tools can strongly impact the way mental disorders are assessed and may help to further identify objective criteria and categorizations.
Collapse
|
45
|
Brouwer VHEW, Stuit S, Hoogerbrugge A, Ten Brink AF, Gosselt IK, Van der Stigchel S, Nijboer TCW. Applying machine learning to dissociate between stroke patients and healthy controls using eye movement features obtained from a virtual reality task. Heliyon 2022; 8:e09207. [PMID: 35399377 PMCID: PMC8991384 DOI: 10.1016/j.heliyon.2022.e09207] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/27/2021] [Accepted: 03/24/2022] [Indexed: 12/03/2022] Open
Abstract
Conventional neuropsychological tests do not represent the complex and dynamic situations encountered in daily life. Immersive virtual reality simulations can be used to simulate dynamic and interactive situations in a controlled setting. Adding eye tracking to such simulations may provide highly detailed outcome measures, and has great potential for neuropsychological assessment. Here, participants (83 stroke patients and 103 healthy controls) were instructed to find either 3 or 7 items from a shopping list in a virtual supermarket environment while eye movements were being recorded. Using Logistic Regression and Support Vector Machine models, we aimed to predict the task of the participant and whether they belonged to the stroke or the control group. With a limited number of eye movement features, our models achieved an average Area Under the Curve (AUC) of .76 in predicting whether each participant was assigned a short or long shopping list (3 or 7 items). Identifying participants as either stroke patients or controls led to an AUC of .64. In both classification tasks, the frequency with which aisles were revisited was the most dissociating feature. As such, eye movement data obtained from a virtual reality simulation contain a rich set of signatures for detecting cognitive deficits, opening the door to potential clinical applications.
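A minimal sketch of this general classification-and-AUC workflow (not the authors' pipeline) is shown below using scikit-learn. The feature matrix here is random placeholder data standing in for the real eye movement features, such as aisle revisit frequency; the sample size of 186 mirrors the 83 patients plus 103 controls.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_participants, n_features) eye movement features; y: 1 = stroke, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(186, 5))        # placeholder feature matrix
y = rng.integers(0, 2, size=186)     # placeholder group labels

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf"))]:
    model = make_pipeline(StandardScaler(), clf)  # standardize features per fold
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```

With real features, the same loop yields the kind of group-level AUC values (.64, .76) reported above; with random placeholders, AUC hovers around chance (.50).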
Collapse
Affiliation(s)
- Veerle H E W Brouwer
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands
| | - Sjoerd Stuit
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands
| | - Alex Hoogerbrugge
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands
| | - Antonia F Ten Brink
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands
| | - Isabel K Gosselt
- Center of Excellence for Rehabilitation Medicine, UMC Utrecht Brain Center, University Medical Center Utrecht, De Hoogstraat Rehabilitation, Heidelberglaan 100, 3584 CX, Utrecht, Netherlands
| | - Stefan Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands
| | - Tanja C W Nijboer
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, Netherlands.,Center of Excellence for Rehabilitation Medicine, UMC Utrecht Brain Center, University Medical Center Utrecht, De Hoogstraat Rehabilitation, Heidelberglaan 100, 3584 CX, Utrecht, Netherlands.,Department of Rehabilitation, Physical Therapy Science & Sports, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, Netherlands
| |
Collapse
|
46
|
A Perspective Review on Integrating VR/AR with Haptics into STEM Education for Multi-Sensory Learning. ROBOTICS 2022. [DOI: 10.3390/robotics11020041] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world's students were not in school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of this novel viewpoint for established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality.
Collapse
|
47
|
Vandevoorde K, Vollenkemper L, Schwan C, Kohlhase M, Schenck W. Using Artificial Intelligence for Assistance Systems to Bring Motor Learning Principles into Real World Motor Tasks. SENSORS (BASEL, SWITZERLAND) 2022; 22:2481. [PMID: 35408094 PMCID: PMC9002555 DOI: 10.3390/s22072481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/18/2022] [Accepted: 03/20/2022] [Indexed: 11/03/2022]
Abstract
Humans learn movements naturally, but it takes a lot of time and training to achieve expert performance in motor skills. In this review, we show how modern technologies can support people in learning new motor skills. First, we introduce important concepts in motor control, motor learning, and motor skill learning. We also give an overview of the rapid expansion of machine learning algorithms and sensor technologies for human motion analysis. The integration of motor learning principles, machine learning algorithms, and recent sensor technologies has the potential to yield AI-guided assistance systems for motor skill training. We give our perspective on this integration of different fields as a way to move motor learning research from laboratory settings to real-world environments and real-world motor tasks, and propose a stepwise approach to facilitate this transition.
Collapse
Affiliation(s)
- Koenraad Vandevoorde
- Center for Applied Data Science (CfADS), Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences, 33619 Bielefeld, Germany; (L.V.); (C.S.); (M.K.)
| | | | | | | | - Wolfram Schenck
- Center for Applied Data Science (CfADS), Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences, 33619 Bielefeld, Germany; (L.V.); (C.S.); (M.K.)
| |
Collapse
|
48
|
The Static and Dynamic Analyses of Drivers’ Gaze Movement Using VR Driving Simulator. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12052362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Drivers collect information about road and traffic conditions through visual search while driving in order to avoid potential hazards. Novice drivers, lacking driving experience, may be involved in accidents because they misjudge information obtained from an insufficient visual search with a narrower field of vision than that of experienced drivers. The current study therefore compared novice and experienced drivers with respect to the information they obtained through visual search, in terms of gaze movement and visual attention. A combination of a static analysis, based on dwell time, fixation duration, the number of fixations, and stationary gaze entropy, and a dynamic analysis using gaze transition entropy was applied. The static analysis indicated that novice drivers showed longer dwell times on traffic lights, pedestrians, and passing vehicles, and longer fixation durations on the navigation system and the dashboard, than experienced drivers. Novices also fixated the area of interest straight ahead more frequently while driving through an intersection. In addition, the novice group showed a stationary gaze entropy of 2.60 bits out of the maximum of 3.32 bits a driver could exhibit, indicating that their gaze fixations were concentrated, whereas the experienced group showed approximately 3.09 bits, indicating that their gaze was not narrowed to particular areas of interest but was relatively evenly distributed. The dynamic analysis showed that the novice group made the most gaze transitions between traffic lights, pedestrians, and passing vehicles, whereas experienced drivers made the most transitions between the right- and left-side mirrors, passing vehicles, pedestrians, and traffic lights to find out more about the surrounding traffic conditions. The experienced group (3.04 bits) also showed higher gaze transition entropy than the novice group (2.21 bits), indicating that more entropy was required to describe their visual search data because their search strategies changed with the situation.
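The entropy measures used here have standard definitions. The sketch below computes stationary gaze entropy and gaze transition entropy (in bits) from a sequence of fixated areas of interest, following one common formulation in which the stationary distribution is approximated by the empirical fixation proportions. Note that with ten areas of interest the maximum stationary entropy is log2(10) ≈ 3.32 bits, consistent with the ceiling cited above.

```python
import numpy as np
from collections import Counter

def gaze_entropies(aoi_sequence):
    """
    aoi_sequence: chronological list of AOI labels, one per fixation,
                  e.g. ["traffic_light", "pedestrian", "mirror_left", ...]
    Returns (stationary_entropy, transition_entropy) in bits:
      H_s = -sum_i p_i log2 p_i                   (spread of fixations over AOIs)
      H_t = -sum_i p_i sum_j p(j|i) log2 p(j|i)   (predictability of transitions)
    """
    labels = sorted(set(aoi_sequence))
    counts = Counter(aoi_sequence)
    p = np.array([counts[a] for a in labels], dtype=float)
    p /= p.sum()
    h_stationary = -np.sum(p * np.log2(p))

    idx = {a: i for i, a in enumerate(labels)}
    trans = np.zeros((len(labels), len(labels)))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        trans[idx[a], idx[b]] += 1              # count AOI-to-AOI transitions
    h_transition = 0.0
    for i, row in enumerate(trans):
        if row.sum() == 0:
            continue
        cond = row / row.sum()                  # p(j | i)
        cond = cond[cond > 0]
        h_transition += p[i] * -np.sum(cond * np.log2(cond))
    return h_stationary, h_transition
```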
Collapse
|
49
|
Valsecchi M, Codispoti M. Eye tracking applied to tobacco smoking: current directions and future perspectives. J Eye Mov Res 2022; 15:10.16910/jemr.15.1.2. [PMID: 35440972 PMCID: PMC9014256 DOI: 10.16910/jemr.15.1.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Over the years, general awareness of the health costs associated with tobacco smoking has motivated scientists to apply the measurement of eye movements to this form of addiction. On the one hand, researchers have investigated whether smokers preferentially attend to and look at smoking-related scenes and objects. On the other hand, eye tracking has been used to test how smokers and nonsmokers interact with the different types of health warning that policymakers have mandated in tobacco advertisements and packages. Here we provide an overview of the main findings from these lines of research, such as the evidence for an attentional bias toward smoking cues in smokers and the evidence that graphic warning labels and plain packages measurably increase the salience of the warning labels. We point to some open questions, such as the conditions that determine whether heavy smokers actively avoid looking at graphic warning labels. Finally, we argue that research on gaze exploration of warning labels would benefit from more widespread use of the naturalistic testing conditions (e.g., mobile eye tracking or virtual reality) that have been introduced to study smokers' attentional bias for tobacco-related objects when freely exploring the surrounding environment.
Collapse
|
50
|
Eye-Tracking in Interactive Virtual Environments: Implementation and Evaluation. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031027] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Not all eye-tracking methodology and data processing are equal. While the use of eye-tracking is intricate because of its grounding in visual physiology, traditional 2D eye-tracking methods are supported by software, tools, and reference studies. This is not so true for eye-tracking methods applied in virtual reality (imaginary 3D environments). Previous research regarded the domain of eye-tracking in 3D virtual reality as an untamed realm with unaddressed issues. The present paper explores these issues, discusses possible solutions at a theoretical level, and offers example implementations. The paper also proposes a workflow and software architecture that encompasses an entire experimental scenario, including virtual scene preparation and operationalization of visual stimuli, experimental data collection and considerations for ambiguous visual stimuli, post-hoc data correction, data aggregation, and visualization. The paper is accompanied by examples of eye-tracking data collection and evaluation based on ongoing research on indoor evacuation behavior.
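One recurring implementation question in 3D VR eye tracking is how to map a gaze direction onto objects in the scene. The sketch below illustrates a simple gaze-raycasting approach against spherical proxy volumes; it is a didactic simplification (production pipelines typically raycast against mesh colliders supplied by the game engine), and none of the names come from the paper's software.

```python
import numpy as np

def gazed_object(origin, direction, objects):
    """
    origin:    (3,) gaze ray origin in world coordinates (e.g., eye position)
    direction: (3,) unit gaze direction in world coordinates
    objects:   list of (name, center, radius) spherical proxy volumes
    Returns the name of the nearest object intersected by the gaze ray, or None.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    hit, hit_dist = None, np.inf
    for name, center, radius in objects:
        oc = np.asarray(center, dtype=float) - origin
        t = np.dot(oc, direction)          # distance along ray to point closest to center
        if t < 0:
            continue                       # object lies behind the observer
        closest = origin + t * direction
        if np.linalg.norm(np.asarray(center) - closest) <= radius and t < hit_dist:
            hit, hit_dist = name, t        # keep the nearest hit
    return hit
```

Aggregating the returned labels per frame yields the AOI sequence on which fixation- and entropy-based analyses, such as those described in the preceding entries, can be computed.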
Collapse
|