1. Slomianka V, Dau T, Ahrens A. Acoustic scene complexity affects motion behavior during speech perception in audio-visual multi-talker virtual environments. Sci Rep 2024; 14:19028. PMID: 39152193; PMCID: PMC11329770; DOI: 10.1038/s41598-024-70026-0.
Abstract
In real-world listening situations, individuals typically utilize head and eye movements to receive and enhance sensory information while exploring acoustic scenes. However, the specific patterns of such movements have not yet been fully characterized. Here, we studied how movement behavior is influenced by scene complexity, varied in terms of reverberation and the number of concurrent talkers. Thirteen normal-hearing participants engaged in a speech comprehension and localization task, requiring them to indicate the spatial location of a spoken story in the presence of other stories in virtual audio-visual scenes. We observed delayed initial head movements when more simultaneous talkers were present in the scene. Both reverberation and a higher number of talkers extended the search period, increased the number of fixated source locations, and resulted in more gaze jumps. The period preceding the participants' responses was prolonged when more concurrent talkers were present, and listeners continued to move their eyes in the proximity of the target talker. In scenes with more reverberation, the final head position when making the decision was farther away from the target. These findings demonstrate that the complexity of the acoustic scene influences listener behavior during speech comprehension and localization in audio-visual scenes.
Affiliation(s)
- Valeska Slomianka
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
- Axel Ahrens
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
2. Zhu L, Chen J, Yang H, Zhou X, Gao Q, Loureiro R, Gao S, Zhao H. Wearable Near-Eye Tracking Technologies for Health: A Review. Bioengineering (Basel) 2024; 11:738. PMID: 39061820; PMCID: PMC11273595; DOI: 10.3390/bioengineering11070738.
Abstract
With the rapid advancement of computer vision, machine learning, and consumer electronics, eye tracking has emerged as a topic of increasing interest in recent years. It plays a key role across diverse domains including human-computer interaction, virtual reality, and clinical and healthcare applications. Near-eye tracking (NET) has recently been developed to offer encouraging features such as wearability, affordability, and interactivity. These features have drawn considerable attention in the health domain, as NET provides accessible solutions for long-term and continuous health monitoring and a comfortable and interactive user interface. This work offers a first concise review of NET for health, encompassing approximately 70 related articles published over the past two decades, supplemented by an in-depth examination of 30 articles from the preceding five years. This paper provides a concise analysis of health-related NET technologies in terms of technical specifications, data processing workflows, and practical advantages and limitations. In addition, specific applications of NET are introduced and compared, showing that NET increasingly influences daily life and provides significant convenience in everyday routines. Lastly, we summarize the current outcomes of NET and highlight its limitations.
Affiliation(s)
- Lisen Zhu
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
- Jianan Chen
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
- Huixin Yang
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China
- Xinkai Zhou
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
- Qihang Gao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China
- Rui Loureiro
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
- Shuo Gao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing 100191, China
- Hubin Zhao
- HUB of Intelligent Neuro-Engineering, Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London, London HA7 4LP, UK
3. Baltaretu BR, Schuetz I, Võ MLH, Fiehler K. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments. Sci Rep 2024; 14:15549. PMID: 38969745; PMCID: PMC11226608; DOI: 10.1038/s41598-024-66428-9.
Abstract
Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms) containing two anchors connected by a shelf, on which three local objects (congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
Affiliation(s)
- Bianca R Baltaretu
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- Immo Schuetz
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, 60323, Frankfurt am Main, Hesse, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
4. Selvan K, Mina M, Abdelmeguid H, Gulsha M, Vincent A, Sarhan A. Virtual reality headsets for perimetry testing: a systematic review. Eye (Lond) 2024; 38:1041-1064. PMID: 38036608; PMCID: PMC11009299; DOI: 10.1038/s41433-023-02843-y.
Abstract
Standard automated perimetry is considered the gold standard for evaluating a patient's visual field. However, it is costly and requires a fixed testing environment. In response, perimetric devices using virtual reality (VR) headsets have emerged as an alternative way to measure visual fields in patients. This systematic review aims to characterize both novel and established VR headsets in the literature and explore their potential applications within visual field testing. A search was conducted using MEDLINE, Embase, CINAHL, and the Core Collection (Web of Science) for articles published until January 2023. Subject headings and keywords related to virtual reality and visual field were used to identify studies specific to this topic. Records were first screened by title/abstract and then by full text using predefined criteria. Data were extracted accordingly. A total of 2404 records were identified from the databases. After deduplication and the two levels of screening, 64 studies describing 36 VR headset perimetry devices were selected for extraction. These devices encompassed various visual field measurement techniques, including static and kinetic perimetry, with some offering vision rehabilitation capabilities. This review reveals a growing consensus that VR headset perimetry devices perform comparably to, or even better than, standard automated perimetry. They are better tolerated by patients in terms of gaze fixation, more cost-effective, and generally more accessible for patients with limited mobility.
Affiliation(s)
- Kavin Selvan
- Genetics and Genome Biology (GGB) Program, The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Mina Mina
- Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Hana Abdelmeguid
- Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Muhammad Gulsha
- Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Ajoy Vincent
- Genetics and Genome Biology (GGB) Program, The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Abdullah Sarhan
- Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- RetinaLogik Inc., Calgary, Alberta, Canada
5. Wisher I, Pettitt P, Kentridge R. The deep past in the virtual present: developing an interdisciplinary approach towards understanding the psychological foundations of palaeolithic cave art. Sci Rep 2023; 13:19009. PMID: 37923922; PMCID: PMC10624876; DOI: 10.1038/s41598-023-46320-8.
Abstract
Virtual Reality (VR) has vast potential for developing systematic, interdisciplinary studies to understand ephemeral behaviours in the archaeological record, such as the emergence and development of visual culture. Upper Palaeolithic cave art forms the most robust record for investigating this, and the methods of its production, themes, and temporal and spatial changes have been researched extensively, but without consensus over its functions or meanings. More compelling arguments draw from visual psychology and posit that the immersive, dark conditions of caves elicited particular psychological responses, resulting in the perception, and depiction, of animals on suggestive features of cave walls. Our research developed and piloted a novel VR experiment that allowed participants to perceive 3D models of cave walls, with the Palaeolithic art digitally removed, from El Castillo cave (Cantabria, Spain). Results indicate that modern participants' visual attention corresponded to the same topographic features of cave walls utilised by Palaeolithic artists, and that they perceived such features as resembling animals. Although preliminary, our results support the hypothesis that pareidolia, a product of our cognitive evolution, was a key mechanism in Palaeolithic art making, and demonstrate both the potential of interdisciplinary VR research for understanding the evolution of art and the efficacy of the methodology.
Affiliation(s)
- Izzy Wisher
- Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Department of Archaeology and Heritage Studies, Aarhus University, Aarhus, Denmark
- Paul Pettitt
- Department of Archaeology, Durham University, Durham, UK
6. Kim S, Han S, Jung JH. Binocular see-through configuration and eye movement attenuate visual rivalry in peripheral wearable displays. Proc SPIE Int Soc Opt Eng 2023; 12449:124490T. PMID: 36970500; PMCID: PMC10037227; DOI: 10.1117/12.2648481.
Abstract
Visual confusion occurs when two dissimilar images are superimposed onto the same retinal location. In the context of wearable displays, it can be used to provide multiple sources of information to users on top of the real-world scene. While useful, visual confusion may cause visual rivalry that can suppress one of the sources. If two different images are projected to each eye (i.e., monocular displays), it provokes binocular rivalry, wherein visual perception intermittently switches between the two images. When a semi-transparent image is superimposed (i.e., see-through displays), monocular rivalry results, causing perceptual alternations between the foreground and the background images. Here, we investigated how these rivalries influence the visibility of a peripheral target using three configurations of wearable displays (i.e., monocular opaque, monocular see-through, and binocular see-through) with three eye movement conditions (i.e., saccades, smooth pursuit, and central fixation). Using the HTC Vive Pro Eye headset, subjects viewed a forward vection of a 3D corridor with a horizontally moving vertical grating at 10° above the center fixation. During each trial (~1 min), subjects followed a fixation cross that varied in location to induce eye movements and simultaneously reported whether the peripheral target was visible. Results showed that the binocular display had significantly higher target visibility than both monocular displays, and the monocular see-through display had the lowest target visibility. Target visibility was also higher when eye movements were executed, suggesting that the effects of rivalry are attenuated by eye movements and binocular see-through displays.
Affiliation(s)
- Sujin Kim
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Shui’Er Han
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Agency for Science, Technology and Research (A*STAR), Singapore
- Jae-Hyun Jung
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
7. Nezvadovitz JR, Rao HM. Using Natural Head Movements to Continually Calibrate EOG Signals. J Eye Mov Res 2022; 15:10.16910/jemr.15.5.6. PMID: 37846295; PMCID: PMC10576893; DOI: 10.16910/jemr.15.5.6.
Abstract
Electrooculography (EOG) is the measurement of eye movements using surface electrodes adhered around the eye. EOG systems can be designed to have an unobtrusive form-factor that is ideal for eye tracking in free-living over long durations, but the relationship between voltage and gaze direction requires frequent re-calibration as the skin-electrode impedance and retinal adaptation vary over time. Here we propose a method for automatically calibrating the EOG-gaze relationship by fusing EOG signals with gyroscopic measurements of head movement whenever the vestibulo-ocular reflex (VOR) is active. The fusion is executed as recursive inference on a hidden Markov model that accounts for all rotational degrees-of-freedom and uncertainties simultaneously. This enables continual calibration using natural eye and head movements while minimizing the impact of sensor noise. No external devices like monitors or cameras are needed. On average, our method's gaze estimates deviate by 3.54° from those of an industry-standard desktop video-based eye tracker. Such discrepancy is on par with the latest mobile video eye trackers. Future work is focused on automatically detecting moments of VOR in free-living.
Affiliation(s)
- Hrishikesh M Rao
- Massachusetts Institute of Technology Lincoln Laboratory, MA, USA
8. Evaluation of expert skills in refinery patrol inspection: visual attention and head positioning behavior. Heliyon 2022; 8:e12117. PMID: 36544846; PMCID: PMC9761707; DOI: 10.1016/j.heliyon.2022.e12117.
Abstract
We aimed to clarify expert skills in refinery patrol inspection using data collected through a virtual reality experimental system. As body positioning and postural changes are relevant factors during refinery patrol inspection tasks, we measured and analyzed both visual attention and head positioning behavior among experts and "knowledgeable novices" who were engaged in the engineering of the refinery but had less inspection experience. The participants performed a simulated inspection task, and the results showed that 1) expert inspectors could find more defects compared to knowledgeable novices, 2) visual attention behavior was similar between knowledgeable novices and experts, and 3) experts tended to position their heads at various heights and further from the inspection target to obtain visual information more effectively from the target compared to knowledgeable novices. This study presented the differences in head positioning behavior between expert and novice inspectors for the first time. These results suggest that to evaluate the skills used in inspecting relatively larger targets, both visual attention and head positioning behavior of the inspectors must be measured.
9. Schuetz I, Fiehler K. Eye Tracking in Virtual Reality: Vive Pro Eye Spatial Accuracy, Precision, and Calibration Reliability. J Eye Mov Res 2022; 15:10.16910/jemr.15.3.3. PMID: 37125009; PMCID: PMC10136368; DOI: 10.16910/jemr.15.3.3.
Abstract
A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications require different tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distances. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments.
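The accuracy and precision reported in this abstract are standard gaze-data-quality measures: mean angular offset of gaze samples from a known target, and RMS of sample-to-sample angular deviation, respectively. A minimal sketch of how such metrics are typically computed from unit gaze vectors (the function names and data layout here are illustrative, not taken from the paper):

```python
import numpy as np

def angular_error_deg(gaze, target):
    """Angle in degrees between each unit gaze vector and a unit target vector."""
    cos = np.clip(gaze @ target, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(gaze, target):
    """Spatial accuracy: mean angular offset of gaze samples from the target."""
    return float(np.mean(angular_error_deg(gaze, target)))

def precision_rms_deg(gaze):
    """Spatial precision: RMS of successive sample-to-sample angular deviations."""
    cos = np.clip(np.sum(gaze[1:] * gaze[:-1], axis=1), -1.0, 1.0)
    theta = np.degrees(np.arccos(cos))
    return float(np.sqrt(np.mean(theta ** 2)))
```

For an eccentricity analysis like the one described, these metrics would be computed per target position and then compared across sessions or devices.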
10. Ong J, Tavakkoli A, Zaman N, Kamran SA, Waisberg E, Gautam N, Lee AG. Terrestrial health applications of visual assessment technology and machine learning in spaceflight associated neuro-ocular syndrome. NPJ Microgravity 2022; 8:37. PMID: 36008494; PMCID: PMC9411571; DOI: 10.1038/s41526-022-00222-7.
Abstract
The neuro-ocular effects of long-duration spaceflight have been termed Spaceflight Associated Neuro-Ocular Syndrome (SANS) and are a potential challenge for future human space exploration. The underlying pathogenesis of SANS remains ill-defined, but several emerging translational applications of terrestrial head-mounted visual assessment technology and machine learning frameworks are being studied for potential use in SANS. Developing such technology requires close consideration of the spaceflight environment, which is limited in medical resources and imaging modalities. This austere environment necessitates low-mass, low-footprint technology to build a visual assessment system that is comprehensive, accessible, and efficient. In this paper, we discuss the unique considerations for developing this technology for SANS and translational applications on Earth. Several key limitations observed in the austere spaceflight environment share similarities with barriers to care in underserved areas on Earth. We discuss common terrestrial ophthalmic diseases and how machine learning and visual assessment technology for SANS can help increase screening for early intervention. The foundational developments with this novel system may help protect the visual health of both astronauts and individuals on Earth.
Affiliation(s)
- Joshua Ong
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nikhil Gautam
- Department of Computer Science, Rice University, Houston, TX, USA
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
11. Sipatchin A, García García M, Sauer Y, Wahl S. Application of Spatial Cues and Optical Distortions as Augmentations during Virtual Reality (VR) Gaming: The Multifaceted Effects of Assistance for Eccentric Viewing Training. Int J Environ Res Public Health 2022; 19:9571. PMID: 35954927; PMCID: PMC9368505; DOI: 10.3390/ijerph19159571.
Abstract
The present study investigates the effects of peripheral spatial cues and optically distorting augmentations on eccentric vision mechanisms in normally sighted participants with a simulated scotoma. Five different augmentations were tested inside a virtual reality (VR) gaming environment. Three were monocular spatial cues, and two were binocular optical distortions. Each was divided into three conditions: a baseline with normal viewing, an augmentation with one of the assistance methods positioned around the scotoma, and one with only the simulated central scotoma. The study found that the gaming scenario induced eccentric viewing in the cued augmentation groups, even after the peripheral assistance was removed, whereas for the optical distortion group the eccentric behavior disappeared once the augmentation was removed. Additionally, an upwards directionality of gaze relative to the target during regular gaming was found. This bias was maintained during and after the cued augmentations but not after the distorted ones. The results suggest that monocular peripheral cues could be better candidates for implementing eccentric viewing training in patients, while optical distortions might disrupt such behavior. This is noteworthy since distortions such as zoom are known to help patients with macular degeneration see targets of interest.
Affiliation(s)
- Yannick Sauer
- Institute for Ophthalmic Research, 72076 Tübingen, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
12. Rauschnabel PA, Felix R, Hinsch C, Shahab H, Alt F. What is XR? Towards a Framework for Augmented and Virtual Reality. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107289.
13. A Study on Attention Attracting Elements of 360-Degree Videos Based on VR Eye-Tracking System. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6070054.
Abstract
In 360-degree virtual reality (VR) videos, users have increased freedom of gaze movement. As a result, their attention may not follow the narrative intended by the director, and they may miss important parts of the video's narrative. It is therefore necessary to study directing techniques that can attract user attention in 360-degree VR videos. In this study, we analyzed directing elements that can attract users' attention in 360-degree VR video and developed a 360 VR eye-tracking system to investigate their effect on the user. Elements that can attract user attention were classified into five categories: object movement, hand gesture, GUI insertion, camera movement, and gaze angle variation. Using the eye-tracking system, we conducted an experiment to analyze whether users' attention moves according to these five attention-attracting elements. The experimental results show that 'hand gesture' attracted the second-largest attention shift among the subjects, while 'GUI insertion' induced the smallest.
14. Ha J, Park S, Im CH. Novel Hybrid Brain-Computer Interface for Virtual Reality Applications Using Steady-State Visual-Evoked Potential-Based Brain-Computer Interface and Electrooculogram-Based Eye Tracking for Increased Information Transfer Rate. Front Neuroinform 2022; 16:758537. PMID: 35281718; PMCID: PMC8908008; DOI: 10.3389/fninf.2022.758537.
Abstract
Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) have recently attracted increasing attention in virtual reality (VR) applications as a promising tool for controlling virtual objects or generating commands in a "hands-free" manner. Video-oculography (VOG) has frequently been used to improve BCI performance by identifying the gaze location on the screen; however, current VOG devices are generally too expensive to be embedded in practical low-cost VR head-mounted display (HMD) systems. In this study, we proposed a novel calibration-free hybrid BCI system combining a steady-state visual-evoked potential (SSVEP)-based BCI and electrooculogram (EOG)-based eye tracking to increase the information transfer rate (ITR) of a nine-target SSVEP-based BCI in a VR environment. Experiments were repeated on three different frequency configurations of pattern-reversal checkerboard stimuli arranged in a 3 × 3 matrix. When a user was staring at one of the nine visual stimuli, the column containing the target stimulus was first identified based on the user's horizontal eye movement direction (left, middle, or right), classified using horizontal EOG recorded from a pair of electrodes that can be readily incorporated into any existing VR-HMD system. Note that the EOG can be recorded using the same amplifier as the SSVEP, unlike with a VOG system. Then, the target visual stimulus was identified among the three visual stimuli vertically arranged in the selected column using the extension of multivariate synchronization index (EMSI) algorithm, one of the widely used SSVEP detection algorithms. In our experiments with 20 participants wearing a commercial VR-HMD system, both the accuracy and the ITR of the proposed hybrid BCI were significantly increased compared to those of a traditional SSVEP-based BCI in a VR environment.
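The information transfer rate mentioned in this abstract is conventionally computed with the Wolpaw formula, which combines the number of targets, classification accuracy, and selection time. A small sketch under that assumption (nine targets matches the paper's 3 × 3 stimulus matrix; the parameter values in the example are illustrative, not results from the study):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_sec):
    """Wolpaw ITR: bits per selection, scaled to bits per minute.

    n_targets: number of selectable targets (9 for a 3x3 matrix)
    accuracy:  classification accuracy, in [1/n_targets, 1]
    trial_sec: time needed for one selection, in seconds
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per selection
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_sec
```

For example, a nine-target speller at 100% accuracy and 4 s per selection yields log2(9) · 15 ≈ 47.5 bits/min; lower accuracy or longer trials reduce the rate, which is why both the EOG column selection and the EMSI row selection affect the achievable ITR.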
Collapse
Affiliation(s)
- Jisoo Ha
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul, South Korea
- Seonghun Park
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Chang-Hwan Im
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul, South Korea
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
Collapse
|
15
|
Eye-Tracking in Interactive Virtual Environments: Implementation and Evaluation. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031027] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Not all eye-tracking methodology and data processing are equal. While the use of eye-tracking is intricate because of its grounding in visual physiology, traditional 2D eye-tracking methods are supported by software, tools, and reference studies. This is not so true for eye-tracking methods applied in virtual reality (imaginary 3D environments). Previous research regarded the domain of eye-tracking in 3D virtual reality as an untamed realm with unaddressed issues. The present paper explores these issues, discusses possible solutions at a theoretical level, and offers example implementations. The paper also proposes a workflow and software architecture that encompasses an entire experimental scenario, including virtual scene preparation and operationalization of visual stimuli, experimental data collection and considerations for ambiguous visual stimuli, post-hoc data correction, data aggregation, and visualization. The paper is accompanied by examples of eye-tracking data collection and evaluation based on ongoing research of indoor evacuation behavior.
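One of the core steps this abstract alludes to — operationalizing visual stimuli in a 3D scene — is mapping each gaze ray to the object it lands on. The sketch below shows one common approach (nearest ray-sphere intersection); the collider shapes, object names, and API are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fixated_object(origin, direction, objects):
    """Map a gaze ray to the nearest intersected object.
    `objects` maps a name to a (center, radius) sphere collider -- a minimal
    stand-in for the gaze-to-object mapping step of a 3D eye-tracking pipeline.
    Returns the name of the nearest hit object, or None if the ray misses all."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, (center, radius) in objects.items():
        oc = np.asarray(center, float) - np.asarray(origin, float)
        t = oc @ direction                 # closest approach along the ray
        if t < 0:                          # object is behind the viewer
            continue
        miss2 = oc @ oc - t * t            # squared ray-to-center distance
        if miss2 <= radius ** 2 and t < best_t:
            best, best_t = name, t
    return best
```

In practice the "ambiguous visual stimuli" the paper mentions arise exactly here: a gaze ray can graze several colliders at once, so engines typically add tolerance radii and post-hoc correction on top of this raw intersection test.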
Collapse
|
16
|
Cojocaru D, Manta LF, Pană CF, Dragomir A, Mariniuc AM, Vladu IC. The Design of an Intelligent Robotic Wheelchair Supporting People with Special Needs, Including for Their Visual System. Healthcare (Basel) 2021; 10:healthcare10010013. [PMID: 35052177 PMCID: PMC8774883 DOI: 10.3390/healthcare10010013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 12/17/2021] [Accepted: 12/19/2021] [Indexed: 11/29/2022] Open
Abstract
The paper aims to study the applicability and limitations of a solution resulting from a design process for an intelligent system supporting people with special needs who are not physically able to control a wheelchair using classical systems. The intelligent system uses information from smart sensors and offers a control system that replaces the joystick. The necessary movements of the chair in the environment can be determined by an intelligent vision system that analyzes the direction of the patient's gaze and point of view, as well as head actions. In this approach, an important task is to detect the destination target in the 3D workspace. This solution was evaluated outdoors and indoors, under different lighting conditions. Because people with special needs sometimes also have specific problems with their visual system (e.g., strabismus, nystagmus), the system was tested on different subjects, some of them wearing eyeglasses. During the design process, all tests involving human subjects were performed in accordance with specific rules of medical security and ethics; the process was supervised by a company specialized in health activities involving people with special needs. The main results and findings are as follows: validation of the proposed solution for all indoor lighting conditions; a methodology to create personal profiles, used to improve HMI efficiency and adapt it to each subject's needs; and a primary evaluation and validation of the use of personal profiles in real-life, indoor conditions. The conclusion is that the proposed solution can be used by persons who are not physically able to control a wheelchair using classical systems, including those with minor vision deficiencies or major vision impairment affecting one eye.
Collapse
|
17
|
Pawassar CM, Tiberius V. Virtual Reality in Health Care: Bibliometric Analysis. JMIR Serious Games 2021; 9:e32721. [PMID: 34855606 PMCID: PMC8686483 DOI: 10.2196/32721] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2021] [Revised: 09/20/2021] [Accepted: 09/24/2021] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with. OBJECTIVE We provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords. METHODS Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer. RESULTS The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counseling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than "virtual" and "reality" are "training," "trial," and "patients." The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames. CONCLUSIONS The analysis shows that the field has left its infancy and its specialization is advancing, with a clear focus on patient usability.
Collapse
Affiliation(s)
| | - Victor Tiberius
- Faculty of Economics and Social Sciences, University of Potsdam, Potsdam, Germany
| |
Collapse
|
18
|
Kunumpol P, Lerthirunvibul N, Phienphanich P, Munthuli A, Tantisevi V, Manassakorn A, Chansangpetch S, Itthipanichpong R, Ratanawongphaibol K, Rojanapongpun P, Tantibundhit C. GlauCUTU: Virtual Reality Visual Field Test. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:7416-7421. [PMID: 34892811 DOI: 10.1109/embc46164.2021.9629827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
This study proposed a virtual reality (VR) head-mounted visual field (VF) test system, the GlauCUTU VF test, for reaction time (RT) perimetry with moving visual stimuli that progressively increase in intensity. The test followed the 24-2 VF protocol and was examined in two study groups: controls with normal fields and subjects with glaucoma. To collect reaction times, participants were asked to respond to the stimulus by pressing a clicker as quickly as possible. Performance of the GlauCUTU VF test was compared against the gold-standard Humphrey Visual Field Analyzer (HFA). Test duration differed significantly between GlauCUTU and the HFA, with mean durations of 254.41 s and 609 s, respectively [t(16) = 15.273, p<0.05]. The system also effectively differentiated glaucomatous eyes from normal eyes, for both the left and the right eye. Compared to the HFA, the GlauCUTU test produced a significantly shorter average test duration, by 354 seconds, which reduced test-induced eye fatigue. The portable and inexpensive GlauCUTU perimetry system proves to be a promising method for increasing access to glaucoma screening. Clinical relevance: GlauCUTU, an automated head-mounted VR perimetry device for VF testing, is portable, cost-effective, and suitable for low-resource settings. Unlike the conventional HFA test, the GlauCUTU VF test reports results in terms of subjects' RT, which is reportedly higher in glaucoma patients.
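The link between reaction time and a perimetric threshold in a ramped-intensity design like the one described above can be sketched in a few lines. This is a generic illustration of the idea, not the GlauCUTU algorithm: the ramp parameters and the fixed motor-latency correction are invented values.

```python
def threshold_from_response(ramp_start_db, ramp_rate_db_s, t_response_s,
                            motor_latency_s=0.25):
    """Estimate a perimetric threshold from a single reaction-time trial.
    The stimulus intensity ramps up linearly from `ramp_start_db` at
    `ramp_rate_db_s`; the intensity the subject actually *saw* is the one on
    screen roughly one motor latency before the click, so the response time
    is back-corrected by that latency. All parameter values here are
    illustrative assumptions, not the paper's."""
    t_seen = max(t_response_s - motor_latency_s, 0.0)
    return ramp_start_db + ramp_rate_db_s * t_seen

# Example: ramp starting at 0 dB rising 2 dB/s, click at t = 1.25 s
estimate = threshold_from_response(0.0, 2.0, 1.25)
```

This also shows why RT itself is clinically informative, as the abstract notes: for a fixed ramp, a longer response time maps directly to a higher (worse) estimated threshold at that VF location.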
Collapse
|
19
|
Guest editorial. Information and Learning Sciences 2021. [DOI: 10.1108/ils-07-2021-262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
20
|
Target Maintenance in Gaming via Saliency Augmentation: An Early-Stage Scotoma Simulation Study Using Virtual Reality (VR). Appl Sci (Basel) 2021. [DOI: 10.3390/app11157164] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
This study addresses the importance of salience placement before or after scotoma development for efficient target allocation in the visual field. Pre-allocation of attention is a mechanism known to induce better gaze positioning towards the target. Three conditions were tested: a simulated central scotoma, a salience augmentation surrounding the scotoma, and a baseline condition without any simulation. All conditions were investigated within a virtual reality (VR) gaming environment. Participants were tested in two different orders: either the salient cue was applied together with the scotoma before the scotoma was presented alone, or the scotoma alone was presented first and then with the augmentation around it. Both groups showed a change in gaze behaviour when saliency was applied. However, in the second group, salient augmentation also induced changes in gaze behaviour in the scotoma condition without augmentation, with participants gazing above and outside the scotoma, in line with previous literature. These preliminary results indicate that placing salience before an advanced stage of scotoma develops can provide effective and rapid training for efficient target maintenance during VR gaming. The study shows the potential of salience and VR gaming as a therapy for patients with early age-related macular degeneration (AMD).
Collapse
|