1. Gibaldi A, Liu Y, Kaspiris-Rousellis C, Mahadevan MS, Read JCA, Vlaskamp BNS, Maus GW. Eye posture and screen alignment with simulated see-through head-mounted displays. J Vis 2025;25:9. PMID: 39786732; PMCID: PMC11725991; DOI: 10.1167/jov.25.1.9.
Abstract
When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required for the correct generation of the binocular images. Despite design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research investigated these effects in virtual reality (VR) setups, where they triggered symptoms such as eye strain and nausea. This work investigated the effects of binocular vertical misalignment (BVM) in see-through augmented reality (AR). In such devices, two conflicting environments coexist. One environment corresponds to the real world, which lies in the background and forms geometrically aligned images on the retinas. The other environment corresponds to the augmented content, which stands out as foreground and might be subject to misalignment. We simulated a see-through AR environment using a standard three-dimensional (3D) stereoscopic display to allow full control and high accuracy of the real and augmented contents. Participants performed a visual search task that required them to alternately interact with the real and the augmented contents while being exposed to different amounts of BVM. The measured eye posture indicated that the compensation for vertical misalignment is equally shared by the sensory (binocular fusion) and the motor (vertical vergence) components of binocular vision. Sensitivity varied across participants, both in terms of perceived discomfort and misalignment tolerance, suggesting that a per-user calibration might be useful for a comfortable visual experience.
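The misalignment manipulation lends itself to a quick geometric check. Below is a minimal sketch, assuming BVM is introduced by vertically shifting one eye's image on a screen at a known viewing distance; the function name and the 2 mm / 600 mm values are illustrative, not the study's parameters.
```python
import math

def vertical_misalignment_deg(offset_mm: float, viewing_distance_mm: float) -> float:
    """Angular binocular vertical misalignment (BVM) produced by shifting
    one eye's image vertically by offset_mm on a screen at viewing_distance_mm."""
    return math.degrees(math.atan2(offset_mm, viewing_distance_mm))

# Example: a 2 mm shift on a display 600 mm away is ~0.19 deg of BVM,
# which sensory fusion and vertical vergence must jointly absorb.
print(f"{vertical_misalignment_deg(2.0, 600.0):.2f} deg")
```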
Affiliation(s)
- Jenny C A Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
2. Krauze L, Panke K, Krumina G, Pladere T. Comparative analysis of physiological vergence angle calculations from objective measurements of gaze position. Sensors (Basel) 2024;24:8198. PMID: 39771937; PMCID: PMC11678997; DOI: 10.3390/s24248198.
Abstract
Eccentric photorefractometry is widely used to measure eye refraction, accommodation, gaze position, and pupil size. While the individual calibration of refraction and accommodation data has been extensively studied, gaze measurements have received less attention. The PowerRef 3 does not incorporate individual calibration for gaze measurements, resulting in a divergent offset between the measured and expected gaze positions. To address this, we proposed two methods to calculate the physiological vergence angle from the visual vergence data obtained with PowerRef 3. Twenty-three participants aged 25 ± 4 years viewed Maltese cross stimuli at distances of 25, 30, 50, 70, and 600 cm. The expected vergence angles were calculated from each participant's interpupillary distance measured at far. Our results demonstrate that the PowerRef 3 gaze data deviated from the expected vergence angles by 9.64 ± 2.73° at 25 cm and 9.25 ± 3.52° at 600 cm. The kappa angle calibration method reduced the discrepancy to 3.93 ± 1.19° at 25 cm and 3.70 ± 0.36° at 600 cm, whereas the linear regression method further improved the accuracy to 3.30 ± 0.86° at 25 cm and 0.26 ± 0.01° at 600 cm. Both methods improved the gaze results, with the linear regression calibration method showing greater overall accuracy.
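The expected angles against which the PowerRef 3 data were compared follow from viewing geometry alone. A minimal sketch, assuming symmetric convergence and a hypothetical 64 mm interpupillary distance (the study used each participant's own IPD measured at far):
```python
import math

def expected_vergence_deg(ipd_mm: float, distance_cm: float) -> float:
    """Geometric (physiological) vergence angle for symmetric fixation
    at a given viewing distance, from the interpupillary distance."""
    half_angle = math.atan2(ipd_mm / 2.0, distance_cm * 10.0)  # both sides in mm
    return math.degrees(2.0 * half_angle)

# Viewing distances used in the study (cm).
for d in (25, 30, 50, 70, 600):
    print(f"{d:>3} cm -> {expected_vergence_deg(64.0, d):5.2f} deg")
```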
Affiliation(s)
- Linda Krauze
- Department of Optometry and Vision Science, Faculty of Science and Technology, University of Latvia, Jelgavas Street 1, LV-1004 Riga, Latvia
- Karola Panke
- Department of Optometry and Vision Science, Faculty of Science and Technology, University of Latvia, Jelgavas Street 1, LV-1004 Riga, Latvia
3. Bhansali K, Lago MA, Beams R, Zhao C. Evaluation of monocular and binocular contrast perception on virtual reality head-mounted displays. J Med Imaging (Bellingham) 2024;11:062605. PMID: 39280782; PMCID: PMC11401613; DOI: 10.1117/1.jmi.11.6.062605.
Abstract
Purpose: Visualization of medical images on a virtual reality (VR) head-mounted display (HMD) requires binocular fusion of a stereoscopic pair of graphical views. However, current image quality assessment of VR HMDs for medical applications has been primarily limited to time-consuming monocular optical bench measurement on a single eyepiece.
Approach: As an alternative to optical bench measurement for quantifying image quality on VR HMDs, we developed a WebXR test platform to perform contrast perceptual experiments that can be used for binocular image quality assessment. We obtained monocular and binocular contrast sensitivity responses (CSRs) from participants on a Meta Quest 2 VR HMD using varied interpupillary distance (IPD) configurations.
Results: The perceptual results show that contrast perception on VR HMDs is primarily affected by optical aberration of the VR HMD. Monocular CSR degrades at spatial frequencies above 4 cycles per degree when gazing at the periphery of the display field of view, especially for mismatched IPD settings, consistent with optical bench measurements. In contrast, binocular contrast perception is dominated by the monocular view with the superior image quality as measured by contrast.
Conclusions: We developed a test platform to investigate monocular and binocular contrast perception through perceptual experiments. The method can be used to evaluate monocular and/or binocular image quality on VR HMDs for potential medical applications without extensive optical bench measurements.
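The reported dominance of the better-seeing eye can be set against standard binocular-combination rules. A minimal sketch with hypothetical monocular sensitivities; the winner-take-all rule mirrors the dominance described above, while quadratic summation is included only for contrast and is not the paper's model:
```python
import numpy as np

# Hypothetical monocular contrast sensitivities at one spatial frequency
# (reciprocal of the contrast detection threshold); values are illustrative.
cs_left, cs_right = 30.0, 55.0  # right eye sees the sharper image

# Winner-take-all: binocular sensitivity follows the better monocular view.
cs_bin_max = max(cs_left, cs_right)

# Quadratic summation, a common binocular-combination model, for comparison.
cs_bin_quad = float(np.sqrt(cs_left**2 + cs_right**2))

print(cs_bin_max, round(cs_bin_quad, 1))  # 55.0 vs 62.6
```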
Affiliation(s)
- Khushi Bhansali
- US Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Miguel A Lago
- US Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Ryan Beams
- US Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Chumin Zhao
- US Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
4. Zhao C, Bhansali K, Beams R, Lago MA, Badano A. Integrating eye rotation and contrast sensitivity into image quality evaluation of virtual reality head-mounted displays. Opt Express 2024;32:24968-24984. PMID: 39538921; DOI: 10.1364/oe.527660.
Abstract
Visual perception on virtual reality head-mounted displays (VR HMDs) involves human vision in the imaging pipeline. Image quality evaluation of VR HMDs may therefore need to be expanded from optical bench testing by incorporating human visual perception. In this study, we implement a 5-degree-of-freedom (5DoF) experimental setup that simulates the human eye geometry and rotation mechanism. Optical modulation transfer function (MTF) measurements are performed using various camera rotation configurations, namely pupil rotation, eye rotation, and eye rotation with the angle kappa of the human visual system. The measured MTFs of the VR HMD are inserted into a human eye contrast sensitivity model to predict the perceptual contrast sensitivity function (CSF) on a VR HMD. In parallel, we developed a WebXR test platform to perform human observer experiments. Monocular CSFs of human subjects with different interpupillary distances (IPDs) are extracted and compared with those calculated from optical MTF measurements. The results show that image quality, measured as MTF and CSF, degrades at the periphery of the display field of view, especially for subjects with an IPD different from that of the HMD. We observed that both the shift of the visual point on the HMD eyepiece and the angle between the optical axes of the eye and eyepiece degrade image quality due to optical aberration. The CSFs computed from optical measurements correlate with those from the human observer experiments, with the optimal correlation achieved using the eye rotation with angle kappa setup. These findings demonstrate that more precise image quality assessment can be achieved by integrating eye rotation and human eye contrast sensitivity into optical bench testing.
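The cascade from measured MTF to predicted perceptual CSF can be illustrated compactly. A minimal sketch, assuming a Gaussian display MTF and the classic Mannos-Sakrison eye CSF as stand-ins for the paper's measured MTFs and its contrast sensitivity model:
```python
import numpy as np

f = np.linspace(0.5, 30.0, 60)  # spatial frequency, cycles per degree

# Stand-in eye CSF (Mannos & Sakrison, 1974); the study uses its own model.
csf_eye = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Stand-in display MTFs: Gaussian falloff, broader on-axis than off-axis.
mtf_center = np.exp(-((f / 18.0) ** 2))
mtf_periphery = np.exp(-((f / 8.0) ** 2))  # stronger aberration at the periphery

# Predicted perceptual CSF on the HMD = eye CSF attenuated by the display MTF.
csf_center = csf_eye * mtf_center
csf_periphery = csf_eye * mtf_periphery

# Peripheral gaze loses most of its sensitivity at high spatial frequencies.
print(csf_center[-1] / csf_periphery[-1])  # ratio >> 1
```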
5. Díaz-Barrancas F, Rodríguez RG, Bayer FS, Aizenman A, Gegenfurtner KR. High-fidelity color characterization in virtual reality across head mounted displays, game engines, and materials. Opt Express 2024;32:22388-22409. PMID: 39538726; DOI: 10.1364/oe.520168.
Abstract
We present a comprehensive colorimetric analysis of three head-mounted displays (HMDs): the HTC Vive Pro Eye, Pimax 8K X DMAS, and Varjo Aero, focusing on their color calibration and uniformity across different game engines (Unity and Unreal) and for different materials/shaders. We developed a robust methodology combining hardware and software tools, including spectroradiometry and imaging colorimetry, to characterize and calibrate these HMDs for accurate color reproduction. The study showcases substantial advancements in colorimetric accuracy, with a reduction in the average deltaE00 of 90% or more across all tested HMDs and conditions. This level of color reproduction quality is below human discrimination thresholds, ensuring that any color inaccuracies remain imperceptible to the human eye. We also identified key areas for improvement, particularly in display uniformity, which could impact peripheral color reproduction. By making our tools and code publicly available, this study aims to facilitate future research and development in virtual reality (VR) technology, emphasizing the importance of color fidelity in virtual environments. The new insight enabled by our work is the extension and application of a traditional calibration method to currently available HMDs.
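Verifying such a calibration gain reduces to comparing mean color differences before and after characterization. A minimal sketch using the colour-science package's CIEDE2000 implementation, with made-up CIELAB coordinates standing in for the measured target and display values:
```python
import numpy as np
import colour  # colour-science package

# Hypothetical CIELAB coordinates: targets, plus display measurements
# before and after characterization (illustrative, not the paper's data).
lab_target = np.array([[50.0, 10.0, -20.0], [70.0, -15.0, 25.0]])
lab_before = np.array([[47.0, 16.0, -12.0], [74.0, -8.0, 31.0]])
lab_after = np.array([[50.2, 10.3, -19.6], [69.8, -14.7, 25.4]])

de_before = colour.delta_E(lab_target, lab_before, method="CIE 2000")
de_after = colour.delta_E(lab_target, lab_after, method="CIE 2000")

reduction = 100.0 * (1.0 - de_after.mean() / de_before.mean())
print(f"mean dE00: {de_before.mean():.2f} -> {de_after.mean():.2f} "
      f"({reduction:.0f}% reduction)")
```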
6. Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023;55:4128-4142. PMID: 36326998; PMCID: PMC10700439; DOI: 10.3758/s13428-022-02010-3.
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
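Accuracy here is the angular offset between the measured gaze direction and the direction to the fixated target. A minimal sketch, assuming both are available as 3D vectors in a common reference frame (the example vectors are illustrative):
```python
import numpy as np

def angular_error_deg(gaze_dir: np.ndarray, target_dir: np.ndarray) -> float:
    """Angle between a measured gaze direction and the direction from the
    eye to the fixated target, both 3D vectors in the same frame."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    t = target_dir / np.linalg.norm(target_dir)
    return float(np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0))))

# Example: gaze deviating slightly from a target straight ahead (~1.8 deg).
gaze = np.array([0.03, 0.01, 1.0])
target = np.array([0.0, 0.0, 1.0])
print(f"{angular_error_deg(gaze, target):.2f} deg")
```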
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
7. Kalou K, Sedda G, Gibaldi A, Sabatini SP. Learning bio-inspired head-centric representations of 3D shapes in an active fixation setting. Front Robot AI 2022;9:994284. PMID: 36329691; PMCID: PMC9623882; DOI: 10.3389/frobt.2022.994284.
Abstract
When exploring the surrounding environment with the eyes, humans and primates need to interpret three-dimensional (3D) shapes in a fast and invariant way, exploiting highly variant and gaze-dependent visual information. Since they have front-facing eyes, binocular disparity is a prominent cue for depth perception. Specifically, it serves as the computational substrate for two fundamental mechanisms of binocular active vision: stereopsis and binocular coordination. To this aim, disparity information, which is expressed in a retinotopic reference frame, is combined along the visual cortical pathways with gaze information and transformed into a head-centric reference frame. Despite the importance of this mechanism, the underlying neural substrates remain largely unknown. In this work, we investigate the capability of the human visual system to interpret the 3D scene by exploiting disparity and gaze information. In a psychophysical experiment, human subjects were asked to judge the depth orientation of a planar surface either while fixating a target point or while freely exploring the surface. Moreover, we used the same stimuli to train a recurrent neural network to exploit the responses of a modelled population of cortical (V1) cells to interpret the 3D scene layout. The results from both human performance and the model network show that integrating disparity information across gaze directions is crucial for a reliable and invariant interpretation of the 3D geometry of the scene.
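The retinotopic-to-head-centric transform the abstract appeals to is, at its core, a rotation of retinal directions by the current gaze angles. A minimal sketch, assuming a simple azimuth-elevation gaze parameterization; the rotation order and the angles are illustrative, not the paper's model:
```python
import numpy as np

def rot_y(deg: float) -> np.ndarray:
    """Rotation about the vertical axis (gaze azimuth)."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

def rot_x(deg: float) -> np.ndarray:
    """Rotation about the horizontal axis (gaze elevation)."""
    a = np.radians(deg)
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

# A direction coded retinotopically, i.e., relative to the line of sight.
v_retina = np.array([0.10, 0.05, 1.0])
v_retina = v_retina / np.linalg.norm(v_retina)

# Gaze posture: 15 deg azimuth, -5 deg elevation (hypothetical values).
R_gaze = rot_y(15.0) @ rot_x(-5.0)

# Head-centric direction: rotate the retinal direction by the gaze posture.
v_head = R_gaze @ v_retina
print(v_head)
```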
Affiliation(s)
- Katerina Kalou
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Giulia Sedda
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Agostino Gibaldi
- School of Optometry, University of California, Berkeley, CA, United States
- Silvio P. Sabatini
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy