1. Nassief S, Al Ali H, Towers A, Field J, Martin N. Dental students' perceptions of the use of two-dimensional and three-dimensional vision in dental education using a three-dimensional haptic simulator: A qualitative study. J Dent Educ 2024. PMID: 39075768. DOI: 10.1002/jdd.13682.
Abstract
OBJECTIVE To assess novice and experienced undergraduate dental students' perceptions of virtual learning with two-dimensional (2D) and three-dimensional (3D) vision. MATERIALS AND METHODS This qualitative study involved 21 students from the second and fourth years of a 5-year BDS program. They first performed three operative tasks in virtual reality (VR) training sessions using both 2D and 3D vision. Subsequently, they participated in one of four online focus group discussions (FGDs). The FGDs were recorded and transcribed, and the data obtained from the transcriptions were coded and thematically analyzed. RESULTS Three main themes emerged from the focus groups. With regard to their perceptions of 2D and 3D vision, most of the participants preferred 3D over 2D vision, mainly due to an improved ability to perceive depth. With regard to the theme of practicing 3D vision in the VR environment, some participants performed their tasks faster with 3D vision than with 2D vision, while others did not perceive any difference between them. Under the same main theme, some participants experienced headaches and eye fatigue with 3D vision. With regard to their perception of technical aspects, the participants experienced unpleasant sensations with the 3D glasses and saw darker images through them. CONCLUSION All the participants placed greater value on practicing with 3D than with 2D vision in the VR environment. They believed that VR training should be used in the early years of dental education as an adjunct to the phantom head, as it helps students acquire the skills needed by dental professionals.
Affiliation(s)
- Sarah Nassief
  - College of Dental Medicine, Umm Al-Qura University, Makkah, Saudi Arabia
- Huda Al Ali
  - Qassim Health Cluster, Unayzah, Saudi Arabia
  - School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Ashley Towers
  - School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- James Field
  - School of Dentistry, Cardiff University, Cardiff, UK
- Nicolas Martin
  - School of Clinical Dentistry, University of Sheffield, Sheffield, UK
2. Coq R, Neveu P, Plantier J, Legras R. Accommodative response and visual fatigue following a non-congruent visual task in non-asthenopic and asthenopic individuals. Ophthalmic Physiol Opt 2024; 44:925-935. PMID: 38533853. DOI: 10.1111/opo.13304.
Abstract
PURPOSE Asthenopia is related to near vision activities or visual tasks that dissociate accommodation from vergence. Since the results of previous studies using objective measures to diagnose asthenopia are inconsistent, this study compared optometric tests and objective metrics of accommodation in non-asthenopic and asthenopic young adults before and after a visual fatigue task. METHODS The accommodative response was recorded objectively for 6 min at a 3.33 D accommodative demand using an autorefractor, before and after a 5-min non-congruent visual task. Accommodation was dissociated from vergence with a ±2.00 D accommodative flipper while reading at the same distance. Optometric tests and subjective evaluations of asthenopia were performed before and after the task. Twenty-six non-presbyopic adults (23.15 ± 2.56 years) were included and identified as asthenopic (n = 14) or non-asthenopic (n = 12) based on their score on the Computer Vision Syndrome Questionnaire. RESULTS A mixed ANOVA found no significant difference between the groups for objective (accommodative response) or subjective metrics (feeling of fatigue, optometric tests), although all participants reported greater visual fatigue after the task. A significant effect of time (before and after the non-congruent task) was identified for the overall sample for mean accommodative lag (+0.10 D, p = 0.01), subjective visual fatigue (+1.18, p < 0.01), negative relative accommodation (-0.20 D, p = 0.02) and near negative fusional reserve (blur: +2.46Δ, p < 0.01; break: +1.89Δ, p < 0.01; recovery: +3.34Δ, p = 0.02). CONCLUSIONS The task-induced asthenopia, measured both objectively and subjectively, was accompanied by a change in accommodative lag, greater visual fatigue and a decrease in negative relative accommodation. Conversely, near negative fusional reserves seem to adapt to the task.
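A note for readers outside optometry: the 3.33 D accommodative demand above is simply the reciprocal of the viewing distance in metres. A minimal illustrative sketch (the 0.30 m distance is inferred from the demand, not quoted from the paper):

```python
# Accommodative demand in dioptres is the reciprocal of the viewing
# distance in metres: demand = 1 / d.
def accommodative_demand(distance_m: float) -> float:
    return 1.0 / distance_m

# A 0.30 m (near-work) viewing distance yields the study's 3.33 D demand.
print(round(accommodative_demand(0.30), 2))  # 3.33
```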
Affiliation(s)
- Rémi Coq
  - French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
  - LuMIn, CNRS, ENS Paris-Saclay, CentraleSupélec, Université Paris-Saclay, Orsay, France
- Pascaline Neveu
  - French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- Justin Plantier
  - French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- Richard Legras
  - LuMIn, CNRS, ENS Paris-Saclay, CentraleSupélec, Université Paris-Saclay, Orsay, France
3. Gu Z, Bao X, Zhu Y, Wang Q, Gao W, Wang D, Tian Y, Li Y. Rejecting small! Accepting larger? Optimizing the font size of Chinese characters basing legibility and visual fatigue in OST-HMDs. Ergonomics 2024:1-14. PMID: 38934640. DOI: 10.1080/00140139.2024.2369205.
Abstract
Font size strongly affects legibility and visual fatigue in OST-HMDs, but its effects on these factors remain underexplored. An experiment was conducted to investigate the effects of a wider range of Chinese character font sizes (0.32°-1°) on legibility and visual fatigue, and to determine the optimal font size. Results showed that 0.32° had the worst legibility, but there was no continuous improvement as font size increased. A larger font size was found to be beneficial in reducing visual fatigue until it reached 0.95°, beyond which visual fatigue increased again. Font sizes smaller than 0.32° should be rejected, while a larger font size does not always provide more benefit. Considering legibility, visual fatigue and efficiency of text presentation, 0.84° is a relatively optimal Chinese character font size.
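As a rough guide to what these angular sizes mean physically, the character height subtending a visual angle θ at distance d is h = 2·d·tan(θ/2). A small sketch, assuming a hypothetical 1 m viewing distance (the study reports sizes in visual angle only):

```python
import math

# Physical height subtending a visual angle theta at distance d:
# h = 2 * d * tan(theta / 2)
def char_height_mm(angle_deg: float, distance_mm: float) -> float:
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

# The recommended 0.84 deg character at an assumed 1 m viewing distance:
print(round(char_height_mm(0.84, 1000), 1))  # ~14.7 mm
```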
Affiliation(s)
- Zhengyin Gu
  - Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Xinle Bao
  - Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Ying Zhu
  - Hangzhou ROBAM Appliances Co., Ltd, Hangzhou, China
- Qijun Wang
  - Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Wei Gao
  - Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Duming Wang
  - Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yu Tian
  - National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing, China
- You Li
  - National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing, China
4. Al Ali H, Nassief S, Towers A, Field J, Martin N. The value of stereoscopic three-dimensional vision on dental students' performance in a virtual reality simulator. J Dent Educ 2024. PMID: 38923493. DOI: 10.1002/jdd.13630.
Abstract
PURPOSE/OBJECTIVES The aim of this study was to quantitatively investigate the impact of stereoscopic three-dimensional (3D) vision on students' performance when compared with that of two-dimensional (2D) vision in a 3D virtual reality (VR) simulator. METHODS Twenty-four dental students (second- and fourth-year BDS) were assigned to perform three operative tasks under 3D and 2D viewing conditions on a Virteasy (HRV) simulator. Groups were crossed over and all students performed the same tasks under the alternate viewing conditions. Performance was evaluated automatically from the generated feedback on (1) accuracy, (2) outside target area removal, and (3) tooth cutting time. RESULTS Twenty-one participants completed all sessions. The results revealed a statistically significant effect of 3D vision over 2D vision on students' performance in terms of accuracy (p = 0.035). Stereoscopic 3D vision showed a significant effect on outside target area removal in the first task (p = 0.035). Tooth cutting time was the same under both conditions (p = 0.766). The findings revealed improvement in accuracy score and reduction in outside target area removal over the course of the experiment under both conditions. Comparing the 3D effect between the early and advanced learning groups revealed no significant difference (p > 0.05). CONCLUSION Utilizing stereoscopic 3D vision in the training session improved students' perception of depth, which led to more accurate tooth cutting within the target area and less outside target area removal. However, 3D vision had a limited impact on task completion time.
Affiliation(s)
- Huda Al Ali
  - School of Clinical Dentistry, The University of Sheffield, Sheffield, UK
- Sarah Nassief
  - College of Dental Medicine, Umm Al-Qura University, Makkah, Saudi Arabia
- Ashley Towers
  - School of Clinical Dentistry, The University of Sheffield, Sheffield, UK
- Nicolas Martin
  - School of Clinical Dentistry, The University of Sheffield, Sheffield, UK
5. Emond W, Bohrmann D, Zare M. Will visual cues help alleviating motion sickness in automated cars? A review article. Ergonomics 2024; 67:772-800. PMID: 37981841. DOI: 10.1080/00140139.2023.2286187.
Abstract
This paper examines the feasibility of incorporating visual cueing systems within vehicles to mitigate the risk of experiencing motion sickness. The objective is to enhance passenger awareness and the ability to anticipate the forces associated with car travel motion. Through a comprehensive literature review, the findings demonstrate that visual cues can mitigate motion sickness for particular in-vehicle configurations, whereas their influence on situational awareness is not clear yet. Each type of visual cue proved more effective when presented in the peripheral field of view rather than solely in the central vision. Promising applications can be found within interactive screens and ambient lighting, while the use of extended reality shows potential for future investigations. In addition, integrating such systems into highly automated vehicles shows potential to improve their overall user acceptance.
Affiliation(s)
- William Emond
  - UTBM, ELLIADD-ERCOS, Belfort Cedex, France
  - Mercedes-Benz AG, Mercedes Technology Center, Sindelfingen, Germany
- Dominique Bohrmann
  - Mercedes-Benz AG, Mercedes Technology Center, Sindelfingen, Germany
  - University of Trier, Trier, Germany
6. Novotny J, Laidlaw DH. Evaluating Text Reading Speed in VR Scenes and 3D Particle Visualizations. IEEE Trans Vis Comput Graph 2024; 30:2602-2612. PMID: 38437104. DOI: 10.1109/tvcg.2024.3372093.
Abstract
This work reports how text size and other rendering conditions affect reading speeds in a virtual reality environment and a scientific data analysis application. Displaying text legibly yet space-efficiently is a challenging problem in immersive displays. Effective text displays that enable users to read at their maximum speed must consider the variety of virtual reality (VR) display hardware and possible visual exploration tasks. We investigate how text size and display parameters affect reading speed and legibility in three state-of-the-art VR displays: two head-mounted displays and one CAVE. In our perception experiments, we establish limits where reading speed declines as the text size approaches the so-called critical print sizes (CPS) of individual displays, which can inform the design of uniform reading experiences across different VR systems. We observe an inverse correlation between display resolution and CPS. Yet, even in high-fidelity VR systems, the measured CPS was larger than in comparable physical text displays, highlighting the value of increased VR display resolutions in certain visualization scenarios. Our findings indicate that CPS can be an effective metric for evaluating VR display usability. Additionally, we evaluate the effects of text panel placement, orientation, and occlusion-reducing rendering methods on reading speeds in generic volumetric particle visualizations. Our study provides insights into the trade-off between text representation and legibility in cluttered immersive environments, with specific suggestions for visualization designers, and highlights areas for further research.
7. Mikawa Y, Fukiage T. Low-Latency Ocular Parallax Rendering and Investigation of Its Effect on Depth Perception in Virtual Reality. IEEE Trans Vis Comput Graph 2024; 30:2228-2238. PMID: 38442067. DOI: 10.1109/tvcg.2024.3372078.
Abstract
With a demand for an immersive experience in virtual/augmented reality (VR/AR) displays, recent efforts have incorporated eye states, such as focus and fixation, into display graphics. Among these, ocular parallax, a small parallax generated by eye rotation, has received considerable attention for its impact on depth perception. However, the substantial latency of head-mounted displays (HMDs) has made it challenging to accurately assess its true effect during free eye movements. To address this issue, we propose a high-speed (360 Hz) and low-latency (4.8 ms) ocular parallax rendering system with a custom-built eye tracker. Using this proposed system, we conducted an investigation to determine the latency requirements necessary for achieving perceptually stable ocular parallax rendering. Our findings indicate that, in binocular viewing, ocular parallax rendering is perceived as significantly less stable than conventional rendering when the latency exceeds 43.72 ms at 1.3 D and 21.50 ms at 2.0 D. We also evaluated the effects of ocular parallax rendering on binocular fusion and monocular depth perception under free viewing conditions. The results demonstrated that ocular parallax rendering can enhance binocular fusion but has a limited impact on depth perception under monocular viewing conditions when latency is minimized.
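For context on the reported figures, the display's frame interval sets a floor on achievable rendering latency; a quick back-of-envelope check (illustrative only, not from the paper):

```python
# Frame interval in milliseconds for a given display refresh rate.
def frame_interval_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

interval = frame_interval_ms(360)
print(round(interval, 2))  # 2.78 ms per frame at 360 Hz
# The reported 4.8 ms end-to-end latency is under two frame intervals and
# well below the ~21.5 ms instability threshold the authors report at 2.0 D.
print(4.8 < 2 * interval and 4.8 < 21.5)  # True
```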
8. Alonso JR, Fernández A, Javidi B. Spatial perception in stereoscopic augmented reality based on multifocus sensing. Opt Express 2024; 32:5943-5955. PMID: 38439309. DOI: 10.1364/oe.510688.
Abstract
In many areas ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and later combined by post-capture algorithms based on an image formation model to synthesize images with novel viewpoints of the scene. Stereoscopic augmented reality devices, through which it is possible to simultaneously visualize the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content allowed by multifocus computational imaging. Spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired point of view of each image of the stereo pair along with their parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and reduce viewers' susceptibility to visual fatigue with stereoscopic augmented reality devices.
9. Cooper EA. The Perceptual Science of Augmented Reality. Annu Rev Vis Sci 2023; 9:455-478. PMID: 36944311. DOI: 10.1146/annurev-vision-111022-123758.
Abstract
Augmented reality (AR) systems aim to alter our view of the world and enable us to see things that are not actually there. The resulting discrepancy between perception and reality can create compelling entertainment and can support innovative approaches to education, guidance, and assistive tools. However, building an AR system that effectively integrates with our natural visual experience is hard. AR systems often suffer from visual limitations and artifacts, and addressing these flaws requires basic knowledge of perception. At the same time, AR system development can serve as a catalyst that drives innovative new research in perceptual science. This review describes recent perceptual research pertinent to and driven by modern AR systems, with the goal of highlighting thought-provoking areas of inquiry and open questions.
Affiliation(s)
- Emily A Cooper
  - Herbert Wertheim School of Optometry & Vision Science, Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA
10. Mevlevioğlu D, Tabirca S, Murphy D. Anxiety classification in virtual reality using biosensors: A mini scoping review. PLoS One 2023; 18:e0287984. PMID: 37428748. DOI: 10.1371/journal.pone.0287984.
Abstract
BACKGROUND Anxiety prediction can be used for enhancing Virtual Reality applications. We aimed to assess the evidence on whether anxiety can be accurately classified in Virtual Reality. METHODS We conducted a scoping review using Scopus, Web of Science, IEEE Xplore, and ACM Digital Library as data sources. Our search included studies from 2010 to 2022. Our inclusion criteria were peer-reviewed studies which take place in a Virtual Reality environment and assess the user's anxiety using machine learning classification models and biosensors. RESULTS 1749 records were identified and out of these, 11 (n = 237) studies were selected. Studies had varying numbers of outputs, from two outputs to eleven. Accuracy of anxiety classification for two-output models ranged from 75% to 96.4%; accuracy for three-output models ranged from 67.5% to 96.3%; accuracy for four-output models ranged from 38.8% to 86.3%. The most commonly used measures were electrodermal activity and heart rate. CONCLUSION Results show that it is possible to create high-accuracy models to determine anxiety in real time. However, it should be noted that there is a lack of standardisation when it comes to defining ground truth for anxiety, making these results difficult to interpret. Additionally, many of these studies included small samples consisting of mostly students, which may bias the results. Future studies should be very careful in defining anxiety and aim for a more inclusive and larger sample. It is also important to research the application of the classification by conducting longitudinal studies.
Affiliation(s)
- Deniz Mevlevioğlu
  - Department of Computer Science, University College Cork, Cork, Ireland
- Sabin Tabirca
  - Department of Computer Science, University College Cork, Cork, Ireland
  - Faculty of Mathematics and Informatics, Transylvania University of Brasov, Brașov, Romania
- David Murphy
  - Department of Computer Science, University College Cork, Cork, Ireland
11. Souchet AD, Lourdeaux D, Burkhardt JM, Hancock PA. Design guidelines for limiting and eliminating virtual reality-induced symptoms and effects at work: a comprehensive, factor-oriented review. Front Psychol 2023; 14:1161932. PMID: 37359863. PMCID: PMC10288216. DOI: 10.3389/fpsyg.2023.1161932.
Abstract
Virtual reality (VR) can induce side effects known as virtual reality-induced symptoms and effects (VRISE). To address this concern, we compile a literature-based listing of factors thought to influence VRISE, with a focus on office work. From these, we recommend guidelines for VRISE amelioration intended for virtual environment creators and users. We identify five VRISE risks, focusing on short-term symptoms and their short-term effects. Three overall factor categories are considered: individual, hardware, and software. Over 90 factors may influence VRISE frequency and severity. We identify guidelines for each factor to help reduce VR side effects. To better reflect our confidence in those guidelines, we graded each with a level-of-evidence rating. Common factors occasionally influence different forms of VRISE, which can lead to confusion in the literature. General guidelines for using VR at work involve worker adaptation, such as limiting immersion times to between 20 and 30 min and taking regular breaks. Extra care is required for workers with special needs, neurodiversity, and gerontechnological concerns. In addition to following our guidelines, stakeholders should be aware that current head-mounted displays and virtual environments can continue to induce VRISE. While no single existing method fully alleviates VRISE, workers' health and safety must be monitored and safeguarded when VR is used at work.
Affiliation(s)
- Alexis D. Souchet
  - Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
  - Institute for Creative Technologies, University of Southern California, Los Angeles, CA, United States
- Domitile Lourdeaux
  - Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
- Peter A. Hancock
  - Department of Psychology, University of Central Florida, Orlando, FL, United States
12. Kim SK, Kwon Y, Yoon KH. Extended depth of field in augmented reality. Sci Rep 2023; 13:8786. PMID: 37258690. DOI: 10.1038/s41598-023-35819-9.
Abstract
The 3D display device shows an image with depth information. Conventional 3D display devices based on binocular parallax can focus accurately only on the depth of a specific screen. Because the human eye has a narrow depth of field (DOF) under normal circumstances, 3D displays that provide a relatively wide range of virtual depth areas have limitations on the DOF where clear 3D images are seen. To resolve this problem, it is necessary to find the optical conditions to extend the DOF and analyze the phenomena related to it. For this, by using the Rayleigh criterion and the Strehl ratio, a criterion for this extension of the DOF is suggested. A practical optical structure that can effectively extend the DOF is devised using a flat panel display. This optical structure could be applied to AR, VR, and MR in the field of near-eye displays. From the results of this research, the fundamental optical conditions and standards are proposed for 3D displays that will provide 3D images with extended DOF in the future. Furthermore, it is also expected that these conditions and criteria can be applied to optical designs for the required performance in the development of 3D displays in various fields.
Affiliation(s)
- Sung Kyu Kim
  - Center for Artificial Intelligence, Korea Institute of Science and Technology, Seoul, 136-791, South Korea
- Yongjoon Kwon
  - Department of Physics, Seoul Science High School, Seoul, 03066, South Korea
- Ki-Hyuk Yoon
  - Center for Artificial Intelligence, Korea Institute of Science and Technology, Seoul, 136-791, South Korea
13. Hasan MM, Hossain MA, Alotaibi N, Arnold JF, Azad AKM. Binocular Rivalry Impact on Macroblock-Loss Error Concealment for Stereoscopic 3D Video Transmission. Sensors (Basel) 2023; 23:3604. PMID: 37050665. PMCID: PMC10098650. DOI: 10.3390/s23073604.
Abstract
Three-dimensional video services delivered through wireless communication channels face numerous challenges due to the limitations of both the transmission channel's bandwidth and the receiving devices. Adverse channel conditions, delays, or jitter can result in bit errors and packet losses, which can alter the appearance of stereoscopic 3D (S3D) video. When the two eyes perceive dissimilar patterns, these cannot be fused into a stable composite pattern in the brain, and each eye's view competes for dominance by suppressing the other; this psychovisual sensation is called binocular rivalry. The result is an irritating flickering effect that leads to visual discomfort such as eye strain, headache, nausea, and weariness. This study addresses the observer's quality of experience (QoE) by analyzing the impact of binocular rivalry on macroblock (MB) losses in a frame and on their error propagation due to predictive frame encoding in stereoscopic video transmission systems. The Joint Test Model (JM) reference software, recommended by the International Telecommunication Union (ITU), was used to simulate the processing of the experimental videos. Existing error-concealment techniques were then applied to the contiguous lost MBs for a variety of transmission impairments. Several objective evaluations were carried out to validate the simulated packet-loss environment. A standard number of subjects then took part in subjective testing of common 3D video sequences. The results were statistically examined using a Student's t-test, allowing the impact of binocular rivalry to be compared with that of a non-rivalry error condition. The major goal is to ensure error-free video communication by minimizing the negative impacts of binocular rivalry and improving the ability to efficiently integrate 3D video material, thereby improving viewers' overall QoE.
Affiliation(s)
- Md Mehedi Hasan
  - Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka 1000, Bangladesh
- Md. Azam Hossain
  - Department of Computer Science and Engineering, Islamic University of Technology, Gazipur 1704, Bangladesh
- Naif Alotaibi
  - Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- John F. Arnold
  - School of Engineering and Information Technology, University of New South Wales, Canberra 2600, Australia
- AKM Azad
  - Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
14. Denkinger S, Antoniou MP, Tarello D, Levi DM, Backus BT, Bavelier D, Chopin A. The eRDS v6 Stereotest and the Vivid Vision Stereo Test: Two New Tests of Stereoscopic Vision. Transl Vis Sci Technol 2023; 12:1. PMID: 36857068. PMCID: PMC9987163. DOI: 10.1167/tvst.12.3.1.
Abstract
Purpose To describe two new stereoacuity tests: the eRDS v6 stereotest, a global dynamic random dot stereogram (dRDS) test, and the Vivid Vision Stereo Test version 2 (VV), a local or "contour" stereotest for virtual reality (VR) headsets; and to evaluate the tests' reliability, validity compared to a dRDS standard, and learning effects. Methods Sixty-four subjects passed a battery of stereotests, including perceiving depth from RDS. Validity was evaluated relative to a tablet-based dRDS reference test, ASTEROID. Reliability and learning effects were assessed over six sessions. Results eRDS v6 was effective at measuring small thresholds (<10 arcsec) and had a moderate correlation (0.48) with ASTEROID. Across the six sessions, test-retest reliability was good, varying from 0.84 to 0.91, but learning occurred across the first three sessions. VV did not measure stereoacuities below 15 arcsec. It had a weak correlation with ASTEROID (0.27), and test-retest reliability was poor to moderate, varying from 0.35 to 0.74; however, no learning occurred between sessions. Conclusions eRDS v6 is precise and reliable but shows learning effects. If repeated three times at baseline, this test is well suited as an outcome measure for testing interventions. VV is less precise, but it is easy and rapid and shows no learning. It may be useful for testing interventions in patients who have no global stereopsis. Translational Relevance eRDS v6 is well suited as an outcome measure to evaluate treatments that improve adult stereodepth perception. VV can be considered for screening patient with compromised stereovision.
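To relate stereoacuity values like 10 arcsec to physical depth, the standard small-angle geometry gives Δd ≈ η·d²/IPD. An illustrative sketch with an assumed 65 mm interpupillary distance and 0.5 m viewing distance (neither figure is from the paper):

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

# Smallest resolvable depth difference (small-angle approximation):
# delta_d ~= eta * d**2 / ipd, with eta (stereoacuity) in radians.
def depth_threshold_mm(stereoacuity_arcsec: float, distance_m: float,
                       ipd_m: float = 0.065) -> float:
    eta_rad = stereoacuity_arcsec * ARCSEC_TO_RAD
    return eta_rad * distance_m ** 2 / ipd_m * 1000.0

# A 10 arcsec observer at 0.5 m can resolve depth steps of roughly:
print(round(depth_threshold_mm(10, 0.5), 2))  # ~0.19 mm
```

The quadratic dependence on distance is why stereo thresholds measured in arcsec translate to very fine depth resolution at near distances but coarse resolution far away.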
Collapse
Affiliation(s)
- Sylvie Denkinger
- Psychology and Education Sciences, University of Geneva, Switzerland; Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Maria-Paraskevi Antoniou
- Psychology and Education Sciences, University of Geneva, Switzerland; Institute of Information Systems, University of Applied Sciences & Arts Western Switzerland (HES-SO) Valais-Wallis, Sierre, Switzerland
- Demetrio Tarello
- Psychology and Education Sciences, University of Geneva, Switzerland
- Dennis M Levi
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Daphné Bavelier
- Psychology and Education Sciences, University of Geneva, Switzerland; Psychology and Education Sciences, University of Geneva & Campus Biotech, Switzerland
- Adrien Chopin
- Psychology and Education Sciences, University of Geneva, Switzerland; Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
15
Rzepka AM, Hussey KJ, Maltz MV, Babin K, Wilcox LM, Culham JC. Familiar size affects perception differently in virtual reality and the real world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210464. [PMID: 36511414 PMCID: PMC9745877 DOI: 10.1098/rstb.2021.0464] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 08/10/2022] [Indexed: 12/15/2022] Open
Abstract
The promise of virtual reality (VR) as a tool for perceptual and cognitive research rests on the assumption that perception in virtual environments generalizes to the real world. Here, we conducted two experiments to compare size and distance perception between VR and physical reality (Maltz et al. 2021 J. Vis. 21, 1-18). In experiment 1, we used VR to present dice and Rubik's cubes at their typical sizes or reversed sizes at distances that maintained a constant visual angle. After viewing the stimuli binocularly (to provide vergence and disparity information) or monocularly, participants manually estimated perceived size and distance. Unlike physical reality, where participants relied less on familiar size and more on presented size during binocular versus monocular viewing, in VR participants relied heavily on familiar size regardless of the availability of binocular cues. In experiment 2, we demonstrated that the effects in VR generalized to other stimuli and to a higher quality VR headset. These results suggest that the use of binocular cues and familiar size differs substantially between virtual and physical reality. A deeper understanding of perceptual differences is necessary before assuming that research outcomes from VR will generalize to the real world. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Anna M. Rzepka
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Kieran J. Hussey
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Margaret V. Maltz
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Karsten Babin
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Laurie M. Wilcox
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Jody C. Culham
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
16
Browning MHEM, Shin S, Drong G, McAnirlin O, Gagnon RJ, Ranganathan S, Sindelar K, Hoptman D, Bratman GN, Yuan S, Prabhu VG, Heller W. Daily exposure to virtual nature reduces symptoms of anxiety in college students. Sci Rep 2023; 13:1239. [PMID: 36690698 PMCID: PMC9868517 DOI: 10.1038/s41598-023-28070-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 01/12/2023] [Indexed: 01/24/2023] Open
Abstract
Exposure to natural environments offers an array of mental health benefits. Virtual reality provides simulated experiences of being in nature when outdoor access is limited. Previous studies on virtual nature have focused mainly on single "doses" of virtual nature. The effects of repeated exposure remain poorly understood. Motivated by this gap, we studied the influence of a daily virtual nature intervention on symptoms of anxiety, depression, and an underlying cause of poor mental health: rumination. Forty college students (58% non-Hispanic White, median age = 19) were recruited from two U.S. universities and randomly assigned to the intervention or control group. Over several weeks, anxious arousal (panic) and anxious apprehension (worry) decreased with virtual nature exposure. Participants identifying as women, past VR users, experienced with the outdoors, and engaged with the beauty in nature benefited particularly strongly from virtual nature. Virtual nature did not help symptoms of anhedonic depression or rumination. Further research is necessary to distinguish when and for whom virtual nature interventions impact mental health outcomes.
Affiliation(s)
- Matthew H E M Browning
- Virtual Reality and Nature Lab, Clemson University, Clemson, SC, USA.
- Department of Parks, Recreation and Tourism Management, Clemson University, Clemson, SC, USA.
- Seunguk Shin
- Department of Natural Resources and Environmental Sciences, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Gabrielle Drong
- College of Education, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Olivia McAnirlin
- Virtual Reality and Nature Lab, Clemson University, Clemson, SC, USA
- Department of Parks, Recreation and Tourism Management, Clemson University, Clemson, SC, USA
- Ryan J Gagnon
- Department of Parks, Recreation and Tourism Management, Clemson University, Clemson, SC, USA
- Shyam Ranganathan
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC, USA
- Gregory N Bratman
- School of Environmental and Forest Sciences, University of Washington, Seattle, WA, USA
- Shuai Yuan
- Virtual Reality and Nature Lab, Clemson University, Clemson, SC, USA
- Department of Parks, Recreation and Tourism Management, Clemson University, Clemson, SC, USA
- Vishnunarayan Girishan Prabhu
- Virtual Reality and Nature Lab, Clemson University, Clemson, SC, USA
- Systems Engineering and Engineering Management, University of North Carolina at Charlotte, Charlotte, NC, USA
- Wendy Heller
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
17
Chen Y, Ma T, Ye Z, Li Z. Effect of illuminance and colour temperature of LED lighting on asthenopia during reading. Ophthalmic Physiol Opt 2023; 43:73-82. [PMID: 36181399 DOI: 10.1111/opo.13051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 08/19/2022] [Accepted: 08/19/2022] [Indexed: 12/27/2022]
Abstract
PURPOSE A self-controlled study to determine the influence of illuminance and correlated colour temperature (CCT) of light-emitting diode (LED) lighting on asthenopia. METHODS Twenty-two healthy postgraduates (nine women) were recruited to read under eight LED lighting conditions with four illuminances (300 lx, 500 lx, 750 lx and 1000 lx) and four CCTs (2700, 4000, 5000 and 6500 K) for 2 h. A subjective asthenopia questionnaire, the optical quality analysis system (OQAS) and an inflammatory cytokine assay were used to assess the levels of asthenopia. RESULTS Increased asthenopia was observed after reading, but the degree varied with lighting conditions. There were significant differences among the groups in terms of subjective symptoms (inattention, eye pain, dry eye and total score), optical performance parameters (modulation transfer function [MTF] cut-off frequency, Strehl ratio [SR], objective scattering index [OSI], mean OSI and accommodative amplitude [AA]) as well as inflammatory cytokines in the tears (epidermal growth factor [EGF], transforming growth factor [TGF]-α, interleukin [IL]-6, IL-8, macrophage inflammatory protein [MIP]-1β, tumour necrosis factor [TNF]-α, TNF-β and vascular endothelial growth factor [VEGF]-A). All of the subjective and objective measurements collectively suggested that asthenopia was lessened for the 500 lx-4000 K condition. However, asthenopia was significantly worse for 300 lx-2700 K and 1000 lx-6500 K in terms of subjective symptoms and objective optical performance, respectively. CONCLUSIONS LED illuminance and CCT do have a significant effect on asthenopia during reading. 500 lx-4000 K lighting resulted in the lowest level of asthenopia. Conversely, low illuminance at low CCT (300 lx-2700 K) and high illuminance at high CCT (1000 lx-6500 K) promoted more severe asthenopia.
Affiliation(s)
- Yilin Chen
- School of Medicine, Nankai University, Tianjin, China
- Tianju Ma
- Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
- Zi Ye
- School of Medicine, Nankai University, Tianjin, China; Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
- Zhaohui Li
- School of Medicine, Nankai University, Tianjin, China; Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
18
Han W, Han J, Ju YG, Jang J, Park JH. Super multi-view near-eye display with a lightguide combiner. Opt Express 2022; 30:46383-46403. [PMID: 36558594 DOI: 10.1364/oe.477517] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 11/25/2022] [Indexed: 06/17/2023]
Abstract
We propose a lightguide-type super multi-view near-eye display that uses a digital micromirror device and an LED array. The proposed method presents three-dimensional images with a natural monocular depth cue using compact combiner optics consisting of a thin lightguide and holographic optical elements (HOEs). The feasibility of the proposed method is verified by optical experiments that demonstrate monocular three-dimensional image presentation over a wide depth range. We also analyze the degradation of image quality stemming from the spectral spread of the HOEs and show its reduction by a pre-compensation exploiting an adaptive moment estimation (Adam) optimizer.
19
Lema AK, Anbesu EW. Computer vision syndrome and its determinants: A systematic review and meta-analysis. SAGE Open Med 2022; 10:20503121221142402. [PMCID: PMC9743027 DOI: 10.1177/20503121221142402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Accepted: 11/14/2022] [Indexed: 12/14/2022] Open
Abstract
Objective: Computer vision syndrome is becoming a major public health concern, and findings on it have been inconsistent. This systematic review and meta-analysis aimed to estimate the pooled prevalence of computer vision syndrome and identify its determinants. Methods: The review was developed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Online electronic databases, including PubMed/Medline, CINAHL, and Google Scholar, were used to retrieve studies from 1 December to 9 April 2022. Quality assessment of the studies was performed using the JBI-MAStARI. RevMan and STATA 14 software were used for statistical analysis. Results: A total of 725 studies were retrieved, and 49 studies were included. The pooled prevalence of computer vision syndrome was 66% (95% confidence interval [CI]: 59, 74). Being female (odds ratio [OR] = 1.74, 95% CI [1.2, 2.53]), improper body posture while using electronic devices (OR = 2.65, 95% CI [1.7, 4.12]), use of electronic devices outside of work (OR = 1.66, 95% CI [1.15, 2.39]), no habit of taking breaks (OR = 2.24, 95% CI [1.13, 4.44]), long duration of visual display terminal use (OR = 2.02, 95% CI [1.08, 3.77]), a short screen distance (OR = 4.24, 95% CI [2.33, 7.71]), and general ergonomic practice (OR = 3.87, 95% CI [2.18, 6.86]) were associated with increased odds of computer vision syndrome, whereas good knowledge of computer vision syndrome (OR = 4.04, 95% CI [2.75, 5.94]) was associated with decreased odds. Conclusion: Nearly two in three participants had computer vision syndrome; being female, improper body posture, use of electronic devices outside of work, no habit of taking breaks, long duration of visual display terminal use, a short screen distance, and general ergonomic practice were associated with increased odds of computer vision syndrome.
Affiliation(s)
- Asamene Kelelom Lema
- Department of Computer Science, College of Engineering and Technology, Samara University, Samara, Ethiopia
- Etsay Woldu Anbesu
- Department of Public Health, College of Medical and Health Sciences, Samara University, Samara, Ethiopia
- Correspondence: Etsay Woldu Anbesu, Department of Public Health, College of Medical and Health Sciences, Samara University, 132, Semera, Afar region, Ethiopia
20
Akagi R, Sato H, Hirayama T, Hirata K, Kokubu M, Ando S. Effects of three-dimension movie visual fatigue on cognitive performance and brain activity. Front Hum Neurosci 2022; 16:974406. [PMID: 36337858 PMCID: PMC9626648 DOI: 10.3389/fnhum.2022.974406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 10/04/2022] [Indexed: 11/13/2022] Open
Abstract
To further develop three-dimensional (3D) applications, it is important to elucidate their negative effects on the human body and mind. This study therefore investigated the effects of visual fatigue induced by watching a 1-h movie in two dimensions (2D) and 3D on cognition and brain activity, using visual and auditory tasks. Eighteen young men participated in this study. Two conditions were performed in random order by each participant on different days: watching the 1-h movie on television in 2D (control condition) or in 3D (3D condition). Before and after watching the movie, critical flicker fusion frequency (CFF; an index of visual fatigue) and response accuracy and reaction time on the cognitive tasks were determined. Brain activity during the cognitive tasks was evaluated using a multi-channel near-infrared spectroscopy system. In the 3D condition, in contrast to the control condition, the post-viewing decrease in CFF, the lengthened reaction time, and the decreased activity around the right primary somatosensory cortex during Go/NoGo blocks of the visual task were significant, with significant repeated-measures correlations among them. In the auditory task, by contrast, changes in cognitive performance and brain activity during the Go/NoGo blocks were not significant in the 3D condition. These results suggest that failure or delay in the transmission of visual information to the primary somatosensory cortex, due to visual fatigue induced by watching a 3D movie, reduced brain activity around the primary somatosensory cortex, resulting in poor cognitive performance on the visual task. Performing tasks that require visual information, such as running in the dark or driving a car, immediately after using a 3D application may therefore create unexpected risks. The findings of this study will help in outlining precautions for the use of 3D applications.
Affiliation(s)
- Ryota Akagi
- College of Systems Engineering and Science, Shibaura Institute of Technology, Saitama, Japan
- Graduate School of Engineering and Science, Shibaura Institute of Technology, Saitama, Japan
- Hiroki Sato
- College of Systems Engineering and Science, Shibaura Institute of Technology, Saitama, Japan
- Graduate School of Engineering and Science, Shibaura Institute of Technology, Saitama, Japan
- Tatsuya Hirayama
- College of Systems Engineering and Science, Shibaura Institute of Technology, Saitama, Japan
- Kosuke Hirata
- Faculty of Sport Sciences, Waseda University, Tokorozawa, Japan
- Masahiro Kokubu
- Faculty of Health and Sport Sciences, University of Tsukuba, Tsukuba, Japan
- Soichi Ando
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
21
Neural Research on Depth Perception and Stereoscopic Visual Fatigue in Virtual Reality. Brain Sci 2022; 12:brainsci12091231. [PMID: 36138967 PMCID: PMC9497221 DOI: 10.3390/brainsci12091231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Revised: 09/04/2022] [Accepted: 09/07/2022] [Indexed: 11/29/2022] Open
Abstract
Virtual reality (VR) technology provides highly immersive depth perception experiences; nevertheless, stereoscopic visual fatigue (SVF) has become an important factor hindering the development of VR applications. There is scant research on the underlying neural mechanism of SVF, especially SVF induced by VR displays, and further study is needed. In this paper, a Go/NoGo paradigm based on disparity variations is proposed to induce SVF associated with depth perception, and the underlying neural mechanism of SVF in a VR environment was investigated. The effects of disparity variations and of SVF on the temporal characteristics of visual evoked potentials (VEPs) were explored. Point-by-point permutation statistics with repeated-measures ANOVA revealed that the amplitudes and latencies of the posterior VEP component P2 were modulated by disparity, and that posterior P2 amplitudes were modulated differently by SVF in different depth perception situations. Cortical source localization analysis was performed to explore the cortical areas associated with particular fatigue levels and disparities; the results showed that the posterior P2 generated from the precuneus could represent depth perception in binocular vision and could therefore be used to distinguish SVF induced by disparity variations. Our findings help extend the understanding of the neural mechanisms underlying depth perception and SVF and provide useful information for improving the visual experience in VR applications.
22
Image-guided pelvic exenteration-preoperative and intraoperative strategies. Eur J Surg Oncol 2022; 48:2263-2276. [DOI: 10.1016/j.ejso.2022.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 07/19/2022] [Accepted: 08/01/2022] [Indexed: 12/19/2022] Open
23
Zhang K, Qu Z, Zhong X, Chen Q, Zhang X. Design of binocular stereo vision optical system based on a single lens and a single sensor. Appl Opt 2022; 61:6690-6696. [PMID: 36255746 DOI: 10.1364/ao.461564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 07/13/2022] [Indexed: 06/16/2023]
Abstract
To reduce the number of detectors used in conventional binocular stereo cameras while improving the measurement accuracy and compactness of the system, this paper proposes a design method for a binocular stereo vision optical system based on a single lens and a single sensor. First, based on the design principle of the traditional binocular optical system, a novel method is proposed, to the best of our knowledge, in which a framing lens array placed at the aperture stop of the optical system forms two images on one detector simultaneously. Then, the design principle of the method is analyzed theoretically, along with a detailed analysis of the imaging position layout and the stray-light elimination method of the dual-channel optical system. Finally, a single-lens binocular optical system with a focal length of 20 mm and a full field of view of 30° is designed using this method; the analysis results demonstrate that the system offers good imaging quality and compact construction, and the work provides a design approach for binocular stereo vision optical systems.
24
Cohen-Lazry G, Degani A, Oron-Gilad T, Hancock PA. Discomfort: an assessment and a model. Theor Issues Ergon Sci 2022. [DOI: 10.1080/1463922x.2022.2103201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
Affiliation(s)
- Guy Cohen-Lazry
- Human Factors Laboratory, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be’er Sheva, Israel
- Asaf Degani
- General Motors Advanced Technical Center, Herzliya, Israel
- Tal Oron-Gilad
- Human Factors Laboratory, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be’er Sheva, Israel
- P. A. Hancock
- MIT2 Laboratory, IST School of Modelling, Simulation and Training, University of Central Florida, Orlando, FL, USA
25
Alonso JR, Silva A, Fernández A, Arocena M. Computational multifocus fluorescence microscopy for three-dimensional visualization of multicellular tumor spheroids. J Biomed Opt 2022; 27:JBO-210320SSR. [PMID: 35655357 PMCID: PMC9162503 DOI: 10.1117/1.jbo.27.6.066501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 05/23/2022] [Indexed: 05/27/2023]
Abstract
SIGNIFICANCE Three-dimensional (3D) visualization of multicellular tumor spheroids (MCTS) in fluorescence microscopy can rapidly provide qualitative morphological information about the architecture of these cellular aggregates, which can recapitulate key aspects of their in vivo counterpart. AIM The present work is aimed at overcoming the shallow depth-of-field (DoF) limitation in fluorescence microscopy while achieving 3D visualization of thick biological samples under study. APPROACH A custom-built fluorescence microscope with an electrically focus-tunable lens was developed to optically sweep in-depth the structure of MCTS. Acquired multifocus stacks were combined by means of postprocessing algorithms performed in the Fourier domain. RESULTS Images with relevant characteristics as extended DoF, stereoscopic pairs as well as reconstructed viewpoints of MCTS were obtained without segmentation of the focused regions or estimation of the depth map. The reconstructed images allowed us to observe the 3D morphology of cell aggregates. CONCLUSIONS Computational multifocus fluorescence microscopy can provide 3D visualization in MCTS. This tool is a promising development in assessing the morphological structure of different cellular aggregates while preserving a robust yet simple optical setup.
Affiliation(s)
- Julia R. Alonso
- Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Alejandro Silva
- Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Ariel Fernández
- Universidad de la República, Instituto de Física, Facultad de Ingeniería, Montevideo, Uruguay
- Miguel Arocena
- Instituto de Investigaciones Biológicas Clemente Estable, Departamento de Genómica, Montevideo, Uruguay
- Universidad de la República, Cátedra de Bioquímica y Biofísica, Facultad de Odontología, Montevideo, Uruguay
26
Tian P, Xu G, Han C, Zheng X, Zhang K, Du C, Wei F, Zhang S. Effects of Paradigm Color and Screen Brightness on Visual Fatigue in Light Environment of Night Based on Eye Tracker and EEG Acquisition Equipment. Sensors 2022; 22:s22114082. [PMID: 35684700 PMCID: PMC9185549 DOI: 10.3390/s22114082] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 05/25/2022] [Accepted: 05/26/2022] [Indexed: 12/04/2022]
Abstract
Nowadays, more people tend to go to bed late and spend pre-sleep time with various electronic devices. At the same time, brain-computer interface (BCI) rehabilitation equipment uses a visual display, so visual fatigue must be evaluated to avoid compromising the training effect. It is therefore important to understand how using electronic devices in a dark environment at night affects human visual fatigue. In this study, stimulation paradigms in different colors were written in Matlab and presented on a 4K display with adjustable screen brightness; an eye tracker and g.tec electroencephalography (EEG) equipment collected the signals, and after data processing and analysis, the influence of combinations of paradigm color and screen brightness on human visual fatigue in a dark environment was obtained. Subjects gave subjective ratings (Likert scale), and objective signals (pupil diameter, θ + α frequency band data) were collected in a dark environment (<3 lx). The Likert scale showed that low screen brightness in the dark environment reduced the subjects' visual fatigue, and participants preferred blue to red. The pupil data revealed that visual perception sensitivity was more vulnerable to stimulation at medium and high screen brightness, which deepens visual fatigue more readily. The EEG frequency band data showed no significant difference in visual fatigue across paradigm colors and screen brightness levels. On this basis, this paper puts forward a new index, the visual anti-fatigue index, which provides a valuable reference for optimizing the indoor living environment, improving satisfaction with electronic and BCI rehabilitation equipment, and protecting human eyes.
Affiliation(s)
- Peiyuan Tian
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Guanghua Xu
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Chengcheng Han
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Xiaowei Zheng
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Kai Zhang
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Chenghang Du
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Fan Wei
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Sicong Zhang
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
27
Arefin MS, Phillips N, Plopski A, Gabbard JL, Swan JE. The Effect of Context Switching, Focal Switching Distance, Binocular and Monocular Viewing, and Transient Focal Blur on Human Performance in Optical See-Through Augmented Reality. IEEE Trans Vis Comput Graph 2022; 28:2014-2025. [PMID: 35167470 DOI: 10.1109/tvcg.2022.3150503] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
In optical see-through augmented reality (AR), information is often distributed between real and virtual contexts, and often appears at different distances from the user. To integrate information, users must repeatedly switch context and change focal distance. If the user's task is conducted under time pressure, they may attempt to integrate information while their eye is still changing focal distance, a phenomenon we term transient focal blur. Previously, Gabbard, Mehra, and Swan (2018) examined these issues, using a text-based visual search task on a one-eye optical see-through AR display. This paper reports an experiment that partially replicates and extends this task on a custom-built AR Haploscope. The experiment examined the effects of context switching, focal switching distance, binocular and monocular viewing, and transient focal blur on task performance and eye fatigue. Context switching increased eye fatigue but did not decrease performance. Increasing focal switching distance increased eye fatigue and decreased performance. Monocular viewing also increased eye fatigue and decreased performance. The transient focal blur effect resulted in additional performance decrements, and is an addition to knowledge about AR user interface design issues.
28
Augmented Reality and Virtual Reality in Dentistry: Highlights from the Current Research. Appl Sci (Basel) 2022. [DOI: 10.3390/app12083719] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Many modern advancements have taken place in dentistry that have exponentially impacted the progress and practice of dentistry. Augmented reality (AR) and virtual reality (VR) are becoming the trend in the practice of modern dentistry because of their impact on changing the patient’s experience. The use of AR and VR has been beneficial in different fields of science, but their use in dentistry is yet to be thoroughly explored, and conventional ways of dentistry are still practiced at large. Over the past few years, dental treatment has been significantly reshaped by technological advancements. In dentistry, the use of AR and VR systems has not become widespread, but their different uses should be explored. Therefore, the aim of this review was to provide an update on the contemporary knowledge, to report on the ongoing progress of AR and VR in various fields of dental medicine and education, and to identify the further research required to achieve their translation into clinical practice. A literature search was performed in PubMed, Scopus, Web of Science, and Google Scholar for articles in peer-reviewed English-language journals published in the last 10 years up to 31 March 2021, with the help of specific keywords related to AR and VR in various dental fields. Of the total of 101 articles found in the literature search, 68 abstracts were considered suitable and further evaluated, and consequently, 33 full-texts were identified. Finally, a total of 13 full-texts were excluded from further analysis, resulting in 20 articles for final inclusion. The overall number of studies included in this review was low; thus, at this point in time, scientifically-proven recommendations could not be stated. AR and VR have been found to be beneficial tools for clinical practice and for enhancing the learning experiences of students during their pre-clinical education and training sessions. 
Clinicians can use VR technology to show their patients the expected outcomes before they undergo dental procedures. Additionally, AR and VR can be implemented to overcome dental phobia, which is commonly experienced by pediatric patients. Future studies should focus on establishing technological standards with high-quality data and developing scientifically proven AR/VR devices for dental practice.
Collapse
|
29
|
Boo H, Lee YS, Yang H, Matthews B, Lee TG, Wong CW. Metasurface wavefront control for high-performance user-natural augmented reality waveguide glasses. Sci Rep 2022; 12:5832. [PMID: 35388053 PMCID: PMC8986769 DOI: 10.1038/s41598-022-09680-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Accepted: 01/20/2022] [Indexed: 11/17/2022] Open
Abstract
Augmented reality (AR) devices, such as smart glasses, enable users to see both the real world and virtual images simultaneously, contributing to an immersive experience in interactions and visualization. Recently, to reduce the size and weight of smart glasses, waveguides incorporating holographic optical elements in the form of advanced grating structures have been utilized as lightweight alternatives to bulky helmet-type headsets. However, current waveguide displays often have limited display resolution, efficiency, and field of view, and involve complex multi-step fabrication processes with low yield. In addition, current AR displays often suffer vergence-accommodation conflict between the augmented and virtual images, resulting in focus-related visual fatigue and eye strain. Here we report metasurface optical elements designed and experimentally implemented as a platform solution to overcome these limitations. Through careful dispersion control of the excited propagation and diffraction modes, we design and implement our high-resolution full-color prototype via a combination of analytical-numerical simulations, nanofabrication, and device measurements. With metasurface control of the light propagation, our prototype device achieves 1080-pixel resolution, a field of view of more than 40°, and an overall input-output efficiency of more than 1%, and addresses the vergence-accommodation conflict through our focal-free implementation. Furthermore, our AR waveguide is realized in a single metasurface-waveguide layer, aiding scalability and process yield control.
Collapse
Affiliation(s)
- Hyunpil Boo
- Mesoscopic Optics and Quantum Electronics Laboratory, University of California, Los Angeles, CA, USA.
| | - Yoo Seung Lee
- Mesoscopic Optics and Quantum Electronics Laboratory, University of California, Los Angeles, CA, USA.
| | - Hangbo Yang
- Mesoscopic Optics and Quantum Electronics Laboratory, University of California, Los Angeles, CA, USA.
| | - Brian Matthews
- Nanofabrication Laboratory, University of California, Los Angeles, CA, USA
| | - Tom G Lee
- Nanofabrication Laboratory, University of California, Los Angeles, CA, USA
| | - Chee Wei Wong
- Mesoscopic Optics and Quantum Electronics Laboratory, University of California, Los Angeles, CA, USA.
| |
Collapse
|
30
|
A Study on the Design of Vision Protection Products Based on Children’s Visual Fatigue under Online Learning Scenarios. Healthcare (Basel) 2022; 10:healthcare10040621. [PMID: 35455799 PMCID: PMC9024956 DOI: 10.3390/healthcare10040621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/23/2022] [Accepted: 03/24/2022] [Indexed: 01/25/2023] Open
Abstract
The rate of myopia in children is increasing rapidly under online learning scenarios, and incorrect reading and writing posture is one important reason. Through literature analysis, three screen view parameters (viewing angle, viewing height, and viewing distance) are selected as significant influencing factors, and blink rate is used as a sign of visual fatigue, in order to study the factors behind myopia in children and their correlation. Children's visual fatigue is evaluated subjectively and recorded using an eye tracker as the three factors are varied in an online learning scenario simulation experiment. An optimal regression model is constructed that relates the three variables to visual fatigue levels. The aim of this study is to confirm the quantitative relationship between the screen view parameters and visual fatigue, and to design a child vision protection product on this basis. The test results show a linear positive correlation between visual fatigue and the viewing angle, viewing height, and viewing distance. A vision protection device was designed based on this model and verified through functional prototype testing. This study quantified the relationship between screen view parameters and children's visual fatigue, providing a theoretical basis for the design of a children's vision protection device.
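The regression model itself is not given in the abstract; as an illustration, a linear relationship between visual fatigue and the three screen view parameters can be fitted by ordinary least squares. The variable ranges and coefficients below are hypothetical, not the study's data:

```python
import numpy as np

def fit_fatigue_model(angle, height, distance, fatigue):
    """Fit fatigue ~ b0 + b1*angle + b2*height + b3*distance by OLS,
    a sketch of the kind of regression the study describes."""
    X = np.column_stack([np.ones_like(angle), angle, height, distance])
    coef, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
    return coef

# Synthetic demonstration data (hypothetical, not the study's measurements)
rng = np.random.default_rng(0)
angle = rng.uniform(0, 40, 50)      # viewing angle (degrees)
height = rng.uniform(10, 40, 50)    # viewing height (cm)
distance = rng.uniform(20, 60, 50)  # viewing distance (cm)
fatigue = (1.0 + 0.05 * angle + 0.02 * height + 0.01 * distance
           + rng.normal(0, 0.1, 50))

coef = fit_fatigue_model(angle, height, distance, fatigue)
```

With enough samples, the recovered slopes approximate the generating coefficients, mirroring the positive correlations the study reports.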
Collapse
|
31
|
Yue K, Guo M, Liu Y, Hu H, Lu K, Chen S, Wang D. Investigate the Neuro Mechanisms of Stereoscopic Visual Fatigue. IEEE J Biomed Health Inform 2022; 26:2963-2973. [PMID: 35316199 DOI: 10.1109/jbhi.2022.3161083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Stereoscopic visual fatigue (SVF) due to prolonged immersion in a virtual environment can lead to a negative user experience, hindering the development of the virtual reality (VR) industry. Previous studies have focused on evaluation indicators associated with SVF, while few have been conducted to reveal the underlying neural mechanism, especially in VR applications. In this paper, a modified Go/NoGo paradigm was adopted to induce SVF in a VR environment, with Go trials maintaining participants' attention to the experimental viewing tasks and NoGo trials used to investigate the neural effects of SVF. Random dot stereograms (RDSs) with 11 disparities and 2 shapes (arrow and rectangle) were presented to evoke depth-related visual evoked potentials (DVEPs) during 64-channel EEG recordings. EEG datasets collected from 15 participants in NoGo trials were selected for individual processing and group analysis, in which the characteristics of the DVEP components at various fatigue degrees were compared using one-way repeated-measures ANOVA, and independent components were clustered to localize the cortical areas related to SVF. Point-by-point permutation statistics revealed that DVEP sample points from 230 ms to 280 ms in most brain areas changed significantly with SVF. More specifically, we found that the amplitude of component P2 changed significantly as SVF increased. Additionally, independent component analysis (ICA) identified that component P2, which originated from the posterior cingulate cortex and precuneus, was statistically associated with SVF. We believe that SVF is a conscious state involving changes in self-awareness or self-location awareness rather than a reduction in retinal image-processing performance. Moreover, we suggest that indicators representing higher conscious states may be better suited for SVF evaluation in VR environments.
Collapse
|
32
|
Fatigue-free visual perception of high-density super-multiview augmented reality images. Sci Rep 2022; 12:2959. [PMID: 35194078 PMCID: PMC8863894 DOI: 10.1038/s41598-022-06778-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 02/07/2022] [Indexed: 11/09/2022] Open
Abstract
It is well known that wearing virtual reality (VR) and augmented reality (AR) devices for long periods can cause visual fatigue and motion sickness due to vergence-accommodation conflict (VAC). VAC is considered the main obstacle to the development of advanced three-dimensional VR and AR technology. In this paper, we present a novel AR high-density super-multiview (HDSMV) display technique capable of eliminating VAC over a wide range. The designed binocular time-sequential AR HDSMV projection, which delivers 11 views to each eye pupil, is experimentally demonstrated, confirming that VAC is eliminated over a wide range of viewer focus distances. We believe the proposed time-sequential AR HDSMV method will pave the way for the development of VAC-free AR technology.
Collapse
|
33
|
Watanabe R, Koiso R, Nonaka K, Sakamoto Y, Kobayashi T. Fast calculation method based on hidden region continuity for computer-generated holograms with multiple cameras in a sports scene. APPLIED OPTICS 2022; 61:B64-B76. [PMID: 35201127 DOI: 10.1364/ao.441049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/27/2021] [Indexed: 06/14/2023]
Abstract
We propose, to the best of our knowledge, the world's first system capable of quickly calculating computer-generated holograms (CGHs) from a large-scale outdoor sports scene captured with multiple RGB cameras. In the system, we introduce a fast calculation method that exploits the hidden region continuity (HRC) that frequently appears in a point cloud of a 3D sports scene generated by free-viewpoint video technology. The experimental results show that the proposed HRC method is five to ten times faster than the point-based method, one of the common CGH calculation methods.
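For context, the point-based baseline that the HRC method is compared against superposes a spherical wave from every object point at every hologram pixel. A minimal sketch of that baseline follows; the wavelength, pixel pitch, and point cloud are assumed for illustration, and the HRC acceleration itself is not reproduced:

```python
import numpy as np

WAVELENGTH = 532e-9            # green illumination (assumed)
K = 2 * np.pi / WAVELENGTH     # wavenumber
PIXEL_PITCH = 8e-6             # hologram pixel pitch (assumed)

def point_based_cgh(points, n=128):
    """Baseline point-based CGH: superpose spherical waves from each
    object point (x, y, z, amplitude) onto an n x n hologram at z = 0."""
    coords = (np.arange(n) - n / 2) * PIXEL_PITCH
    u, v = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for x, y, z, amp in points:
        r = np.sqrt((u - x) ** 2 + (v - y) ** 2 + z ** 2)
        field += amp * np.exp(1j * K * r) / r   # spherical wavefront
    return field

# Two hypothetical object points, 5-6 cm in front of the hologram plane
pts = [(0.0, 0.0, 0.05, 1.0), (1e-4, -1e-4, 0.06, 0.5)]
H = point_based_cgh(pts)
```

The cost scales with (number of points) x (number of pixels), which is why methods that skip redundant computation, such as the HRC approach, yield large speedups.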
Collapse
|
34
|
Yoo D, Nam SW, Jo Y, Moon S, Lee CK, Lee B. Learning-based compensation of spatially varying aberrations for holographic display [Invited]. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2022; 39:A86-A92. [PMID: 35200966 DOI: 10.1364/josaa.444613] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 01/11/2022] [Indexed: 06/14/2023]
Abstract
We propose a hologram generation technique that compensates for spatially varying aberrations of holographic displays through machine learning. The image quality of a holographic display is severely degraded when optical aberrations arise from misalignment of optical elements or off-axis projection. One of the main advantages of holographic displays is that aberrations can be compensated for without additional optical elements. Conventionally, computer-generated holograms for compensation are synthesized through a point-wise integration method, which requires large computational loads. Here, we propose to replace the integration with a combination of fast-Fourier-transform-based convolutions and the forward computation of a deep neural network. The point-wise integration method took approximately 95.14 s to generate a hologram of 1024×1024 pixels, while the proposed method took about 0.13 s, a ×732 speedup. Furthermore, the aberration compensation achieved by the proposed method is verified through experiments.
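The FFT-based convolution building block can be illustrated with the standard angular spectrum method, which propagates a complex field with one forward FFT, a transfer-function multiply, and one inverse FFT. The learned compensation network from the paper is not reproduced here, and the wavelength and pitch are assumed values:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a square complex field by distance z with the angular
    spectrum method: FFT, transfer-function multiply, inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)            # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    H = np.exp(1j * kz * z)                    # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a point source by 1 cm (illustrative parameters)
field = np.zeros((256, 256), dtype=complex)
field[128, 128] = 1.0
out = angular_spectrum_propagate(field, 532e-9, 8e-6, 0.01)
```

Because the transfer function has unit modulus within the propagating band, the propagation conserves energy, and the whole step costs O(n² log n) rather than the O(n⁴) of point-wise integration.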
Collapse
|
35
|
Huang Y, Li M, Shen Y, Liu F, Fang Y, Xu H, Zhou X. Study of the Immediate Effects of Autostereoscopic 3D Visual Training on the Accommodative Functions of Myopes. Invest Ophthalmol Vis Sci 2022; 63:9. [PMID: 35113140 PMCID: PMC8819359 DOI: 10.1167/iovs.63.2.9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Purpose Stereoscopic viewing has an impact on ocular dynamics, but its effects on accommodative functions are not fully understood, especially for autostereoscopic viewing. This study aimed to investigate the changes in dynamic accommodative response, accommodative amplitude, and accommodative facility of myopes after autostereoscopic visual training. Methods We enrolled 46 adults (men = 22 and women = 24; age = 21.5 ± 2.5 [range = 18–25] years, spherical equivalent: −4.52 ± 1.89 [−8.88 to −1.75] diopters [D]) who visited the Eye & ENT Hospital of Fudan University. The study population was randomly divided into three-dimensional (3D) and two-dimensional (2D) viewing groups to watch an 11-minute training video displayed in 3D or 2D mode. Dynamic accommodative response, accommodative facility, and accommodative amplitude were measured before, during, and immediately after the training. Accommodative lag and the variability of accommodation were also analyzed. Visual fatigue was evaluated subjectively using a questionnaire. Results Accommodative lag decreased from 0.54 ± 0.29 D to 0.42 ± 0.32 D (P = 0.004), whereas accommodative facility increased from 10.83 ± 4.55 cycles per minute (cpm) to 13.15 ± 5.25 cpm (P < 0.001) in the 3D group. In the 2D group, there was no significant change in the accommodative lag (P = 0.163) or facility (P = 0.975), but a decrease in accommodative amplitude was observed (from 13.88 ± 3.17 D to 12.71 ± 2.23 D, P = 0.013). In the 3D group, the accommodative response changed with the simulated target distance. Visual fatigue was relatively mild in both groups. Conclusions The immediate impact of autostereoscopic training included a decrease in the accommodative lag and an increase in the accommodative facility. However, the long-term effects of autostereoscopic training require further exploration.
Collapse
Affiliation(s)
- Yangyi Huang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| | - Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| | - Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| | - Fang Liu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| | - Yong Fang
- Shanghai EVIS Technology Co. Ltd., Shanghai, China
| | - Haipeng Xu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| | - Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.,NHC Key Laboratory of Myopia (Fudan University), Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.,Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
| |
Collapse
|
36
|
Kim J, Oh H, Kim W, Choi S, Son W, Lee S. A Deep Motion Sickness Predictor Induced by Visual Stimuli in Virtual Reality. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:554-566. [PMID: 33079678 DOI: 10.1109/tnnls.2020.3028080] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In a virtual reality (VR) environment, where visual stimuli predominate over other stimuli, users experience cybersickness because self-motion cues disrupt the body's sense of balance. The VR experience is thus accompanied by an unavoidable sickness referred to as visually induced motion sickness (VIMS). In this article, our primary purpose is to simultaneously estimate the VIMS score from the content and calculate the temporally induced VIMS sensitivity. To this end, we propose a novel architecture composed of two consecutive networks: 1) a neurological representation and 2) a spatiotemporal representation. In the first stage, the network imitates and learns the neurological mechanism of motion sickness. In the second stage, the significant features of the spatial and temporal domains are extracted over the generated frames. After training, our model can calculate the VIMS sensitivity for each frame of VR content using a weakly supervised approach for unannotated temporal VIMS scores. Furthermore, we release a massive VR content database. In the experiments, the proposed framework demonstrates excellent performance for VIMS score prediction compared with existing methods, including feature-engineering and deep-learning-based approaches. Finally, we propose a way to visualize the cognitive response to visual stimuli and demonstrate that the induced sickness tends to be activated in a similar pattern to that observed in clinical studies.
Collapse
|
37
|
Yoo D, Lee S, Jo Y, Cho J, Choi S, Lee B. Volumetric Head-Mounted Display With Locally Adaptive Focal Blocks. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1415-1427. [PMID: 32746283 DOI: 10.1109/tvcg.2020.3011468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A commercial head-mounted display (HMD) for virtual reality (VR) presents three-dimensional imagery with a fixed focal distance. The VR HMD with a fixed focus can cause visual discomfort to an observer. In this article, we propose a novel design of a compact VR HMD supporting near-correct focus cues over a wide depth of field (from 18 cm to optical infinity). The proposed HMD consists of a low-resolution binary backlight, a liquid crystal display panel, and focus-tunable lenses. In the proposed system, the backlight locally illuminates the display panel that is floated by the focus-tunable lens at a specific distance. The illumination moment and the focus-tunable lens' focal power are synchronized to generate focal blocks at the desired distances. The distance of each focal block is determined by depth information of three-dimensional imagery to provide near-correct focus cues. We evaluate the focus cue fidelity of the proposed system considering the fill factor and resolution of the backlight. Finally, we verify the display performance with experimental results.
Collapse
|
38
|
Kim Y, Park S, Baek H, Min SW. Voxel characteristic estimation of integral imaging display system using self-interference incoherent digital holography. OPTICS EXPRESS 2022; 30:902-913. [PMID: 35209269 DOI: 10.1364/oe.444925] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 12/22/2021] [Indexed: 06/14/2023]
Abstract
Three-dimensional (3D) images reconstructed by an integral imaging display are captured as a complex hologram using self-interference incoherent digital holography (SIDH) and analyzed for their volumetric characteristics. The integrated images convey 3D perception not only through binocular disparity but also through a volumetric property: the formation of a volume picture element, or 'voxel', which is an important criterion distinguishing integral imaging from multiview 3D displays. Since SIDH can record a complex hologram under incoherent lighting conditions, a SIDH camera system is well suited to measuring voxels formed from incoherent light fields. In this paper, we propose a technique to estimate and analyze voxel characteristics of an integral imaging system, such as depth location and resolution. The captured holograms of the integrated images are numerically reconstructed at each depth for voxel analysis. The depth location of the integrated image is calculated using autofocus algorithms and focus metric values, which also reveal the depth resolution. This estimation method can be applied to the accurate, quantitative analysis of the volumetric characteristics of light field 3D displays.
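The autofocus step can be sketched as follows: numerically reconstruct the hologram at candidate depths and pick the depth that maximizes a focus metric. The variance-of-Laplacian metric below is one common choice; the abstract does not specify which metrics were used, and the image stack here is synthetic:

```python
import numpy as np

def laplacian_variance(img):
    """Variance-of-Laplacian focus metric (one common autofocus measure;
    sharper images have stronger high-frequency content)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def estimate_depth(stack, depths):
    """Return the depth whose reconstruction scores sharpest."""
    scores = [laplacian_variance(s) for s in stack]
    return depths[int(np.argmax(scores))]

# Synthetic stack: the slice at index 2 holds a sharp edge, the others
# are increasingly smoothed to mimic out-of-focus reconstructions.
depths = [0.10, 0.12, 0.14, 0.16]
base = np.zeros((64, 64)); base[:, 32:] = 1.0
stack = []
for i in range(4):
    img = base.copy()
    for _ in range(abs(i - 2) * 3):   # more smoothing away from focus
        img = (img + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 3
    stack.append(img)
best = estimate_depth(stack, depths)
```

How sharply the metric peaks around the best depth is exactly the kind of signal the paper uses to characterize depth resolution.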
Collapse
|
39
|
Chao YP, Chuang HH, Hsin LJ, Kang CJ, Fang TJ, Li HY, Huang CG, Kuo TBJ, Yang CCH, Shyu HY, Wang SL, Shyu LY, Lee LA. Using a 360° Virtual Reality or 2D Video to Learn History Taking and Physical Examination Skills for Undergraduate Medical Students: Pilot Randomized Controlled Trial. JMIR Serious Games 2021; 9:e13124. [PMID: 34813485 PMCID: PMC8663656 DOI: 10.2196/13124] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 04/02/2020] [Accepted: 09/10/2021] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Learning through a 360° virtual reality (VR) or 2D video represents an alternative way to learn a complex medical education task. However, there is currently no consensus on how best to assess the effects of different learning materials on cognitive load estimates, heart rate variability (HRV), outcomes, and experience in learning history taking and physical examination (H&P) skills. OBJECTIVE The aim of this study was to investigate how learning materials (ie, VR or 2D video) impact learning outcomes and experience through changes in cognitive load estimates and HRV for learning H&P skills. METHODS This pilot system-design study included 32 undergraduate medical students at an academic teaching hospital. The students were randomly assigned, with a 1:1 allocation, to a 360° VR video group or a 2D video group, matched by age, sex, and cognitive style. The contents of both videos were different with regard to visual angle and self-determination. Learning outcomes were evaluated using the Milestone reporting form. Subjective and objective cognitive loads were estimated using the Paas Cognitive Load Scale, the National Aeronautics and Space Administration Task Load Index, and secondary-task reaction time. Cardiac autonomic function was assessed using HRV measurements. Learning experience was assessed using the AttrakDiff2 questionnaire and qualitative feedback. Statistical significance was accepted at a two-sided P value of <.01. RESULTS All 32 participants received the intended intervention. The sample consisted of 20 (63%) males and 12 (38%) females, with a median age of 24 (IQR 23-25) years. The 360° VR video group seemed to have a higher Milestone level than the 2D video group (P=.04). The reaction time at the 10th minute in the 360° VR video group was significantly higher than that in the 2D video group (P<.001). 
Multiple logistic regression models of the overall cohort showed that the 360° VR video module was independently and positively associated with a reaction time at the 10th minute of ≥3.6 seconds (exp B=18.8, 95% CI 3.2-110.8; P=.001) and a Milestone level of ≥3 (exp B=15.0, 95% CI 2.3-99.6; P=.005). However, a reaction time at the 10th minute of ≥3.6 seconds was not related to a Milestone level of ≥3. A low-frequency to high-frequency ratio between the 5th and 10th minute of ≥1.43 seemed to be inversely associated with a hedonic stimulation score of ≥2.0 (exp B=0.14, 95% CI 0.03-0.68; P=.015) after adjusting for video module. The main qualitative feedback indicated that the 360° VR video module was fun but caused mild dizziness, whereas the 2D video module was easy to follow but tedious. CONCLUSIONS Our preliminary results showed that 360° VR video learning may be associated with a better Milestone level than 2D video learning, and that this did not seem to be related to cognitive load estimates or HRV indexes in the novice learners. Of note, an increase in sympathovagal balance may have been associated with a lower hedonic stimulation score, which may have met the learners' needs and prompted learning through the different video modules. TRIAL REGISTRATION ClinicalTrials.gov NCT03501641; https://clinicaltrials.gov/ct2/show/NCT03501641.
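The reported effect sizes (exp B with 95% CIs) follow from the fitted logistic-regression coefficients in the standard way. A small sketch of the conversion, working backwards from the reported exp B = 18.8 (95% CI 3.2-110.8) to recover the coefficient and its Wald standard error:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio (exp B) with a 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Working backwards from the reported exp B = 18.8 (95% CI 3.2-110.8):
# beta = ln(18.8) and se = ln(upper/lower) / (2 * 1.96)
beta = math.log(18.8)
se = math.log(110.8 / 3.2) / (2 * 1.96)
or_, lo, hi = odds_ratio_ci(beta, se)
```

A quick consistency check on any reported interval of this form: the geometric mean of the CI bounds should roughly equal the point estimate, as it does here.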
Collapse
Affiliation(s)
- Yi-Ping Chao
- Department of Computer Science and Information Engineering, Graduate Institute of Biomedical Engineering, Chang Gung University, Taoyuan, Taiwan.,Department of Neurology, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan
| | - Hai-Hua Chuang
- Department of Family Medicine, Chang Gung Memorial Hospital, Taipei Branch & Linkou Main Branch, Taoyuan, Taiwan.,College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Li-Jen Hsin
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Otorhinolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan
| | - Chung-Jan Kang
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Otorhinolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan
| | - Tuan-Jen Fang
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Otorhinolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan
| | - Hsueh-Yu Li
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Otorhinolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan
| | - Chung-Guei Huang
- Department of Laboratory Medicine, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan.,Department of Medical Biotechnology and Laboratory Science, Graduate Institute of Biomedical Sciences, Chang Gung University, Taoyuan, Taiwan
| | - Terry B J Kuo
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Cheryl C H Yang
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Hsin-Yih Shyu
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Educational Technology, Tamkang University, New Taipei, Taiwan
| | - Shu-Ling Wang
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Center of Teacher Education, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Liang-Yu Shyu
- Department of Biomedical Engineering, Chung Yuan Christian University, Taoyuan, Taiwan
| | - Li-Ang Lee
- College of Medicine, Chang Gung University, Taoyuan, Taiwan.,Department of Otorhinolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou Main Branch, Taoyuan, Taiwan.,Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
| |
Collapse
|
40
|
Peng Y, Choi S, Kim J, Wetzstein G. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. SCIENCE ADVANCES 2021; 7:eabg5040. [PMID: 34767449 PMCID: PMC8589315 DOI: 10.1126/sciadv.abg5040] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 09/24/2021] [Indexed: 05/28/2023]
Abstract
Computer-generated holography (CGH) holds transformative potential for a wide range of applications, including direct-view, virtual and augmented reality, and automotive display systems. While research on holographic displays has recently made impressive progress, image quality and eye safety of holographic displays are fundamentally limited by the speckle introduced by coherent light sources. Here, we develop an approach to CGH using partially coherent sources. For this purpose, we devise a wave propagation model for partially coherent light that is demonstrated in conjunction with a camera-in-the-loop calibration strategy. We evaluate this algorithm using light-emitting diodes (LEDs) and superluminescent LEDs (SLEDs) and demonstrate improved speckle characteristics of the resulting holograms compared with coherent lasers. SLEDs in particular are demonstrated to be promising light sources for holographic display applications, because of their potential to generate sharp and high-contrast two-dimensional (2D) and 3D images that are bright, eye safe, and almost free of speckle.
Collapse
Affiliation(s)
- Yifan Peng
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
| | - Suyeon Choi
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
| | - Jonghyun Kim
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
- NVIDIA, 2788 San Tomas Expressway, Santa Clara, CA 95051, USA
| | - Gordon Wetzstein
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
| |
Collapse
|
41
|
Kyung G, Park S. Curved Versus Flat Monitors: Interactive Effects of Display Curvature Radius and Display Size on Visual Search Performance and Visual Fatigue. HUMAN FACTORS 2021; 63:1182-1195. [PMID: 32374635 DOI: 10.1177/0018720820922717] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
OBJECTIVE The aim of this study is to examine the interactive effects of display curvature radius and display size on visual search accuracy, visual search speed, and visual fatigue. BACKGROUND Although the advantages of curved displays have been reported, little is known about the interactive effects of display curvature radius and size. METHOD Twenty-seven individuals performed visual search tasks at a viewing distance of 50 cm using eight configurations involving four display curvature radii (400R, 600R, 1200R, and flat) and two display sizes (33″ and 50″). To simulate curved screens, five flat display panels were horizontally arranged with their centers concentrically repositioned following each display curvature radius. RESULTS For accuracy, speed, and fatigue, 33″-600R and 50″-600R provided the best or comparable-to-best results, whereas 50″-flat provided the worst results. For accuracy and fatigue, 33″-flat was the second worst. The changes in the horizontal field of view and viewing angle due to display curvature, as well as the association between effective display curvature radii and the empirical horopter (loci of perceived equidistance), can explain these results. CONCLUSION The interactive effects of display curvature radius and size were evident for visual search performance and fatigue. The beneficial effects of curved displays were maintained across 33″ and 50″, whereas increasing flat display size from 33″ to 50″ was detrimental. APPLICATION For visual search tasks at a viewing distance of 50 cm, 33″-600R and 50″-600R displays are recommended, as opposed to 33″ and 50″ flat displays. Wide flat displays must be carefully considered for visual display terminal tasks.
Collapse
|
42
|
Xiong Q, Liu H, Chen Z, Tai Y, Shi J, Liu W. Detection of binocular chromatic fusion limit for opposite colors. OPTICS EXPRESS 2021; 29:35022-35037. [PMID: 34808947 DOI: 10.1364/oe.433319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 10/05/2021] [Indexed: 06/13/2023]
Abstract
When the input colors of the left and right eyes differ, binocular rivalry may occur. According to Hering's theory, opponent colors should have the strongest tendency to rival. However, binocular color fusion still occurs provided that each eye's opponent chromatic responses do not exceed a specific chromatic fusion limit (CFL). This paper measures the binocular chromatic fusion limit for opposite colors within a conventional 3D display color gamut. We conducted a psychophysical experiment to quantitatively measure the binocular chromatic fusion limit along four opposite-color directions in the CIELAB color space. Because color inconsistency between the eyes may affect binocular color fusion, the experiment was divided into two sessions by swapping the stimulation colors of the left and right eyes. Five subjects each completed 320 trials. By analyzing the results, we used ellipses to quantify the chromatic fusion limits for opposing colors. The average semi-major axis of the ellipses is 27.55 ΔE*ab, and the average semi-minor axis is 16.98 ΔE*ab. We observed that the chromatic fusion limit varies with the opposite-color direction: the CFL in the RedBlue-GreenYellow direction is greater than that in the Red-Green direction, which in turn is greater than that in the Yellow-Blue direction, and the CFL in the RedYellow-GreenBlue direction is the smallest. Furthermore, we suggest that the chromatic fusion limit is independent of the distribution of cells, and that the fusion ellipse boundaries do not change significantly after swapping the left- and right-eye colors.
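A fusion-limit ellipse of this kind can be used as a simple predicate: an inter-ocular chromatic offset fuses if it falls inside the ellipse. A minimal sketch using the average semi-axes reported in the abstract, assuming for illustration an ellipse centred on the shared color with a free orientation parameter (the paper fits separate, direction-specific ellipses, so the orientation here is an assumption):

```python
import math

# Average fusion-ellipse semi-axes reported in the abstract (ΔE*ab units).
SEMI_MAJOR = 27.55
SEMI_MINOR = 16.98

def within_fusion_ellipse(da: float, db: float, major_axis_deg: float = 0.0) -> bool:
    """Return True if an inter-ocular chromatic offset (da, db) in the CIELAB
    a*-b* plane falls inside the average fusion ellipse. `major_axis_deg`
    (orientation of the major axis) is a hypothetical free parameter."""
    t = math.radians(major_axis_deg)
    u = da * math.cos(t) + db * math.sin(t)    # component along the major axis
    v = -da * math.sin(t) + db * math.cos(t)   # component along the minor axis
    return (u / SEMI_MAJOR) ** 2 + (v / SEMI_MINOR) ** 2 <= 1.0
```

For example, a 20 ΔE*ab offset fuses along the major axis but not along the minor one, matching the abstract's finding that the limit depends on color direction.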
43
Lee S, Lee S, Kim D, Lee B. Distortion corrected tomographic near-eye displays using light field optimization. OPTICS EXPRESS 2021; 29:27573-27586. [PMID: 34615171 DOI: 10.1364/oe.435755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 07/28/2021] [Indexed: 06/13/2023]
Abstract
Several multifocal displays have been proposed to provide accurate accommodation cues. However, multifocal displays have an undesirable feature, especially pronounced in near-eye display configurations: the fields of view (FOVs) of the virtual planes change with depth. We demonstrate that this change in FOV causes image distortions, which reduce overall image quality, as well as depth perception errors due to the variation of image size with depth. Here, we introduce a light field optimization technique to compensate for magnification variations among the focal planes. Our approach alleviates image distortions, which are especially noticeable in content with large depth discontinuities, and reconstructs the image size at precise depths, while maintaining a specific tolerance length for the target eye relief. To verify the feasibility of the algorithm, we employ this optimization method in a tomographic near-eye display system to acquire the optimal image and backlight sequences for a volumetric scene. We confirm that the structural similarity index measure of reconstructed images against ground truth increases by 20% at an eye relief of 15 mm, and that the accommodation cue is appropriately stimulated at the target depth with our proposed method.
44
Abstract
Image quality assessment (IQA) models aim to establish a quantitative relationship between visual images and their quality as perceived by human observers. IQA modeling plays a special bridging role between vision science and engineering practice, both as a test-bed for vision theories and computational biovision models and as a powerful tool that could potentially have a profound impact on a broad range of image processing, computer vision, and computer graphics applications for design, optimization, and evaluation purposes. The growth of IQA research has accelerated over the past two decades. In this review, we present an overview of IQA methods from a Bayesian perspective, with the goals of unifying a wide spectrum of IQA approaches under a common framework and providing useful references to fundamental concepts accessible to vision scientists and image processing practitioners. We discuss the implications of the successes and limitations of modern IQA methods for biological vision and the prospect for vision science to inform the design of future artificial vision systems. (The detailed model taxonomy can be found at http://ivc.uwaterloo.ca/research/bayesianIQA/.) Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Zhengfang Duanmu
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
- Wentao Liu
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
- Zhongling Wang
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
- Zhou Wang
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
45
Park S, Mun S, Ha J, Kim L. Non-Contact Measurement of Motion Sickness Using Pupillary Rhythms from an Infrared Camera. SENSORS 2021; 21:s21144642. [PMID: 34300382 PMCID: PMC8309520 DOI: 10.3390/s21144642] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 06/25/2021] [Accepted: 07/01/2021] [Indexed: 12/19/2022]
Abstract
Both physiological and neurological mechanisms are reflected in pupillary rhythms via neural pathways between the brain and the pupillary nerves. This study aims to interpret motion sickness phenomena such as fatigue, anxiety, nausea and disorientation using these mechanisms, and to develop an advanced non-contact measurement method based on an infrared webcam. Twenty-four volunteers (12 females) experienced virtual reality content through both a two-dimensional display and a head-mounted display. An irregular pattern of pupillary rhythms, demonstrated by an increasing mean and standard deviation of pupil diameter and a decreasing pupillary rhythm coherence ratio, was revealed after the participants experienced motion sickness. Motion sickness was induced more strongly while viewing the head-mounted display than the two-dimensional virtual reality, and was strongly related to the visual information processing load. In addition, the proposed method was verified on a new experimental dataset of 23 participants (11 females), with classification accuracies of 89.6% (n = 48) and 80.4% (n = 46) on the training and test sets, respectively, using a support vector machine with a radial basis function kernel. The proposed method was shown to be capable of quantitatively measuring and monitoring motion sickness in real time in a simple, economical and contactless manner using an infrared camera.
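The features named in the abstract (mean and standard deviation of pupil diameter, plus a pupillary rhythm coherence ratio) can be sketched from a raw diameter trace. A minimal pure-Python sketch, assuming the coherence ratio is the share of spectral power in a slow band (the 0.1-0.5 Hz limits and the power-ratio definition are illustrative assumptions, not the paper's exact specification):

```python
import math
import statistics

def pupil_features(diam, fs=30.0, band=(0.1, 0.5)):
    """Mean, SD, and an assumed pupillary rhythm coherence (PRC) ratio for a
    pupil-diameter trace `diam` sampled at `fs` Hz."""
    n = len(diam)
    mean = statistics.fmean(diam)
    sd = statistics.pstdev(diam)
    detrended = [d - mean for d in diam]
    # Naive DFT power spectrum (fine for short illustrative signals).
    band_power = total_power = 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(detrended))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(detrended))
        p = re * re + im * im
        total_power += p
        if band[0] <= k * fs / n <= band[1]:
            band_power += p
    prc = band_power / total_power if total_power else 0.0
    return {"mean": mean, "sd": sd, "prc": prc}
```

Under the abstract's finding, motion sickness would show up as a rising mean and SD together with a falling coherence ratio.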
Affiliation(s)
- Sangin Park
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Sungchul Mun
- Department of Industrial Engineering, Jeonju University, Jeonju 55069, Korea
- Jihyeon Ha
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Department of Biomedical Engineering, Hanyang University, Seoul 04673, Korea
- Laehyun Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Correspondence: Tel.: +82-2-958-6726
46
Labhishetty V, Cholewiak SA, Roorda A, Banks MS. Lags and leads of accommodation in humans: Fact or fiction? J Vis 2021; 21:21. [PMID: 33764384 PMCID: PMC7995353 DOI: 10.1167/jov.21.3.21] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
The focusing response of the human eye — accommodation — exhibits errors known as lags and leads. Lags occur when the stimulus is near and the eye appears to focus farther than the stimulus. Leads occur with far stimuli where the eye appears to focus nearer than the stimulus. We used objective and subjective measures simultaneously to determine where the eye is best focused. The objective measures were made with a wavefront sensor and an autorefractor, both of which analyze light reflected from the retina. These measures exhibited typical accommodative errors, mostly lags. The subjective measure was visual acuity, which of course depends not only on the eye's optics but also on photoreception and neural processing of the retinal image. The subjective measure revealed much smaller errors. Acuity was maximized at or very close to the distance of the accommodative stimulus. Thus, accommodation is accurate in terms of maximizing visual performance.
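Lags and leads of the kind described here are conventionally quantified in diopters, the reciprocal of distance in meters, as the difference between accommodative demand and the eye's measured focus. A minimal sketch of that convention (the function names are illustrative, not the paper's):

```python
def diopters(distance_m: float) -> float:
    """Accommodative demand of a target at the given distance (D = 1/m)."""
    return 1.0 / distance_m

def accommodative_error(stimulus_m: float, focus_m: float) -> float:
    """Positive values are lags (eye focused farther than the stimulus);
    negative values are leads (eye focused nearer), in diopters."""
    return diopters(stimulus_m) - diopters(focus_m)
```

For example, a near target at 0.25 m (4 D demand) with the eye apparently focused at about 0.33 m (3 D) corresponds to a 1 D lag, the kind of error the objective measures in this study report but the acuity measure largely does not.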
Affiliation(s)
- Vivek Labhishetty
- Optometry & Vision Science, University of California, Berkeley, CA, USA, https://www.researchgate.net/profile/Vivek_Labhishetty
- Steven A Cholewiak
- Optometry & Vision Science, University of California, Berkeley, CA, USA, http://steven.cholewiak.com
- Austin Roorda
- Optometry & Vision Science, University of California, Berkeley, CA, USA, http://roorda.vision.berkeley.edu
- Martin S Banks
- Optometry & Vision Science, University of California, Berkeley, CA, USA, http://bankslab.berkeley.edu
47
Pöhlmann KMT, Föcker J, Dickinson P, Parke A, O'Hare L. The Effect of Motion Direction and Eccentricity on Vection, VR Sickness and Head Movements in Virtual Reality. Multisens Res 2021; 34:1-40. [PMID: 33882451 DOI: 10.1163/22134808-bja10049] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 04/05/2021] [Indexed: 11/19/2022]
Abstract
Virtual Reality (VR) experienced through head-mounted displays often leads to vection, discomfort and sway in the user. This study investigated the effect of motion direction and eccentricity on these three phenomena using optic flow patterns displayed on the Valve Index. Visual motion stimuli were presented in the centre, periphery or far periphery and moved either in depth (back and forth) or laterally (left and right). Overall, vection was stronger for motion in depth than for lateral motion. Additionally, eccentricity primarily affected stimuli moving in depth, with stronger vection for more peripherally presented motion patterns than for more central ones. Motion direction affected the various aspects of VR sickness differently and modulated the effect of eccentricity on VR sickness. For stimuli moving in depth, far peripheral presentation caused more discomfort, whereas for lateral motion the central stimuli caused more discomfort. Stimuli moving in depth led to more head movements in the anterior-posterior direction when the entire visual field was stimulated. Observers made more head movements in the anterior-posterior direction than in the medio-lateral direction throughout the entire experiment, independent of the motion direction or eccentricity of the presented stimulus. Head movements were elicited in the same plane as the moving stimulus only for stimuli moving in depth that covered the entire visual field. Correlations showed positive relationships between dizziness and vection duration, and between general discomfort and sway. Identifying where in the visual field presented motion causes the least VR sickness without losing vection and presence can guide the development of Virtual Reality games, training and treatment programmes.
Affiliation(s)
- Julia Föcker
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Patrick Dickinson
- School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Adrian Parke
- School of Media, Culture and Society, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, UK
- Louise O'Hare
- Division of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham, NG1 4FQ, UK
48
Lin C, Yeh F, Wu B, Yang C. The effects of reflected glare and visual field lighting on computer vision syndrome. Clin Exp Optom 2021; 102:513-520. [DOI: 10.1111/cxo.12878] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 11/29/2018] [Accepted: 01/15/2019] [Indexed: 11/30/2022] Open
Affiliation(s)
- Chao‐wen Lin
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Feng‐ming Yeh
- Department of Optometry, Yuanpei University of Medical Technology, Hsinchu, Taiwan
- Bo‐wen Wu
- Department of Optometry, Yuanpei University of Medical Technology, Hsinchu, Taiwan
- Chang‐hao Yang
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
49
Yue K, Wang D, Chiu SC, Liu Y. Investigate the 3D Visual Fatigue Using Modified Depth-Related Visual Evoked Potential Paradigm. IEEE Trans Neural Syst Rehabil Eng 2021; 28:2794-2804. [PMID: 33406041 DOI: 10.1109/tnsre.2021.3049566] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Prolonged viewing of 3D content may result in severe fatigue symptoms, giving a negative user experience and thus hindering the development of the 3D industry. For 3D visual fatigue evaluation, previous studies focused on exploring changes in frequency-domain EEG features across fatigue degrees; the corresponding time-domain features, however, have scarcely been investigated. In this study, a modified paradigm with a random order of disparities is adopted to evoke depth-related visual evoked potentials (DVEPs). The characteristics of the DVEP components at various fatigue degrees are then compared using one-way repeated-measures ANOVA. Point-by-point permutation statistics revealed that sample points from 100 ms to 170 ms, including P1 and N1, at sensors Pz and P4 changed significantly with visual fatigue. More specifically, we find that the amplitudes of P1 and N1 change significantly as visual fatigue increases. Additionally, independent component analysis identifies P1 and N1 components originating from the posterior cingulate cortex that are statistically associated with 3D visual fatigue. Our results indicate a significant correlation between 3D visual fatigue and the P1 and N1 amplitudes of DVEPs over right parietal areas. We believe the characteristics (e.g., amplitude and latency) of the identified components may serve as indicators for 3D visual fatigue evaluation. Furthermore, we argue that 3D visual fatigue may be associated with a decrease in attentional activity and in the capacity to process disparity.
50
Visual adaptation to natural scene statistics and visual preference. Vision Res 2021; 180:87-95. [PMID: 33401176 DOI: 10.1016/j.visres.2020.11.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Revised: 11/15/2020] [Accepted: 11/18/2020] [Indexed: 11/22/2022]
Abstract
The amplitude of Fourier spectra for natural scenes falls with spatial frequency (f) and is described by the equation 1/f^α, where the exponent α corresponds to the slope of the spectral drop-off. For natural scenes, α takes on intermediate values of ~1.25, reflecting their scale invariance. It is also well established that, on average, images with natural scene statistics are preferred to those that deviate from these properties. Although this average pattern of preference for images with intermediate values of α is robust, there are also marked individual differences in preference for different levels of α. This study investigated the effects of adaptation on average and individual visual preferences for synthetic filtered noise images varying in α. Participant preferences (N = 58) were measured via a 2AFC task prior to adaptation (baseline) and post-adaptation. There were three adaptation conditions (α = 0.25, 1.25, 2.25) and five test levels of α (0.25, 0.75, 1.25, 1.75, 2.25). On average, adaptation elevated preferences for test images with α matching the adaptor, especially in the α = 0.25 and 2.25 adaptor conditions. We also observed marked individual differences in preference for different levels of α. These preference profiles remained stable throughout the experiment and affected the levels of adaptation observed in the different adaptation conditions.
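Filtered noise images with a 1/f^α amplitude spectrum of the kind used here are commonly synthesized by shaping a random spectrum in the Fourier domain. A minimal numpy sketch of one standard recipe, offered as an assumption since the abstract does not specify the paper's exact generation pipeline:

```python
import numpy as np

def filtered_noise(size: int, alpha: float, seed: int = 0) -> np.ndarray:
    """Square noise image whose amplitude spectrum falls as 1/f**alpha
    (alpha ~ 1.25 mimics natural-scene statistics; smaller alpha looks
    'whiter', larger alpha looks blurrier)."""
    rng = np.random.default_rng(seed)
    # Random phases come from the FFT of spatial white noise.
    spectrum = np.fft.fft2(rng.standard_normal((size, size)))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    # Whiten the amplitudes, then impose the target 1/f**alpha fall-off.
    shaped = spectrum / np.abs(spectrum) * f ** -alpha
    shaped[0, 0] = 0.0                 # zero-mean image
    img = np.fft.ifft2(shaped).real
    return (img - img.min()) / (img.max() - img.min())  # rescale to [0, 1]
```

Sweeping alpha over the study's test levels (0.25 to 2.25) reproduces the stimulus continuum from visibly "white" textures to strongly blurred ones.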