1. Ye X, Fan F, Wen S. Cascaded transflective liquid crystal planar lenses enable multi-plane augmented reality. Opt Lett 2023;48:5919-5922. PMID: 37966752; DOI: 10.1364/ol.503343. Received: 08/14/2023; Accepted: 10/22/2023.
Abstract
In this Letter, we report and experimentally demonstrate multi-plane augmented reality (AR) by combining a reflective polarization volume lens (PVL) with an electrically controlled transmissive Pancharatnam-Berry (PB) liquid crystal (LC) lens. The strategy rests on electrically switching the optical power of the lens stack, which significantly alleviates the vergence-accommodation conflict (VAC) that challenges current near-eye displays (NEDs). As a proof of concept, a dual-plane optical see-through (OST) display in a birdbath architecture was implemented experimentally by switching the power of the lens. The proposed method points toward an NED that is, to the best of our knowledge, novel: compact, light, and fatigue-free.
2. Wang Z, Su Y, Pang Y, Feng Q, Lv G. A Depth-Enhanced Holographic Super Multi-View Display Based on Depth Segmentation. Micromachines 2023;14:1720. PMID: 37763881; PMCID: PMC10535776; DOI: 10.3390/mi14091720. Received: 07/07/2023; Revised: 08/21/2023; Accepted: 08/29/2023.
Abstract
A super multi-view (SMV) near-eye display (NED) effectively provides depth cues for three-dimensional (3D) display by projecting multiple viewpoint or parallax images onto the retina simultaneously. Previous SMV NEDs have suffered from a limited depth of field (DOF) due to a fixed image plane. In this paper, a holographic SMV Maxwellian display based on depth segmentation is proposed to enhance the DOF. The approach captures a set of parallax images together with their corresponding depth maps. According to the depth maps, the parallax images are segmented into N sub-parallax images covering different depth ranges. These sub-parallax images are then projected onto N image-recording planes (IRPs) at the corresponding depths for hologram computation. The wavefront at each IRP is calculated by multiplying the sub-parallax images with the corresponding spherical wave phases. The wavefronts are then propagated to the hologram plane and summed to form a DOF-enhanced hologram. Simulation and experimental results validate the effectiveness of the proposed method in extending the DOF of holographic SMV displays while accurately preserving occlusion.
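The hologram computation summarized in this abstract (multiply each sub-parallax layer by a spherical wave phase, propagate every wavefront to the hologram plane, and sum) can be sketched numerically. This is a minimal paraxial sketch, not the authors' code: the function names, the 8 µm pixel pitch, and the band-limited angular-spectrum propagator are all assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field over distance z with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def depth_segmented_hologram(sub_images, depths, wavelength=532e-9, pitch=8e-6):
    """Multiply each sub-parallax image by a spherical wave phase for its depth,
    propagate each wavefront to the hologram plane (z = 0), and sum."""
    ny, nx = sub_images[0].shape
    y, x = np.meshgrid(np.arange(ny) - ny / 2, np.arange(nx) - nx / 2, indexing="ij")
    r2 = (x * pitch) ** 2 + (y * pitch) ** 2
    hologram = np.zeros((ny, nx), dtype=complex)
    for img, z in zip(sub_images, depths):
        spherical = np.exp(-1j * np.pi * r2 / (wavelength * z))  # paraxial converging phase
        hologram += angular_spectrum(img * spherical, wavelength, pitch, -z)
    return hologram
```

Propagation over zero distance is the identity, which gives a quick sanity check of the propagator before summing layers.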
Affiliation(s)
- Zi Wang
- National Engineering Laboratory of Special Display Technology, National Key Laboratory of Advanced Display Technology, Academy of Photoelectric Technology, Hefei University of Technology, Hefei 230009, China
- Yumeng Su
- National Engineering Laboratory of Special Display Technology, National Key Laboratory of Advanced Display Technology, Academy of Photoelectric Technology, Hefei University of Technology, Hefei 230009, China
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
- Yujian Pang
- National Engineering Laboratory of Special Display Technology, National Key Laboratory of Advanced Display Technology, Academy of Photoelectric Technology, Hefei University of Technology, Hefei 230009, China
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
- Qibin Feng
- National Engineering Laboratory of Special Display Technology, National Key Laboratory of Advanced Display Technology, Academy of Photoelectric Technology, Hefei University of Technology, Hefei 230009, China
- Guoqiang Lv
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
3. Aizenman AM, Koulieris GA, Gibaldi A, Sehgal V, Levi DM, Banks MS. The Statistics of Eye Movements and Binocular Disparities during VR Gaming: Implications for Headset Design. ACM Trans Graph 2023;42:7. PMID: 37122317; PMCID: PMC10139447; DOI: 10.1145/3549529.
Abstract
The human visual system evolved in environments with statistical regularities. Binocular vision is adapted to these regularities such that depth perception and eye movements are more precise, faster, and more comfortable in environments consistent with them. We measured the statistics of eye movements and binocular disparities in virtual-reality (VR) gaming environments and found that they differ markedly from those in the natural environment. Fixation distance and direction are more restricted in VR, and fixation distance is farther. The pattern of disparity across the visual field is less regular in VR and does not conform to a prominent property of naturally occurring disparities. From this we predict that double vision is more likely in VR than in the natural environment. We also determined the optimal screen distance to minimize discomfort due to the vergence-accommodation conflict, and the optimal nasal-temporal positioning of head-mounted display (HMD) screens to maximize the binocular field of view. Finally, in a user study we investigated how VR content affects comfort and performance. Content that is more consistent with the statistics of the natural world yields less discomfort than content that is not, and also yields slightly better performance.
4. Chao CH, Liu CL, Chen HH. Time-Division Multiplexing Light Field Display with Learned Coded Aperture. IEEE Trans Image Process 2022;PP:350-363. PMID: 37015682; DOI: 10.1109/tip.2022.3203210.
Abstract
Conventional stereoscopic displays suffer from the vergence-accommodation conflict and cause visual fatigue. Integral-imaging-based displays resolve the problem by directly projecting the sub-aperture views of a light field into the eyes using a microlens array or a similar structure. However, such displays have an inherent trade-off between angular and spatial resolution. In this paper, we propose a novel coded time-division multiplexing technique that projects encoded sub-aperture views to the eyes of a viewer with correct cues for the vergence-accommodation reflex. Given sparse light field sub-aperture views, our pipeline can provide the perception of high-resolution refocused images with minimal aliasing by jointly optimizing the sub-aperture views for display and the coded aperture pattern. This is achieved via deep learning in an end-to-end fashion by simulating light transport and image formation with Fourier optics. To our knowledge, this work is among the first to optimize the light field display pipeline with deep learning. We verify our idea with objective image quality metrics (PSNR, SSIM, and LPIPS) and perform an extensive study of the various customizable design variables in our display pipeline. Experimental results show that light fields displayed with the proposed technique indeed have higher quality than those of baseline display designs.
5. Chen S, Lin J, He Z, Li Y, Su Y, Wu ST. Planar Alvarez tunable lens based on polymetric liquid crystal Pancharatnam-Berry optical elements. Opt Express 2022;30:34655-34664. PMID: 36242473; DOI: 10.1364/oe.468647. Received: 06/23/2022; Accepted: 08/02/2022.
Abstract
Virtual reality (VR) and augmented reality (AR) have widespread applications. The vergence-accommodation conflict (VAC), which causes 3D visual fatigue, has become an urgent challenge for VR and AR displays. Alvarez lenses, whose focal length can be tuned precisely and continuously through the lateral shift of their two sub-elements, are a promising candidate for the key electro-optical component of vari-focal AR display systems that address the VAC. In this paper, we propose and fabricate a compact Alvarez lens based on planar polymeric liquid crystal Pancharatnam-Berry optical elements. It provides a continuous diopter change from -1.4 D to 1.4 D at a wavelength of 532 nm as the lateral shift ranges from -5 mm to 5 mm. We also demonstrate an AR display system using the proposed Alvarez lens, in which virtual images are augmented onto the real world at different depths.
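Why a lateral shift of the two complementary cubic sub-elements yields a tunable lens can be checked in a few lines. A sketch under the standard thin-element model, with a hypothetical cubic coefficient A; the actual PB element phase profiles in the paper may differ.

```python
import math

def residual_phase(x, y, A, d):
    """Combined phase of two complementary cubic Alvarez elements, +phi and -phi
    with phi(x, y) = A*(x**3/3 + x*y**2), after shifting them by +d and -d in x.
    The result is purely quadratic: 2*A*d*(x**2 + y**2) + 2*A*d**3/3."""
    phi = lambda u, v: A * (u ** 3 / 3.0 + u * v ** 2)
    return phi(x + d, y) - phi(x - d, y)

def optical_power(A, d, wavelength):
    """Matching the quadratic residual against a thin-lens phase -pi*r^2/(lambda*f)
    gives an optical power that is linear in the lateral shift d."""
    return -2.0 * A * d * wavelength / math.pi
```

The linear power-versus-shift relation is what makes the tuning "precise and continuous": doubling the shift doubles the diopter change.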
6. Liu L, Ye Q, Pang Z, Huang H, Lai C, Teng D. Polarization enlargement of FOV in Super Multi-view display based on near-eye timing-apertures. Opt Express 2022;30:1841-1859. PMID: 35209338; DOI: 10.1364/oe.446819. Received: 10/25/2021; Accepted: 12/21/2021.
Abstract
With strip-type timing-apertures attached to each eye of a viewer, more than one perspective view can be guided to either eye sequentially through different timing-apertures, implementing a VAC-free (vergence-accommodation conflict-free) SMV (super multi-view) 3D (three-dimensional) display. To overcome the FOV (field of view) limitation caused by the small size of the timing-apertures along their arrangement direction, novel polarization architectures are designed for the timing-apertures in this paper. Correspondingly, the display screen of the proposed SMV display system is divided into M > 1 sub-screens along the arrangement direction of the timing-apertures, with adjacent sub-screens emitting light of mutually orthogonal polarizations. At each time-point of a time period, a group of M timing-apertures, corresponding one-to-one to the M sub-screens along the arrangement direction, is turned on to create an M-fold FOV; each polarized timing-aperture of the group passes light from its corresponding sub-screen and blocks light from the sub-screen(s) adjacent to it. At 2T > 1 time-points of each time period, 2T groups of timing-apertures are turned on sequentially to present more than one two-dimensional image of the displayed scene to each eye, implementing SMV display through persistence of vision. Here M is the FOV magnification factor and T is the number of two-dimensional images per eye. As a proof, a 3-fold FOV of 41° is implemented experimentally with a currently available timing-aperture array of M = 3, accompanied by an effective noise-free region (ENFR) of 8.34 mm. Furthermore, the promise of fundamentally freeing the FOV from the timing-aperture constraint with larger M is described, and the out-of-screen blur along the strip direction of the timing-apertures and the problem of the limited ENFR are discussed.
7. Kimura S, Iwai D, Punpongsanon P, Sato K. Multifocal Stereoscopic Projection Mapping. IEEE Trans Vis Comput Graph 2021;27:4256-4266. PMID: 34449374; DOI: 10.1109/tvcg.2021.3106486.
Abstract
Stereoscopic projection mapping (PM) allows a user to see a three-dimensional (3D) computer-generated (CG) object floating over physical surfaces of arbitrary shapes around us using projected imagery. However, the current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues, which causes a vergence-accommodation conflict (VAC). Therefore, we propose a multifocal approach to mitigate VAC in stereoscopic PM. Our primary technical contribution is to attach electrically focus-tunable lenses (ETLs) to active shutter glasses to control both vergence and accommodation. Specifically, we apply fast and periodical focal sweeps to the ETLs, which causes the "virtual image" (as an optical term) of a scene observed through the ETLs to move back and forth during each sweep period. A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance. This provides an observer with the correct focus cues required. In this study, we solve three technical issues that are unique to stereoscopic PM: (1) The 3D CG object is displayed on non-planar and even moving surfaces; (2) the physical surfaces need to be shown without the focus modulation; (3) the shutter glasses additionally need to be synchronized with the ETLs and the projector. We also develop a novel compensation technique to deal with the "lens breathing" artifact that varies the retinal size of the virtual image through focal length modulation. Further, using a proof-of-concept prototype, we demonstrate that our technique can present the virtual image of a target 3D CG object at the correct depth. Finally, we validate the advantage provided by our technique by comparing it with conventional stereoscopic PM using a user study on a depth-matching task.
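The synchronization described in this abstract (flash the projector only when the focal sweep places the virtual image at the desired depth) can be illustrated with a simplified thin-lens model. The `trigger_time` function, the sinusoidal-sweep assumption, and the ETL-at-the-eye approximation are hypothetical, not the authors' implementation.

```python
import math

def trigger_time(D_surface, D_target, P_mid, P_amp, sweep_hz):
    """First instant within a sweep period at which the ETL power places the
    virtual image of the projection surface at the target dioptric depth.
    Thin-lens model with the ETL right at the eye: D_image(t) = D_surface - P(t),
    where P(t) = P_mid + P_amp * sin(2*pi*sweep_hz*t), all in diopters."""
    s = (D_surface - D_target - P_mid) / P_amp
    if abs(s) > 1.0:
        raise ValueError("target depth lies outside the focal sweep range")
    # wrap into [0, T) so the returned time falls inside one sweep period
    return (math.asin(s) / (2.0 * math.pi * sweep_hz)) % (1.0 / sweep_hz)
```

For example, with a surface at 2 D, a ±2 D sweep at 60 Hz, and a target depth of 0.5 D, the projector would fire roughly 2.3 ms into the period.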
8. Li Y, Yang Q, Xiong J, Li K, Wu ST. Dual-depth augmented reality display with reflective polarization-dependent lenses. Opt Express 2021;29:31478-31487. PMID: 34615239; DOI: 10.1364/oe.435914. Received: 07/06/2021; Accepted: 09/01/2021.
Abstract
The vergence-accommodation conflict (VAC) is a common and vexing issue in near-eye displays that use stereoscopy to provide the perception of three-dimensional (3D) depth. By generating multiple image planes, the depth cues can be corrected to support a comfortable 3D viewing experience. In this study, we propose a multi-plane optical see-through augmented reality (AR) display with customized reflective polarization-dependent lenses (PDLs). Leveraging the different optical powers of two PDLs, a proof-of-concept dual-plane AR device is realized. The proposed design paves the way to a compact, lightweight, and fatigue-free AR display.
9. Lou Y, Hu J, Chen A, Wu F. Augmented reality display system using modulated moiré imaging technique. Appl Opt 2021;60:A306-A312. PMID: 33690382; DOI: 10.1364/ao.404278. Received: 08/03/2020; Accepted: 12/22/2020.
Abstract
To enhance the depth rendering ability of augmented reality (AR) display systems, a modulated moiré imaging technique is used to render true three-dimensional (3D) images. 3D images with continuous depth information and a large depth of field are rendered and superimposed on the real scene. The proposed AR system consists of a modulated moiré imaging subsystem and an optical combiner. The moiré imaging subsystem employs modulated point light sources, a display device, and a microlens array to generate 3D images. A defocused equal-period moiré imaging structure is used, which allows the point light sources to modulate the depth position of the 3D images continuously. The principles of the imaging system are derived analytically. A custom-designed transparent off-axis spherical reflective lens serves as the optical combiner that projects the 3D images into the real world. An experimental AR system providing continuous 3D images with depth ranging from 0.5 to 2.5 m was built to verify the feasibility of the proposed technique.
10. Schneider M, Kunz C, Pal'a A, Wirtz CR, Mathis-Ullrich F, Hlaváč M. Augmented reality-assisted ventriculostomy. Neurosurg Focus 2021;50:E16. PMID: 33386016; DOI: 10.3171/2020.10.focus20779. Received: 08/31/2020; Accepted: 10/22/2020.
Abstract
OBJECTIVE: Placement of a ventricular drain is one of the most common neurosurgical procedures, but a higher rate of successful placements with this freehand procedure is desirable. The authors' objective was to develop a compact navigational augmented reality (AR)-based tool, requiring no rigid patient head fixation, to support the surgeon during the operation.

METHODS: Segmentation and tracking algorithms were developed. A commercially available Microsoft HoloLens AR headset in conjunction with Vuforia marker-based tracking was used to provide guidance for ventriculostomy in a custom-made 3D-printed head model. Eleven surgeons placed a total of 110 external ventricular drains under holographic guidance. The HoloLens was the sole active component; no rigid head fixation was necessary. CT was used to assess puncture results and quantify the success rate and precision of the proposed setup.

RESULTS: The system worked reliably and performed well in the proposed setup, with an overall ventriculostomy success rate of 68.2%. The offset from the reference trajectory displayed in the hologram was 5.2 ± 2.6 mm (mean ± standard deviation). A subgroup conducted a second series of punctures in which results and precision improved significantly. For most participants this was their first encounter with AR headset technology, and the overall feedback was positive.

CONCLUSIONS: To the authors' knowledge, this is the first report of marker-based, AR-guided ventriculostomy, and the results of this first application are encouraging. The authors expect good acceptance of this compact navigation device in a clinical implementation and assume a steep learning curve in the application of the technique. To achieve this translation, further development of the marker system and implementation of the new hardware generation are planned. Further testing to address visuospatial issues is needed prior to application in humans.
Affiliation(s)
- Max Schneider
- Department of Neurosurgery, University of Ulm, Günzburg, Germany
- Christian Kunz
- Health Robotics and Automation Lab, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Andrej Pal'a
- Department of Neurosurgery, University of Ulm, Günzburg, Germany
- Franziska Mathis-Ullrich
- Health Robotics and Automation Lab, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Michal Hlaváč
- Department of Neurosurgery, University of Ulm, Günzburg, Germany
11. Xu M, Huang H, Hua H. Analytical model for the perceived retinal image formation of 3D display systems. Opt Express 2020;28:38029-38048. PMID: 33379624; DOI: 10.1364/oe.408585. Received: 08/27/2020; Accepted: 11/16/2020.
Abstract
The optical design process for conventional stereoscope-type head-mounted displays for virtual and augmented reality applications typically neglects the inherent aberrations of the eye optics and the refractive errors of the viewer, missing the opportunity to produce personal devices with optimal visual experiences. A few research efforts have simulated the retinal image formation process for emerging 3D display systems, such as light field displays, that require modeling the eye optics to complete the image formation process. However, the existing works are generally specific to one type of display method and therefore cannot provide a generalized framework for comparing different methods; they also often require at least two different software platforms, which makes it challenging to handle massive data and to compensate wavefront aberrations induced by the display engine or by eye refractive errors. To overcome these limits, we present a generalized analytical model for accurately simulating visual responses such as the retinal PSF, MTF, and image formation of different types of 2D and 3D display systems. The model accounts for the residual aberrations of schematic eye models that match statistical clinical measurements, accommodative changes of the eye as required, the effects of viewer-specific refractive errors, and the effects of wavefront aberrations inherited from the display engine. We further describe a numerical implementation of this analytical model that simulates the perceived retinal image for different types of HMD systems on a single computational platform. Finally, with a test setup, we numerically demonstrate the application of the model to simulating the perceived retinal image and accommodative response, and to investigating the impact of eye refractive errors on the perceived retinal image, for a multifocal-plane display, an integral-imaging-based light field display, a computational multilayer light field display, and, for comparison, a stereoscope and natural viewing.
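As a toy instance of such retinal-response modeling, the PSF and MTF of a schematic eye with pure defocus can be computed from the pupil function with Fourier optics. This is a minimal sketch under assumed parameters (550 nm, 4 mm pupil, defocus as the only aberration), not the paper's generalized model.

```python
import numpy as np

def retinal_psf_mtf(pupil_mm=4.0, defocus_D=0.5, wavelength=550e-9, n=256):
    """PSF and MTF of a schematic eye with pure defocus, via the pupil function.
    Defocus wavefront error at radius r (meters): W(r) = defocus_D * r**2 / 2,
    with defocus_D in diopters."""
    r_max = pupil_mm * 1e-3 / 2.0
    c = np.linspace(-2 * r_max, 2 * r_max, n)        # 2x zero padding around the pupil
    X, Y = np.meshgrid(c, c)
    R2 = X ** 2 + Y ** 2
    pupil = (R2 <= r_max ** 2).astype(float)
    W = defocus_D * R2 / 2.0                          # wavefront error in meters
    field = pupil * np.exp(2j * np.pi * W / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    psf /= psf.sum()                                  # unit-energy PSF
    mtf = np.abs(np.fft.fft2(np.fft.ifftshift(psf)))
    return psf, mtf / mtf[0, 0]
```

Adding defocus lowers the PSF peak (the Strehl ratio), which is the basic effect a vari-focal or multifocal display tries to avoid at the accommodated depth.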
12. Krajancich B, Padmanaban N, Wetzstein G. Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display. IEEE Trans Vis Comput Graph 2020;26:1871-1879. PMID: 32070978; DOI: 10.1109/tvcg.2020.2973443.
Abstract
Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.
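The mixed binary/continuous factorization can be illustrated with a simple coordinate-descent sketch: binary DMD patterns b_k and one uniform LED color c_k per frame, whose time average approximates the target AR image. This is a toy version under a simplified image-formation model, not the authors' algorithm; all names and the model itself are assumptions.

```python
import numpy as np

def factored_occlusion(target, scene, frames=4, iters=15, seed=0):
    """Coordinate descent: find binary DMD patterns b_k and LED colors c_k so
    that the time average of  b_k*scene + (1 - b_k)*c_k  approximates the
    target AR image. target, scene: float arrays in [0, 1], shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    K = frames
    b = rng.integers(0, 2, size=(K,) + target.shape[:2] + (1,)).astype(float)
    c = rng.random((K, 1, 1, 3))
    for _ in range(iters):
        for k in range(K):
            terms = b * scene + (1.0 - b) * c
            base = (terms.sum(axis=0) - terms[k]) / K      # all frames except k
            # LED color update: least-squares fit over this frame's "off" pixels
            off = b[k, ..., 0] == 0
            if off.any():
                c[k] = np.clip(K * (target - base)[off].mean(axis=0), 0, 1)
            # binary DMD update: per pixel, keep the lower-error mirror state
            err_on = ((target - base - scene / K) ** 2).sum(axis=-1)
            err_off = ((target - base - c[k] / K) ** 2).sum(axis=-1)
            b[k, ..., 0] = (err_on < err_off).astype(float)
    recon = (b * scene + (1.0 - b) * c).mean(axis=0)
    return b, c, recon
```

Each update exactly minimizes the squared error over its own variable, so the reconstruction error is non-increasing; a few sweeps already beat the no-occlusion baseline of showing the physical scene unmodified.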
13. Chang C, Cui W, Gao L. Holographic multiplane near-eye display based on amplitude-only wavefront modulation. Opt Express 2019;27:30960-30970. PMID: 31684337; DOI: 10.1364/oe.27.030960. Received: 08/29/2019; Accepted: 10/01/2019.
Abstract
We present a holographic multiplane near-eye display method based on Fresnel holography and amplitude-only wavefront modulation. Our method can create multiple focal images across a wide depth range while maintaining a high resolution (1080p) and refresh rate (60 Hz). To suppress the DC and conjugate terms inherent in amplitude-only wavefront modulation, we develop an optimization algorithm that completely separates the primary diffracted light from the DC and conjugate terms at a pre-defined intermediate plane. Spatial filtering at this plane leads to a dramatic increase in image contrast. Experimental results demonstrate that our approach can create continuous focus cues in complex 3D scenes.
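A classical illustration of why spatial filtering can isolate the primary diffracted light is off-axis encoding: a tilted reference carrier shifts the signal term away from the DC and conjugate terms in the Fourier plane. This is the textbook off-axis scheme, not the paper's optimization algorithm, and all names are hypothetical.

```python
import numpy as np

def encode_amplitude_hologram(obj, carrier=8):
    """Interfere the object field with a tilted plane-wave reference; the
    resulting intensity is a real, non-negative amplitude hologram containing
    DC, conjugate, and carrier-shifted signal terms."""
    ny, nx = obj.shape
    ref = np.exp(2j * np.pi * carrier * np.arange(nx) / nx)[None, :]
    return np.abs(ref + obj) ** 2

def filter_primary(holo, carrier=8, half_width=2):
    """Spatial filtering: keep only the band around the carrier frequency,
    which isolates ref * conj(obj) when the carrier exceeds 3x the object
    bandwidth (so the orders do not overlap)."""
    H = np.fft.fft2(holo)
    mask = np.zeros_like(H)
    mask[:, carrier - half_width: carrier + half_width + 1] = 1.0
    return np.fft.ifft2(H * mask)
```

For a band-limited object the separation is exact: demodulating the filtered field by the carrier recovers the conjugate of the object field.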
14. Huang H, Hua H. Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays. Opt Express 2019;27:25154-25171. PMID: 31510393; DOI: 10.1364/oe.27.025154.
Abstract
An integral-imaging-based light field head-mounted display, which typically renders a 3D scene by reconstructing, via array optics, the directional light rays apparently emitted by the scene, is potentially capable of rendering correct or nearly correct focus cues, and therefore of solving the well-known vergence-accommodation conflict that plagues conventional stereoscopic displays. Its true 3D image formation, however, imposes significant complications, and the well-established optical design process for conventional head-mounted displays becomes inadequate for the design challenges. To our best knowledge, no methods or frameworks have previously been proposed or demonstrated that address the challenges of modeling and optimizing the optics of this type of display system. In this paper, we present a novel and generalizable methodology and framework for designing and optimizing the optical performance of integral-imaging-based light field head-mounted displays, including methods of system configuration, user-defined metrics for characterizing the performance of such systems, and optimization strategies unique to light field displays. A design example is given to validate the proposed design method and framework.
15. Li S, Liu Y, Li Y, Liu S, Chen S, Su Y. Fast-response Pancharatnam-Berry phase optical elements based on polymer-stabilized liquid crystal. Opt Express 2019;27:22522-22531. PMID: 31510543; DOI: 10.1364/oe.27.022522. Received: 06/25/2019; Accepted: 07/16/2019.
Abstract
In this paper we demonstrate fast-response Pancharatnam-Berry (PB) phase optical elements (PBOEs) based on polymer-stabilized liquid crystal (PSLC). First, a non-interferometric photo-alignment technique is employed to generate PB patterns in a dye-doped liquid crystal with green laser light. The samples are then exposed to UV light to form polymer networks. Owing to the greatly increased elastic constant of the PSLC, all PBOEs achieve submillisecond response times while maintaining high diffraction efficiency (>90%). Furthermore, a varifocal PB lens (PBL) is implemented from two identical PB lens elements, and its application in fatigue-free augmented reality (AR) displays is verified. Fast-response PBOEs based on PSLC hold great potential for various display and photonics applications.
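The link between a larger elastic constant and submillisecond response follows from the standard free-relaxation decay time of an LC cell, tau = gamma1 * d^2 / (K * pi^2). The sketch below uses assumed, order-of-magnitude material parameters, not values from the paper.

```python
import math

def lc_decay_time(gamma1, K_eff, d):
    """Free-relaxation decay time of a liquid crystal cell:
    tau = gamma1 * d**2 / (K_eff * pi**2).
    gamma1: rotational viscosity (Pa*s); K_eff: effective elastic constant (N);
    d: cell gap (m). Polymer stabilization raises K_eff and so shrinks tau."""
    return gamma1 * d ** 2 / (K_eff * math.pi ** 2)
```

With a hypothetical 100x increase in the effective elastic constant, a millisecond-scale relaxation drops to tens of microseconds, consistent with the submillisecond claim.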
16. Wilson A, Hua H. Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses. Opt Express 2019;27:15627-15637. PMID: 31163757; DOI: 10.1364/oe.27.015627.
Abstract
Alvarez lenses offer accurate, high-speed, dynamic tuning of optical power through a lateral shift of two lens elements, making them an appealing means of eliminating the inherent decoupling of accommodation and convergence in conventional stereoscopic displays. In this paper, we present the design of a compact eyepiece coupled with two laterally shifting freeform Alvarez lenses to enable a compact, high-resolution, optical see-through head-mounted display (HMD). The proposed design can tune its focal depth from 0 to 3 diopters, rendering near-accurate focus cues with high image quality and a large undistorted see-through field of view (FOV). Our design uses a 1920x1080 color organic light-emitting diode (OLED) microdisplay to achieve a >30 degree diagonal virtual FOV, with an angular resolution of <0.85 arcminutes and an average optical performance of >0.4 contrast over the full field. We also experimentally demonstrate a fully functional benchtop prototype using mostly off-the-shelf optics.
17. Condino S, Carbone M, Piazza R, Ferrari M, Ferrari V. Perceptual Limits of Optical See-Through Visors for Augmented Reality Guidance of Manual Tasks. IEEE Trans Biomed Eng 2019;67:411-419. PMID: 31059421; DOI: 10.1109/tbme.2019.2914517.
Abstract
OBJECTIVE: The focal length of available optical see-through (OST) head-mounted displays (HMDs) is at least 2 m; therefore, during manual tasks, the user's eye cannot keep both the virtual and the real content in focus at the same time. Another perceptual limitation is the vergence-accommodation conflict, which is present in binocular vision only. This paper investigates the effect of incorrect focus cues on user performance, visual comfort, and workload during the execution of an augmented reality (AR)-guided manual task with one of the most advanced OST HMDs, the Microsoft HoloLens.

METHODS: An experimental study was designed to investigate the performance of 20 subjects in a connect-the-dots task, with and without the use of AR. Four conditions were tested: AR-guided monocular and binocular, and naked-eye monocular and binocular. Each trial was analyzed to evaluate the accuracy in connecting the dots. The NASA Task Load Index and Likert questionnaires were used to assess workload and visual comfort.

RESULTS: No statistically significant differences were found in workload or perceived comfort between the AR-guided binocular and monocular tests. User performance was significantly better during the naked-eye tests. No statistically significant performance differences were found between the monocular and binocular tests. The maximum error in the AR tests was 5.9 mm.

CONCLUSION: Even though there is growing interest in using commercial OST HMDs to guide high-precision manual tasks, attention should be paid to the limitations of the available technology, which was not designed for the peripersonal space.
18. Chen Q, Peng Z, Li Y, Liu S, Zhou P, Gu J, Lu J, Yao L, Wang M, Su Y. Multi-plane augmented reality display based on cholesteric liquid crystal reflective films. Opt Express 2019;27:12039-12047. PMID: 31052749; DOI: 10.1364/oe.27.012039. Received: 02/26/2019; Accepted: 03/25/2019.
Abstract
To address the accommodation-convergence conflict problem in conventional augmented reality (AR) head-mounted displays, we propose a compact multi-plane display design based on cholesteric liquid crystal (CLC) reflective films and a polarization switch. Because of the polarization selectivity of CLC films, circularly-polarized light with different handedness is reflected by different CLC films, resulting in different optical path lengths and different image depths by the lens. A flicker-free dual-plane prototype with correct focus cues and relatively low operating voltage has been implemented. Moreover, a multi-plane AR display scheme with more than 2 depth planes is proposed by stacking multiple CLC films and polarization switches together.
19
Cui W, Gao L. All-passive transformable optical mapping near-eye display. Sci Rep 2019;9:6064. PMID: 30988506. PMCID: PMC6465389. DOI: 10.1038/s41598-019-42507-0.
Abstract
We present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle. By employing a diffractive optical element, a distorted grating, the ATOM display can project different portions of a two-dimensional display screen to various depths, rendering a real three-dimensional image with correct focus cues. Thanks to its all-passive optical mapping architecture, the ATOM display features a reduced form factor and low power consumption. Moreover, the system can readily switch between a real three-dimensional and a high-resolution two-dimensional display mode, providing a task-tailored viewing experience for a variety of VR/AR applications.
Affiliation(s)
- Wei Cui: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, IL 61801, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA
- Liang Gao: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, IL 61801, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA

20
Chou PY, Wu JY, Huang SH, Wang CP, Qin Z, Huang CT, Hsieh PY, Lee HH, Lin TH, Huang YP. Hybrid light field head-mounted display using time-multiplexed liquid crystal lens array for resolution enhancement. Opt Express 2019;27:1164-1177. PMID: 30696184. DOI: 10.1364/oe.27.001164.
Abstract
In recent years, head-mounted display technologies have greatly advanced. In order to overcome the accommodation-convergence conflict, light field displays reconstruct three-dimensional (3D) images with a focusing cue but sacrifice resolution. In this paper, a hybrid head-mounted display system that is based on a liquid crystal microlens array is proposed. By using a time-multiplexed method, the display signals can be divided into light field and two-dimensional (2D) modes to show comfortable 3D images with high resolution compensated by the 2D image. According to the experimental results, the prototype supports a 12.28 ppd resolution in the diagonal direction, which reaches 82% of the traditional virtual reality (VR) head-mounted display (HMD).
21
Rathinavel K, Wang H, Blate A, Fuchs H. An Extended Depth-at-Field Volumetric Near-Eye Augmented Reality Display. IEEE Trans Vis Comput Graph 2018;24:2857-2866. PMID: 30207960. DOI: 10.1109/tvcg.2018.2868570.
Abstract
We introduce an optical design and a rendering pipeline for a full-color volumetric near-eye display that simultaneously presents imagery with near-accurate per-pixel focus across an extended volume ranging from 15 cm (6.7 diopters) to 4 m (0.25 diopters), allowing the viewer to accommodate freely across this entire depth range. This is achieved using a focus-tunable lens that continuously sweeps a sequence of 280 synchronized binary images from a high-speed digital micromirror device (DMD) projector, and a high-speed, high-dynamic-range (HDR) light source that illuminates the DMD images with a distinct color and brightness at each binary frame. Our rendering pipeline converts 3-D scene information into a 2-D surface of color voxels, which is decomposed into 280 binary images in a voxel-oriented manner, such that 280 distinct depth positions for full-color voxels can be displayed.
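The voxel-oriented decomposition described in this abstract can be sketched as follows. Everything here is an illustrative simplification, not the authors' pipeline: the function name, the per-frame illumination rule (mean color of the voxels assigned to that depth), and the toy frame count are all assumptions.

```python
import numpy as np

def decompose_voxels(rgb, depth_idx, n_frames=280):
    """Split a 2-D surface of color voxels into per-depth binary frames.

    rgb       : (H, W, 3) float array, voxel colors in [0, 1]
    depth_idx : (H, W) int array, frame index (0..n_frames-1) per voxel
    Returns a list of (mask, color) pairs: mask is the binary image shown
    while the tunable lens sweeps through that depth; color is the
    illumination applied to the whole frame (here simply the mean voxel
    color at that depth, an illustrative rule).
    """
    frames = []
    for i in range(n_frames):
        mask = depth_idx == i                      # voxels lit at depth i
        color = rgb[mask].mean(axis=0) if mask.any() else np.zeros(3)
        frames.append((mask, color))
    return frames
```

In a real system each binary frame would further need to be synchronized with the tunable lens sweep and the HDR light source; this sketch only shows the depth-wise binarization step.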
22
Grubert J, Itoh Y, Moser K, Swan JE. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Trans Vis Comput Graph 2018;24:2649-2662. PMID: 28961115. DOI: 10.1109/tvcg.2017.2754257.
Abstract
Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects, which are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position. However, to date, there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it also identifies opportunities for future research.
23
Kim D, Lee S, Moon S, Cho J, Jo Y, Lee B. Hybrid multi-layer displays providing accommodation cues. Opt Express 2018;26:17170-17184. PMID: 30119532. DOI: 10.1364/oe.26.017170.
Abstract
Hybrid multi-layer displays that combine additive light field (LF) displays and multiplicative LF displays are proposed. The system is implemented by integrating the multiplicative LF displays with a half mirror to expand the overall depth of field. The hybrid displays are advantageous in that the form factor is competitive with existing two-layer additive LF displays implemented with a half mirror and two panels, only half the brightness loss of two-layer multiplicative LF displays is incurred, and no time-division is required to provide images for the multi-layer displays. The images for the presentation planes are processed by light field factorization and optimized with the presented algorithm. Retinal images are reconstructed for various accommodation states and display types to check the accommodation response, and are used to compare the proposed displays with existing displays. With a ray-tracing method, the retinal images generated by the proposed displays can be obtained. To verify the feasibility of the system, a prototype of the hybrid multi-layer displays was implemented, and display photographs were captured with different accommodation states of the camera. The simulation and experimental results confirmed that this system supports accommodation cues over a range of 1.8 diopters.
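The light-field factorization step mentioned in this abstract lends itself to a small sketch: in a purely multiplicative two-layer display, each ray's intensity is the product of the two layer transmittances it crosses, so a target can be fit with nonnegative multiplicative (NMF-style) updates. The rank-1 toy below, assuming illustrative function names, iteration counts, and initialization, is not the paper's optimizer, which handles full 4-D light fields.

```python
import numpy as np

def factor_two_layers(target, iters=500, eps=1e-9):
    """Fit target[i, j] ~ front[i] * rear[j] with NMF-style updates.

    front/rear model the transmittances of the two multiplicative
    layers; the updates keep both nonnegative, as a physical
    transmittance must be.
    """
    m, n = target.shape
    front = np.full(m, 0.5)
    rear = np.full(n, 0.5)
    for _ in range(iters):
        front *= (target @ rear) / (front * (rear @ rear) + eps)
        rear *= (target.T @ front) / (rear * (front @ front) + eps)
    return front, rear
```

The individual layers are only determined up to a common scale factor, but their outer product converges to the target whenever the target is an exactly representable (rank-1, nonnegative) pattern.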
24
Huang H, Hua H. High-performance integral-imaging-based light field augmented reality display using freeform optics. Opt Express 2018;26:17578-17590. PMID: 30119569. DOI: 10.1364/oe.26.017578.
Abstract
A new integral-imaging-based light field augmented-reality display is proposed and implemented, for the first time to the best of our knowledge, to achieve a wide see-through view and high image quality over a large depth range. By using custom-designed freeform optics and incorporating a tunable lens and an aperture array, we demonstrated a compact light field head-mounted display that offers a true 3D display view of 30° by 18°, maintains a spatial resolution of 3 arc minutes across a depth range of over 3 diopters, and provides a see-through field of view of 65° by 40°.
25
Liu S, Li Y, Zhou P, Chen Q, Su Y. Reverse-mode PSLC multi-plane optical see-through display for AR applications. Opt Express 2018;26:3394-3403. PMID: 29401867. DOI: 10.1364/oe.26.003394.
Abstract
In this paper, we propose an optical see-through multi-plane display with a reverse-mode polymer-stabilized liquid crystal (PSLC). Our design solves the accommodation-vergence conflict problem by providing correct focus cues. In the reverse-mode PSLC system, power consumption can be reduced to ~1/(N-1) of that in a normal-mode system if N planes are displayed. The PSLC films fabricated in our experiment exhibit a low saturation voltage (~20 Vrms), a high transparent-state transmittance (92%), a fast switching time (within 2 ms), and polarization insensitivity. A proof-of-concept two-plane color display prototype and a four-plane monocolor display prototype were implemented.
26
Huang H, Hua H. Systematic characterization and optimization of 3D light field displays. Opt Express 2017;25:18508-18525. PMID: 29041051. DOI: 10.1364/oe.25.018508.
Abstract
One of the key issues in conventional stereoscopic displays is the well-known vergence-accommodation conflict problem, due to the inability to render correct focus cues for 3D scenes. Recently, several light field display methods have been explored to reconstruct a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. These methods are potentially capable of rendering correct or nearly correct focus cues and addressing the vergence-accommodation conflict problem. In this paper, we describe a generalized framework to model the image formation process of existing light-field display methods and present a systematic method to simulate and characterize the retinal image and the accommodation response rendered by a light field display. We further employ this framework to investigate the trade-offs and guidelines for an optimal 3D light field display design. Our method is based on quantitatively evaluating the modulation transfer functions of the perceived retinal image of a light field display, accounting for the ocular factors of the human visual system.
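The core quantity in such a characterization, the modulation transfer function (MTF) of the perceived retinal image, can be computed generically as the normalized magnitude of the Fourier transform of the point spread function. The sketch below is the standard Fourier-optics identity, not the authors' simulation code; the function name is an assumption.

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF = |FFT of the energy-normalized point spread function|.

    A perfect (delta-function) PSF gives MTF = 1 at all spatial
    frequencies; any blur pulls the high-frequency response down.
    """
    otf = np.fft.fft2(psf / psf.sum())  # optical transfer function
    return np.abs(otf)
```

In a full retinal-image evaluation the PSF itself would be derived from the display's ray sampling and the eye's ocular aberrations; this sketch covers only the final PSF-to-MTF step.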
27
Cui W, Gao L. Optical mapping near-eye three-dimensional display with correct focus cues. Opt Lett 2017;42:2475-2478. PMID: 28957270. DOI: 10.1364/ol.42.002475.
Abstract
We present an optical mapping near-eye (OMNI) three-dimensional display method for wearable devices. By dividing a display screen into different subpanels and optically mapping them to various depths, we create a multiplane volumetric image with correct focus cues for depth perception. The resultant system can drive the eye's accommodation to the distance that is consistent with binocular stereopsis, thereby alleviating the vergence-accommodation conflict, the primary cause for eye fatigue and discomfort. Compared with the previous methods, the OMNI display offers prominent advantages in adaptability, image dynamic range, and refresh rate.
28
Hong JY, Lee CK, Lee S, Lee B, Yoo D, Jang C, Kim J, Jeong J, Lee B. See-through optical combiner for augmented reality head-mounted display: index-matched anisotropic crystal lens. Sci Rep 2017;7:2753. PMID: 28584247. PMCID: PMC5459829. DOI: 10.1038/s41598-017-03117-w.
Abstract
A novel see-through optical device that combines the real world and a virtual image, called an index-matched anisotropic crystal lens (IMACL), is proposed. A convex lens made of an anisotropic crystal is enveloped in an isotropic material whose refractive index matches the extraordinary refractive index of the crystal. The device functions as transparent glass or as a lens depending on the polarization state of the incident light. With this optical property, the IMACL can be used in see-through near-eye displays and head-mounted displays for augmented reality. The optical properties of the device are analyzed, and the aberration caused by the anisotropy of the crystal is described with simulations. The concept of a head-mounted display using the IMACL is introduced, and performance measures such as field of view, form factor, and transmittance are analyzed. A prototype is implemented to verify the proposed system, and experimental results show the mixture of the virtual image and the real-world scene.
Affiliation(s)
- Jong-Young Hong, Chang-Kun Lee, Seungjae Lee, Byounghyo Lee, Dongheon Yoo, Changwon Jang, Jonghyun Kim, Jinsoo Jeong, Byoungho Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, South Korea

29
Otero C, Aldaba M, Martínez-Navarro B, Pujol J. Effect of apparent depth cues on accommodation in a Badal optometer. Clin Exp Optom 2017;100:649-655. PMID: 28326607. DOI: 10.1111/cxo.12534.
Abstract
BACKGROUND The aim was to analyse the effect of peripheral depth cues on accommodation in Badal optometers. METHODS Monocular refractions at 0.17 and 5.00 D of accommodative stimulus were measured with the PowerRef II autorefractor (Plusoptix Inc., Atlanta, Georgia, USA). Subjects looked, in random order, at four different scenes: one real scene comprising familiar objects at different depth planes (Real) and three virtual scenes comprising different two-dimensional pictures seen through a Badal lens. The first image was a photograph of the real scene taken in conditions that closely mimic the performance of a healthy standard human eye (out-of-focus [OoF] blur); the second was the same photograph rendered with a depth of focus extending to infinity (OoF sharpness); and the third consisted of a fixation target and an even white surround (White). In all cases the field of view was 25.0° and the fixation target was a Maltese cross subtending two degrees. RESULTS Twenty-eight right eyes from healthy young subjects were measured. The achieved statistical power was 0.9. At 5.00 D of accommodative stimulus, the repeated-measures analysis of variance was statistically significant (p < 0.05), and the corresponding Bonferroni post hoc tests showed the following mean accommodative response differences and standard deviations (p-values) between the real and virtual scenes: real-white = -0.66 ± 0.92 D (p < 0.01); real-OoF sharpness = -0.43 ± 0.88 D (p = 0.07); real-OoF blur = -0.25 ± 0.93 D (p = 0.89). CONCLUSIONS A stimulus poor in depth cues inaccurately stimulates accommodation in Badal optometers; however, accommodation can be significantly improved in the same Badal optometer when displaying a realistic image rich in peripheral depth cues, even though these peripheral cues (also referred to as retinal blur cues) are shown in the same plane as the fixation target. These results have important implications for stereoscopic virtual reality systems that fail to represent retinal blur appropriately.
Affiliation(s)
- Carles Otero: Davalor Research Centre, Polytechnic University of Catalonia, Terrassa, Spain; Centre for Sensors, Instruments and Systems Development (CD6), Polytechnic University of Catalonia, Terrassa, Spain
- Mikel Aldaba: Davalor Research Centre, Polytechnic University of Catalonia, Terrassa, Spain; Centre for Sensors, Instruments and Systems Development (CD6), Polytechnic University of Catalonia, Terrassa, Spain
- Jaume Pujol: Davalor Research Centre, Polytechnic University of Catalonia, Terrassa, Spain; Centre for Sensors, Instruments and Systems Development (CD6), Polytechnic University of Catalonia, Terrassa, Spain

30
Lee CK, Moon S, Lee S, Yoo D, Hong JY, Lee B. Compact three-dimensional head-mounted display system with Savart plate. Opt Express 2016;24:19531-19544. PMID: 27557230. DOI: 10.1364/oe.24.019531.
Abstract
We propose a three-dimensional (3D) head-mounted display (HMD) that provides multi-focal and wearable functions by using polarization-dependent optical path switching in a Savart plate. The multi-focal function is implemented by optically duplicating a micro display with a high pixel density of 1666 pixels per inch in the longitudinal direction according to the polarization state. The combination of the micro display, a fast-switching polarization rotator, and the Savart plate retains a form factor small enough to be wearable. The optical aberrations of the duplicated panels are investigated by ray tracing as a function of both wavelength and polarization state. Astigmatism and the lateral chromatic aberration of the extraordinary wave are compensated by modifying the Savart plate and by a sub-pixel shifting method, respectively. To verify the feasibility of the proposed system, a prototype monocular HMD module is implemented. The module has a compact size of 40 mm by 90 mm by 40 mm and weighs 131 g. The micro display and polarization rotator are synchronized in real time at 30 Hz, and two focal planes are formed 640 and 900 mm from the eye box. In experiments, the prototype also provides an augmented reality function by combining the optically duplicated panels with a beam splitter. The multi-focal function of the duplicated panels is verified without astigmatism or color dispersion compensation. When light field optimization for the two additive layers is performed, perspective images are observed, confirming the integration of the real-world scene with high-quality 3D images.
31
Abstract
Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes the progress made and what remains to be solved. A particularly active area of technical development has been building displays that create the correct relationship between viewing parameters and triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because, in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences, and we highlight areas in which further research and development are needed.
32
Johnson PV, Parnell JAQ, Kim J, Saunter CD, Love GD, Banks MS. Dynamic lens and monovision 3D displays to improve viewer comfort. Opt Express 2016;24:11808-11827. PMID: 27410105. PMCID: PMC5025225. DOI: 10.1364/oe.24.011808.
Abstract
Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect, causing mismatches between accommodation and vergence: the eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes. The power of the lenses is yoked to the expected vergence distance, thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. This dynamic-lens technique therefore offers an easily implemented way of reducing the vergence-accommodation conflict and thereby improving viewer experience.
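The diopter arithmetic behind yoking the lens power to vergence distance is simple: with the screen at a fixed distance, the variable-focus lens must supply the difference between the screen's vergence and the simulated distance's vergence. The sketch below assumes a thin lens placed at the eye; the function name is illustrative, not from the paper.

```python
def dynamic_lens_power(screen_dist_m, simulated_dist_m):
    """Lens power (diopters) that moves the accommodative demand from
    the fixed screen distance to the simulated (vergence) distance.

    Light from the screen reaches the lens with vergence
    -1/screen_dist_m; after a thin lens of power P it becomes
    -1/screen_dist_m + P, which should equal -1/simulated_dist_m.
    """
    return 1.0 / screen_dist_m - 1.0 / simulated_dist_m
```

For example, a screen at 0.5 m (2 D) simulating content at 2 m (0.5 D) calls for +1.5 D of lens power under this thin-lens approximation.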
Affiliation(s)
- Paul V. Johnson: UC Berkeley – UCSF Graduate Program in Bioengineering, Berkeley, CA 94720, USA
- Joohwan Kim: Vision Science Program, School of Optometry, University of California, Berkeley, CA 94720, USA
- Martin S. Banks: UC Berkeley – UCSF Graduate Program in Bioengineering, Berkeley, CA 94720, USA; Vision Science Program, School of Optometry, University of California, Berkeley, CA 94720, USA

33
Hu X, Hua H. Design and tolerance of a free-form optical system for an optical see-through multi-focal-plane display. Appl Opt 2015;54:9990-9999. PMID: 26836568. DOI: 10.1364/ao.54.009990.
Abstract
By elegantly combining recent advancements of free-form optical technology and multi-focal-plane (MFP) display technology, we developed a high-performance true 3D augmented reality (AR) display that is capable of rendering a large volume of 3D scenes with accurate focus cues; this display overcomes the accommodation-convergence discrepancy problem in conventional AR displays. In this paper, we concentrate on various aspects of the engineering challenges in the design and integration of a free-form optical see-through eyepiece with MFP technology for our AR display prototype. We present the design and optimization strategy for coupling free-form optics with a rotationally symmetric lens system to achieve high image quality. A comprehensive tolerance analysis of this complicated optical system is also presented, including an effective tolerance method for random surface figure errors on aspheric and free-form surfaces. Finally, the image quality of the virtual display is evaluated, showing that the as-built performance matches very well with the optical design results and tolerance analysis.
35
Kim DY, Seo JW. A diffuser-based three-dimensional measurement of polarization-dependent scattering characteristics of optical films for 3D-display applications. Opt Express 2015;23:1063-1072. PMID: 25835866. DOI: 10.1364/oe.23.001063.
Abstract
We propose an accurate and easy-to-use three-dimensional measurement method using a diffuser plate to analyze the scattering characteristics of optical films. The far-field radiation pattern of light scattered by the optical film is obtained from the illuminance pattern the light creates on the diffuser plate. A mathematical model and calibration methods are described, and the results are compared with those obtained by direct measurement using a luminance meter. The new method gives very precise three-dimensional polarization-dependent scattering characteristics of scattering polarizer films, and it can play an effective role in developing high-performance polarization-selective screens for 3D display applications.
36
Hu X, Hua H. High-resolution optical see-through multi-focal-plane head-mounted display using freeform optics. Opt Express 2014;22:13896-13903. PMID: 24921581. DOI: 10.1364/oe.22.013896.
Abstract
Conventional stereoscopic displays force an unnatural decoupling of the accommodation and convergence cues, which may contribute to various visual artifacts and have adverse effects on depth perception accuracy. In this paper, we present the design and implementation of a high-resolution optical see-through multi-focal-plane head-mounted display enabled by state-of-the-art freeform optics. The prototype system is capable of rendering nearly-correct focus cues for a large volume of 3D space, extending into a depth range from 0 to 3 diopters. The freeform optics, consisting of a freeform prism eyepiece and a freeform lens, demonstrates an angular resolution of 1.8 arcminutes across a 40-degree diagonal field of view in the virtual display path while providing a 0.5 arcminutes angular resolution to the see-through view.
37
Hua H, Javidi B. A 3D integral imaging optical see-through head-mounted display. Opt Express 2014;22:13484-13491. PMID: 24921542. DOI: 10.1364/oe.22.013484.
Abstract
An optical see-through head-mounted display (OST-HMD), which enables the optical superposition of digital information onto the direct view of the physical world while maintaining see-through vision of the real world, is a vital component in an augmented reality (AR) system. A key limitation of state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem, caused by the fact that the image source in most existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD design that combines recent advancements in freeform optical technology with the microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for the HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of emerging freeform optical technology, our approach results in a compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and visual fatigue. A proof-of-concept prototype system is demonstrated, offering a goggle-like compact form factor, a non-obstructive see-through field of view, and a true 3D virtual display.
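The micro-InI reconstruction described here rests on standard integral-imaging geometry: each lenslet records (or displays) the scene from its own position, and a 3-D point maps into each elemental image by similar triangles through the lenslet center. The pinhole-model sketch below, assuming illustrative function and parameter names and ignoring lens aberrations, shows that projection step only.

```python
def elemental_images(points, lens_pitch, grid, gap):
    """Pinhole-model sketch of elemental image generation for a
    lenslet array.

    points     : list of (x, y, z) scene points, z > 0 in front of array
    lens_pitch : center-to-center lenslet spacing
    grid       : lenslets per side (grid x grid array)
    gap        : distance from lenslet array to display plane
    Returns, per lenslet (i, j), the display-plane offsets of each
    point relative to that lenslet's center.
    """
    out = {}
    for i in range(grid):
        for j in range(grid):
            cx, cy = i * lens_pitch, j * lens_pitch  # lenslet center
            proj = []
            for x, y, z in points:
                # similar triangles: image offset scales by gap / z,
                # inverted through the pinhole
                proj.append((-(x - cx) * gap / z, -(y - cy) * gap / z))
            out[(i, j)] = proj
    return out
```

A real micro-InI pipeline renders full perspective images per lenslet and handles occlusion and lens apertures; this sketch only captures the per-point parallax between neighboring elemental images.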