1. Zhao N, Xiao J, Weng P, Zhang H. Tomographic waveguide-based augmented reality display. Opt Express 2024;32:18692-18699. PMID: 38859019. DOI: 10.1364/oe.524983.
Abstract
A tomographic waveguide-based augmented reality display technique is proposed for near-eye three-dimensional (3D) display with accurate depth reconstructions. A pair of tunable lenses with complementary focuses is utilized to project tomographic virtual 3D images while maintaining the correct perception of the real scene. This approach reconstructs virtual 3D images with physical depth cues, thereby addressing the vergence-accommodation conflict inherent in waveguide augmented reality systems. A prototype has been constructed and optical experiments have been conducted, demonstrating the system's capability in delivering high-quality 3D scenes for waveguide-based augmented reality display.
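The complementary-focus idea can be captured in a few lines: if the compensation lens always carries the negative of the sweep lens's optical power, the see-through path stays unperturbed while the virtual-image depth sweeps. The sketch below assumes idealized thin lenses in contact and a sinusoidal sweep; none of the parameters come from the paper.

```python
import numpy as np

# Minimal sketch of a complementary tunable-lens pair (sweep rate, power
# offset, and amplitude are assumptions, not taken from the paper).
f_sweep_hz = 60.0                       # assumed focal-sweep rate
t = np.linspace(0, 1 / f_sweep_hz, 8)   # sample instants within one sweep

phi1 = 2.0 + 1.5 * np.sin(2 * np.pi * f_sweep_hz * t)  # eyepiece lens power (D)
phi2 = -phi1                                           # complementary lens power (D)

depth_m = 1.0 / phi1          # virtual-image distance addressed at each instant
# Thin lenses in contact add powers, so the net power seen by light from the
# real scene stays zero throughout the sweep.
net_real_scene_power = phi1 + phi2
print(depth_m, net_real_scene_power)   # net power is 0 D at every sample
```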
2. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Trans Vis Comput Graph 2023;29:4782-4793. PMID: 37782599. DOI: 10.1109/tvcg.2023.3320231.
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
3. Ebner C, Mohr P, Langlotz T, Peng Y, Schmalstieg D, Wetzstein G, Kalkofen D. Off-Axis Layered Displays: Hybrid Direct-View/Near-Eye Mixed Reality with Focus Cues. IEEE Trans Vis Comput Graph 2023;29:2816-2825. PMID: 37027729. DOI: 10.1109/tvcg.2023.3247077.
Abstract
This work introduces off-axis layered displays, the first approach to stereoscopic direct-view displays with support for focus cues. Off-axis layered displays combine a head-mounted display with a traditional direct-view display to encode a focal stack and thus provide focus cues. To explore the novel display architecture, we present a complete processing pipeline for the real-time computation and post-render warping of off-axis display patterns. We build two prototypes using a head-mounted display in combination with a stereoscopic direct-view display and with a more widely available monoscopic direct-view display. We further show how extending off-axis layered displays with an attenuation layer and with eye tracking can improve image quality. We thoroughly analyze each component in a technical evaluation and present examples captured through our prototypes.
4. Wang L, Tabata S, Xu H, Hu Y, Watanabe Y, Ishikawa M. Dynamic depth-of-field projection mapping method based on a variable focus lens and visual feedback. Opt Express 2023;31:3945-3953. PMID: 36785374. DOI: 10.1364/oe.478416.
Abstract
Dynamic projection mapping is an interactive display technology that lets multiple viewers experience augmented reality with the naked eye. However, the fixed and shallow depth of field of projector optics limits its potential applications. In this work, we present a high-speed projection mapping method with dynamic focal tracking based on a variable focus lens. The proposed system comprises a high-speed variable focus lens, a high-speed camera, and a high-speed projector, so that depth and rotation information is detected and fed back to correct the focal length and update the projected content in real time. As a result, the content stays well focused even when projected onto a dynamically moving 3D object. The response time of the high-speed prototype reaches around 5 ms, and the dynamic projection range covers 0.5 to 2.0 m.
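As a rough illustration of the feedback loop described above (not the authors' implementation), the sketch below maps a measured depth to a lens-power command with a simple proportional update; the diopter model, the gain, and all names are assumptions.

```python
import numpy as np

def diopters_for_depth(depth_m, projector_offset_d=0.0):
    """Lens power (diopters) that focuses the projection at depth_m.

    For a variable-focus lens at the projector output, the required power
    change is approximately the vergence of the target plane (assumed model).
    """
    return 1.0 / depth_m + projector_offset_d

def control_step(measured_depth_m, current_power_d, gain=0.8):
    # Proportional update toward the target power; the gain and the simple
    # first-order loop are illustrative, not the paper's controller.
    target = diopters_for_depth(measured_depth_m)
    return current_power_d + gain * (target - current_power_d)

power = diopters_for_depth(1.0)           # start focused at 1 m
for depth in [1.0, 0.8, 0.6, 0.5, 1.2]:   # depths reported by the camera
    power = control_step(depth, power)
    print(f"depth {depth:.2f} m -> lens power {power:.2f} D")
```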
5. Kim D, Kim B, Shin B, Shin D, Lee CK, Chung JS, Seo J, Kim YT, Sung G, Seo W, Kim S, Hong S, Hwang S, Han S, Kang D, Lee HS, Koh JS. Actuating compact wearable augmented reality devices by multifunctional artificial muscle. Nat Commun 2022;13:4155. PMID: 35851053. PMCID: PMC9293895. DOI: 10.1038/s41467-022-31893-1.
Abstract
An artificial muscle actuator resolves practical engineering problems in compact wearable devices, which are constrained by conventional actuators such as electromagnetic actuators. Abstracting the fundamental advantages of an artificial muscle actuator yields a small-scale, high-power actuating system with sensing capability for developing varifocal augmented reality glasses and naturally fitting haptic gloves. Here, we design a shape-memory-alloy-based lightweight and high-power artificial muscle actuator, the so-called compliant amplified shape memory alloy actuator. Despite its light weight (0.22 g), the actuator delivers a high power density of 1.7 kW/kg and an actuation strain of 300% under an 80 g external payload. We show how the actuator enables image depth control and an immersive tactile response in the form of augmented reality glasses and two-way communication haptic gloves, whose thin form factor and high power density can hardly be achieved with conventional actuators. Artificial muscle actuators enabled by responsive functional materials such as shape memory alloys are promising candidates for compact wearable devices. Here, the authors demonstrate augmented reality glasses and two-way communication haptic gloves capable of image depth control and immersive tactile response.
Affiliation(s)
- Dongjin Kim: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
- Baekgyeom Kim: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
- Bongsu Shin: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Dongwook Shin: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
- Chang-Kun Lee: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Jae-Seung Chung: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Juwon Seo: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Yun-Tae Kim: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Geeyoung Sung: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung Electronics, 34, Seongchon-gil, Seocho-gu, Seoul, 06765, Republic of Korea
- Wontaek Seo: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea
- Sunil Kim: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea
- Sunghoon Hong: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea
- Sungwoo Hwang: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Samsung SDS, 125, Olympic-ro, 35-gil, Songpa-gu, Seoul, 05510, Republic of Korea
- Seungyong Han: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
- Daeshik Kang: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
- Hong-Seok Lee: Samsung Advanced Institute of Technology, Samsung Electronics, 130 Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16678, Republic of Korea; Department of Electrical and Computer Engineering, Seoul National University, 1, Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Je-Sung Koh: Department of Mechanical Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 16499, Republic of Korea
6. Li Q, Deng H, Yang C, He W, Zhong F. Locally controllable 2D/3D mixed display and image generation method. Opt Express 2022;30:22838-22847. PMID: 36224975. DOI: 10.1364/oe.455320.
Abstract
In this paper, a locally controllable two-dimensional (2D)/three-dimensional (3D) mixed display system and a corresponding image generation method are proposed. The proposed system is mainly composed of a collimating backlight module (CBM) and a light control module (LCM). The CBM provides collimated polarized light. The LCM modulates part of the collimated polarized light into point light sources for 3D display and the rest into scattered light sources for 2D display. The 2D and 3D display states can be locally controlled using a pixelated mask loaded on a polarization switching layer. In addition, a corresponding image generation method is proposed. According to the observer's demand, the parallax image is divided into a target image area and a residual image area using a deep-learning matting algorithm, and a 2D/3D mixed light-field image with a full-parallax 3D target image and a high-resolution 2D residual image is generated. We developed a prototype based on the proposed locally controllable 2D/3D mixed display structure and generated two sets of 2D/3D mixed light-field images with different target and residual objects from the same parallax images. The experimental results demonstrated the effectiveness of the proposed system and image generation method: a high-resolution 2D image and a full-parallax 3D image were displayed and locally switched in the experimental system.
7. Nomoto T, Li W, Peng HL, Watanabe Y. Dynamic Multi-projection Mapping Based on Parallel Intensity Control. IEEE Trans Vis Comput Graph 2022;28:2125-2134. PMID: 35167463. DOI: 10.1109/tvcg.2022.3150488.
Abstract
Projection mapping using multiple projectors is promising for spatial augmented reality; however, it is difficult to apply it to dynamic scenes. This is because the conventional method decides all pixel intensities of multiple images simultaneously based on the global optimization method, and it is hard to reduce the latency from motion to projection. To mitigate this, we propose a novel method of controlling the intensity based on a pixel-parallel calculation for each projector in real-time with low latency. This parallel calculation leverages the insight that the projected pixels from different projectors in overlapping areas can be approximated independently if the pixel is sufficiently small relative to the surface structure. Additionally, our pixel-parallel calculation method allows a distributed system configuration, such that the number of projectors can be increased to form a network for high scalability. We demonstrate a seamless mapping into dynamic scenes at 360 fps with a 9.5-ms latency using ten cameras and four projectors.
8. Ebner C, Mori S, Mohr P, Peng Y, Schmalstieg D, Wetzstein G, Kalkofen D. Video See-Through Mixed Reality with Focus Cues. IEEE Trans Vis Comput Graph 2022;28:2256-2266. PMID: 35167471. DOI: 10.1109/tvcg.2022.3150504.
Abstract
This work introduces the first approach to video see-through mixed reality with full support for focus cues. By combining the flexibility to adjust the focus distance found in varifocal designs with the robustness to eye-tracking error found in multifocal designs, our novel display architecture reliably delivers focus cues over a large workspace. In particular, we introduce gaze-contingent layered displays and mixed reality focal stacks, an efficient representation of mixed reality content that lends itself to fast processing for driving layered displays in real time. We thoroughly evaluate this approach by building a complete end-to-end pipeline for capture, render, and display of focus cues in video see-through displays that uses only off-the-shelf hardware and compute components.
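A mixed reality focal stack, at its simplest, is a set of differently defocused slices of one scene. The sketch below builds such a stack from an image and a depth map with a coarse per-slice Gaussian defocus model; this is an illustrative approximation, not the paper's capture-render-display pipeline, and all parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focal_stack(image, depth_m, focus_dists_m, blur_gain=4.0):
    """Return one defocused slice per focal distance (coarse model)."""
    stack = []
    for f in focus_dists_m:
        # Dioptric defocus between each pixel's depth and the focal plane;
        # a single average blur per slice keeps the sketch short.
        defocus = np.abs(1.0 / depth_m - 1.0 / f)
        sigma = blur_gain * float(defocus.mean())
        stack.append(gaussian_filter(image, sigma=sigma))
    return np.stack(stack)

rgb = np.random.rand(64, 64)                  # stand-in captured frame
depth = np.full((64, 64), 0.75)               # stand-in depth map (m)
stack = focal_stack(rgb, depth, [0.5, 1.0, 2.0])
print(stack.shape)                            # (3, 64, 64)
```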
9. Velez-Zea A, Barrera-Ramirez JF, Torroba R. Alternative constraints for improved multiplane hologram generation. Appl Opt 2022;61:B8-B16. PMID: 35201120. DOI: 10.1364/ao.439708.
Abstract
In this work, we introduce a modified hologram plane constraint to improve the accuracy of the global Gerchberg-Saxton (GGS) algorithm used for multiplane phase-only hologram generation. This constraint consists of a modified phase factor that depends on the amplitude of the field in the hologram plane. We demonstrate that this constraint produces an increase in the mean correlation coefficient between the reconstructed planes from a multiplane hologram and the corresponding amplitude targets for each plane. Furthermore, this constraint can be applied together with a mixed constraint in the reconstruction planes, leading to a more uniform and controllable reproduction of a target intensity distribution. To confirm the validity of our proposal, we show numerical and experimental results for multiplane holograms with six discrete planes, using both high and low contrast targets. For the experimental results, we implement a holographic projection scheme based on a phase-only spatial light modulator.
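For context, the global Gerchberg-Saxton loop the paper builds on alternates between amplitude constraints at each target plane and a phase-only constraint at the hologram plane. The sketch below implements that baseline with angular-spectrum propagation; the paper's modified amplitude-dependent hologram constraint is not reproduced here, and the wavelength, pitch, and plane spacing are assumed values.

```python
import numpy as np

def asm_propagate(field, z, wavelength, pitch):
    """Angular-spectrum propagation of a square complex field by distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def global_gs(targets, zs, wavelength=532e-9, pitch=8e-6, iters=30):
    n = targets[0].shape[0]
    field = np.exp(1j * 2 * np.pi * np.random.rand(n, n))  # random start
    for _ in range(iters):
        # Forward: enforce the amplitude target at every plane, back-propagate.
        acc = np.zeros((n, n), dtype=complex)
        for target, z in zip(targets, zs):
            plane = asm_propagate(field, z, wavelength, pitch)
            plane = target * np.exp(1j * np.angle(plane))  # amplitude constraint
            acc += asm_propagate(plane, -z, wavelength, pitch)
        # Hologram-plane constraint: keep phase only (phase-only SLM).
        field = np.exp(1j * np.angle(acc))
    return np.angle(field)

targets = [np.random.rand(128, 128) for _ in range(6)]     # six target planes
phase = global_gs(targets, zs=np.linspace(0.10, 0.20, 6))
```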
10. Yoo D, Lee S, Jo Y, Cho J, Choi S, Lee B. Volumetric Head-Mounted Display With Locally Adaptive Focal Blocks. IEEE Trans Vis Comput Graph 2022;28:1415-1427. PMID: 32746283. DOI: 10.1109/tvcg.2020.3011468.
Abstract
A commercial head-mounted display (HMD) for virtual reality (VR) presents three-dimensional imagery with a fixed focal distance. The VR HMD with a fixed focus can cause visual discomfort to an observer. In this article, we propose a novel design of a compact VR HMD supporting near-correct focus cues over a wide depth of field (from 18 cm to optical infinity). The proposed HMD consists of a low-resolution binary backlight, a liquid crystal display panel, and focus-tunable lenses. In the proposed system, the backlight locally illuminates the display panel that is floated by the focus-tunable lens at a specific distance. The illumination moment and the focus-tunable lens' focal power are synchronized to generate focal blocks at the desired distances. The distance of each focal block is determined by depth information of three-dimensional imagery to provide near-correct focus cues. We evaluate the focus cue fidelity of the proposed system considering the fill factor and resolution of the backlight. Finally, we verify the display performance with experimental results.
11. Kimura S, Iwai D, Punpongsanon P, Sato K. Multifocal Stereoscopic Projection Mapping. IEEE Trans Vis Comput Graph 2021;27:4256-4266. PMID: 34449374. DOI: 10.1109/tvcg.2021.3106486.
Abstract
Stereoscopic projection mapping (PM) allows a user to see a three-dimensional (3D) computer-generated (CG) object floating over physical surfaces of arbitrary shapes around us using projected imagery. However, the current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues, which causes a vergence-accommodation conflict (VAC). Therefore, we propose a multifocal approach to mitigate VAC in stereoscopic PM. Our primary technical contribution is to attach electrically focus-tunable lenses (ETLs) to active shutter glasses to control both vergence and accommodation. Specifically, we apply fast and periodical focal sweeps to the ETLs, which causes the "virtual image" (as an optical term) of a scene observed through the ETLs to move back and forth during each sweep period. A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance. This provides an observer with the correct focus cues required. In this study, we solve three technical issues that are unique to stereoscopic PM: (1) The 3D CG object is displayed on non-planar and even moving surfaces; (2) the physical surfaces need to be shown without the focus modulation; (3) the shutter glasses additionally need to be synchronized with the ETLs and the projector. We also develop a novel compensation technique to deal with the "lens breathing" artifact that varies the retinal size of the virtual image through focal length modulation. Further, using a proof-of-concept prototype, we demonstrate that our technique can present the virtual image of a target 3D CG object at the correct depth. Finally, we validate the advantage provided by our technique by comparing it with conventional stereoscopic PM using a user study on a depth-matching task.
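The core timing step of such a focal-sweep system is solving for the instants at which the swept optical power matches the vergence of the target depth. A minimal sketch, with an assumed sinusoidal sweep and the simplification that the virtual-image vergence equals the ETL power:

```python
import numpy as np

sweep_hz = 60.0          # assumed sweep rate
p0, amp = 3.0, 2.0       # assumed ETL power offset and amplitude (diopters)

def flash_times(target_dist_m):
    """Two solutions per period of p0 + amp*sin(2*pi*f*t) = 1/target_dist."""
    p_needed = 1.0 / target_dist_m        # required vergence in diopters
    s = (p_needed - p0) / amp
    if abs(s) > 1.0:
        raise ValueError("distance outside the sweep range")
    t1 = np.arcsin(s) / (2 * np.pi * sweep_hz)
    t2 = (np.pi - np.arcsin(s)) / (2 * np.pi * sweep_hz)
    return t1 % (1 / sweep_hz), t2 % (1 / sweep_hz)

print(flash_times(0.5))   # projector flash instants (s) for an image at 0.5 m
```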
12. Gao C, Peng Y, Wang R, Zhang Z, Li H, Liu X. Foveated light-field display and real-time rendering for virtual reality. Appl Opt 2021;60:8634-8643. PMID: 34613088. DOI: 10.1364/ao.432911.
Abstract
Glasses-free light-field displays have progressed significantly due to advances in high-resolution microdisplays and high-end graphics processing units (GPUs). However, for near-eye light-field displays requiring portability, a fundamental trade-off in achievable spatial resolution remains: either retinal blur quality must be degraded or computational cost grows, which has prevented high-quality light fields from being synthesized quickly. By integrating off-the-shelf gaze-tracking modules into near-eye light-field displays, we present wearable virtual reality prototypes supporting focus cues oriented to the human visual system. An optimized, foveated light field is delivered to each eye subject to the gaze point, providing more natural visual experiences than state-of-the-art solutions. Importantly, the factorization runtime can be immensely reduced, since the image resolution is high only within the gaze cone. In addition, we demonstrate significant improvements in computation and retinal blur quality over counterpart near-eye displays.
13. Efstathiou C, Draviam VM. Electrically tunable lenses - eliminating mechanical axial movements during high-speed 3D live imaging. J Cell Sci 2021;134:271866. PMID: 34409445. DOI: 10.1242/jcs.258650.
Abstract
The successful investigation of photosensitive and dynamic biological events, such as those in a proliferating tissue or a dividing cell, requires non-intervening high-speed imaging techniques. Electrically tunable lenses (ETLs) are liquid lenses possessing shape-changing capabilities that enable rapid axial shifts of the focal plane, in turn achieving acquisition speeds within the millisecond regime. These human-eye-inspired liquid lenses can enable fast focusing and have been applied in a variety of cell biology studies. Here, we review the history, opportunities and challenges underpinning the use of cost-effective high-speed ETLs. Although other, more expensive solutions for three-dimensional imaging in the millisecond regime are available, ETLs continue to be a powerful, yet inexpensive, contender for live-cell microscopy.
Affiliation(s)
- Christoforos Efstathiou: School of Biological and Chemical Sciences, Queen Mary University of London, London, E1 4NS, UK
- Viji M Draviam: School of Biological and Chemical Sciences, Queen Mary University of London, London, E1 4NS, UK
14. Ueda T, Iwai D, Sato K. IlluminatedZoom: spatially varying magnified vision using periodically zooming eyeglasses and a high-speed projector. Opt Express 2021;29:16377-16395. PMID: 34154202. DOI: 10.1364/oe.427616.
Abstract
Spatial zooming and magnification, which control the size of only a portion of a scene while maintaining its context, is an essential interaction technique in augmented reality (AR) systems. It has been applied in various AR applications, including surgical navigation, visual search support, and human behavior control. However, spatial zooming has been implemented only on video see-through displays and has not been supported by optical see-through displays; achieving spatial zooming of an observed real scene with near-eye optics is not trivial. This paper presents the first optical see-through spatial zooming glasses, which enable interactive control of the perceived sizes of real-world appearances in a spatially varying manner. The key to our technique is the combination of periodically fast zooming eyeglasses and a synchronized high-speed projector. We stack two electrically focus-tunable lenses (ETLs) for each eyeglass and sweep their focal lengths to modulate the magnification periodically from one (unmagnified) to higher (magnified) at 60 Hz, in a manner that prevents a user from perceiving the modulation. We use a 1,000 fps high-speed projector to provide high-resolution spatial illumination of the real scene around the user. A portion of the scene that is to appear magnified is illuminated by the projector when the magnification is greater than one, while the other part is illuminated when the magnification is equal to one. Through experiments, we demonstrate spatial zooming with up to 30% magnification using a prototype system. Our technique has the potential to expand the application field of spatial zooming interaction in optical see-through AR.
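Conceptually, the synchronization reduces to computing, within each sweep period, the time windows during which the instantaneous magnification matches each region's target. A sketch under an assumed cosinusoidal magnification profile (the real profile of two stacked ETLs will differ):

```python
import numpy as np

sweep_hz = 60.0
t = np.linspace(0.0, 1.0 / sweep_hz, 1000, endpoint=False)
mag = 1.0 + 0.15 * (1 - np.cos(2 * np.pi * sweep_hz * t))   # 1.0 .. 1.3 sweep

def illumination_mask(target_mag, tol=0.01):
    """Boolean mask over the period: illuminate while |mag - target| < tol."""
    return np.abs(mag - target_mag) < tol

frames_magnified = illumination_mask(1.3)   # light the to-be-zoomed region
frames_plain = illumination_mask(1.0)       # light the rest at unit zoom
print(frames_magnified.sum(), frames_plain.sum())   # samples per window
```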
15. Xu H, Wang L, Tabata S, Watanabe Y, Ishikawa M. Extended depth-of-field projection method using a high-speed projector with a synchronized oscillating variable-focus lens. Appl Opt 2021;60:3917-3924. PMID: 33983330. DOI: 10.1364/ao.419470.
Abstract
For a projector-based virtual reality (VR) or augmented reality (AR) display, a large depth of field and a high image refresh rate are key to improving the projector's performance. Here, we propose a solution that extends the depth of field of the projection using a variable-focus lens and a high-speed projector, together with a control method that synchronizes the oscillation of the variable-focus lens with the high-speed projector. Experiments confirm that the proposed system can project well-focused, dynamically changeable content on six different planes. The projection range spans 0.3 m to 1.5 m, and the refresh rate is 166.7 Hz.
16. Li Q, He W, Deng H, Zhong FY, Chen Y. High-performance reflection-type augmented reality 3D display using a reflective polarizer. Opt Express 2021;29:9446-9453. PMID: 33820372. DOI: 10.1364/oe.421879.
Abstract
We propose a high-performance reflection-type augmented reality (AR) 3D display using a reflective polarizer (RP). The RP functions both as a reflective imaging device and as an image combiner that merges the real scene with the 3D images reconstructed by the integral imaging display unit. Benefiting from the high-quality imaging of the RP, the proposed reflection-type AR system can achieve a high-definition 3D display. A prototype based on the proposed reflection-type AR structure is developed, and it presents good 3D display and reflection-type AR performance. The prototype is very compact, as thin as 3.4 mm, which makes it a potential candidate for stomatology and vehicle AR displays.
17. Sun X, Zhang Y, Huang PC, Acharjee N, Dagenais M, Peckerar M, Varshney A. Correcting the Proximity Effect in Nanophotonic Phased Arrays. IEEE Trans Vis Comput Graph 2020;26:3503-3513. PMID: 32941146. DOI: 10.1109/tvcg.2020.3023601.
Abstract
Thermally modulated nanophotonic phased arrays (NPAs) can be used as phase-only holographic displays. Compared with holographic displays based on liquid crystal on silicon spatial light modulators (LCoS SLMs), NPAs have the advantages of an integrated light source and a high refresh rate. However, forming the desired wavefront requires accurate modulation of the phase, which is distorted by the thermal proximity effect. This problem has been largely overlooked, and existing approaches to similar problems are either slow or do not produce good results in the setting of NPAs. We propose two new algorithms, based on the iterative phase retrieval algorithm and on the proximal algorithm, to address this challenge. We carried out computational simulations to compare and contrast the algorithms in terms of image quality and computational efficiency. This work will benefit research on NPAs and enable the use of large-scale NPAs as holographic displays.
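One way to picture the proximity-effect problem is as a blur applied to the commanded phase by thermal crosstalk, which can be precompensated by inverting the blur. The sketch below does this with plain gradient descent on a linear convolution model; the Gaussian kernel and the linearity are assumptions, and the paper's phase-retrieval and proximal algorithms are more involved.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size=7, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(x, k):
    return fftconvolve(x, k, mode="same")       # assumed crosstalk model

def precompensate(phase_target, k, iters=200, lr=1.0):
    """Gradient descent on 0.5 * || blur(drive) - phase_target ||^2."""
    drive = phase_target.copy()
    for _ in range(iters):
        residual = blur(drive, k) - phase_target
        drive -= lr * blur(residual, k[::-1, ::-1])   # adjoint = flipped kernel
    return drive

target = np.random.rand(64, 64) * 2 * np.pi     # desired phase profile
drive = precompensate(target, gaussian_kernel())
print(np.abs(blur(drive, gaussian_kernel()) - target).max())
```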
18. Chang C, Bang K, Wetzstein G, Lee B, Gao L. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. Optica 2020;7:1563-1578. PMID: 34141829. PMCID: PMC8208705. DOI: 10.1364/optica.406004.
Abstract
Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human-computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.
Affiliation(s)
- Chenliang Chang: Department of Bioengineering, University of California, Los Angeles, 410 Westwood Plaza, Los Angeles, California 90095, USA
- Kiseung Bang: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Republic of Korea
- Gordon Wetzstein: Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, California 94305, USA
- Byoungho Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Republic of Korea
- Liang Gao (corresponding author): Department of Bioengineering, University of California, Los Angeles, 410 Westwood Plaza, Los Angeles, California 90095, USA
19. Light source optimization for partially coherent holographic displays with consideration of speckle contrast, resolution, and depth of field. Sci Rep 2020;10:18832. PMID: 33139826. PMCID: PMC7606540. DOI: 10.1038/s41598-020-75947-0.
Abstract
Speckle reduction is an important topic in holographic displays, as speckles not only reduce the signal-to-noise ratio but also pose an eye-safety issue. Despite thorough exploration of speckle reduction methods using partially coherent light sources, the trade-offs introduced by partial coherence have not been thoroughly discussed. Here, we introduce theoretical models that quantify the effects of partial coherence on resolution and speckle contrast. The models allow us to find an optimal light source that maximizes speckle reduction while minimizing the degradation of the other terms. We implement benchtop prototypes of partially coherent holographic displays using the optimal light source and verify the theoretical models via simulation and experiment. We also present a criterion for evaluating the depth of field in partially coherent holographic displays, and conclude with a discussion of the approximations and limitations inherent in the theoretical models.
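The classic statistic underlying such models is that averaging N mutually incoherent speckle patterns lowers the speckle contrast roughly as 1/sqrt(N). A quick numerical check with an idealized fully developed speckle model (not the paper's source model):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_pattern(n=256):
    # Fully developed speckle: unit-amplitude random-phase field, far-field
    # intensity via FFT (each output bin is a sum of random phasors).
    field = np.exp(1j * 2 * np.pi * rng.random((n, n)))
    return np.abs(np.fft.fft2(field))**2

for n_modes in [1, 4, 16, 64]:
    intensity = np.mean([speckle_pattern() for _ in range(n_modes)], axis=0)
    contrast = intensity.std() / intensity.mean()
    print(f"N={n_modes:3d}  contrast={contrast:.3f}  1/sqrt(N)={n_modes**-0.5:.3f}")
```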
20. Krajancich B, Padmanaban N, Wetzstein G. Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display. IEEE Trans Vis Comput Graph 2020;26:1871-1879. PMID: 32070978. DOI: 10.1109/tvcg.2020.2973443.
Abstract
Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.
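The factorization at the heart of this approach can be prototyped with simple alternating updates: solve for LED colors by least squares given the binary patterns, then update each binary pattern pixel-wise. The sketch below is such a naive alternation under assumed dimensions, not the authors' mixed binary/continuous optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

def factorize(target, T=8, iters=50):
    """Approximate target (pixels x 3) as sum_t binary_pattern_t * led_color_t."""
    P, C = target.shape
    B = rng.random((P, T)) > 0.5                  # binary DMD subframes
    led = rng.random((T, C))                      # LED color per subframe
    for _ in range(iters):
        # Continuous step: least-squares LED colors for fixed patterns.
        led, *_ = np.linalg.lstsq(B.astype(float), target, rcond=None)
        led = np.clip(led, 0.0, None)             # physically nonnegative
        # Binary step: per pixel, keep the mirror 'on' iff it lowers the error.
        for k in range(T):
            others = B.astype(float) @ led - np.outer(B[:, k], led[k])
            resid = target - others
            B[:, k] = resid @ led[k] > 0.5 * (led[k] @ led[k])
    return B, led

target = rng.random((32 * 32, 3))                 # toy target AR scene
B, led = factorize(target)
print(np.abs(B.astype(float) @ led - target).mean())
```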
21. Ueda T, Iwai D, Hiraki T, Sato K. IlluminatedFocus: Vision Augmentation using Spatial Defocusing via Focal Sweep Eyeglasses and High-Speed Projector. IEEE Trans Vis Comput Graph 2020;26:2051-2061. PMID: 32078550. DOI: 10.1109/tvcg.2020.2973496.
Abstract
Aiming to realize novel vision augmentation experiences, this paper proposes the IlluminatedFocus technique, which spatially defocuses real-world appearances regardless of the distance from the user's eyes to the observed real objects. With the proposed technique, part of a real object appears blurred while the fine details of another part at the same distance remain visible. We apply electrically focus-tunable lenses (ETLs) as eyeglasses and a synchronized high-speed projector as illumination for the real scene. We periodically modulate the focal lengths of the glasses (focal sweep) at more than 60 Hz so that a wearer cannot perceive the modulation. A part of the scene that should appear focused is illuminated by the projector when it is in focus for the user's eyes, while a part that should appear blurred is illuminated when it is out of focus. As the basis of our spatial focus control, we build mathematical models that predict the range of distances from the ETL within which real objects appear blurred on the user's retina. Based on this blur range, we discuss a design guideline for effective illumination timing and focal sweep range. We also model the apparent size of the real scene as altered by the focal length modulation, which otherwise produces an undesirable visible seam between focused and blurred areas; we solve this unique problem by gradually blending the two areas. Finally, we demonstrate the feasibility of our proposal by implementing various vision augmentation applications.
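A hedged stand-in for such a blur-range model: with an ETL of power P in front of an eye accommodated at vergence A, an object at distance d is defocused by |1/d + P - A| diopters, and the retinal blur angle is approximately the pupil diameter times that defocus. All thresholds and parameters below are assumptions, not the paper's values:

```python
import numpy as np

pupil_m = 4e-3                 # assumed pupil diameter
blur_threshold_rad = 3e-4      # assumed just-noticeable blur angle

def is_blurred(dist_m, etl_power_d, eye_accom_d):
    defocus_d = abs(1.0 / dist_m + etl_power_d - eye_accom_d)
    return pupil_m * defocus_d > blur_threshold_rad   # small-angle model

# Sweep the ETL: an object at 1 m appears sharp only near the instant when
# the combined power matches the eye's accommodation.
for p in np.linspace(-1.0, 1.0, 5):
    print(f"ETL {p:+.1f} D -> blurred: {is_blurred(1.0, p, eye_accom_d=1.0)}")
```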
22. Yoo C, Chae M, Moon S, Lee B. Retinal projection type lightguide-based near-eye display with switchable viewpoints. Opt Express 2020;28:3116-3135. PMID: 32121986. DOI: 10.1364/oe.383386.
Abstract
We present a retinal-projection-based near-eye display with switchable multiple viewpoints enabled by polarization multiplexing. Active switching of viewpoints is provided by a polarization grating, multiplexed holographic optical elements, and a polarization-dependent eyepiece lens, which together generate one of two focus groups according to the pupil position. The lightguide-combined optical devices have the potential to enable a wide field of view (FOV) and a short eye relief in a compact form factor. The proposed system supports pupil movement within an extended eyebox and mitigates the image artifacts caused by duplicated viewpoints. We discuss the optical design of the guiding system and demonstrate that the proof-of-concept system provides all-in-focus images with a 37-degree FOV and a 16-mm horizontal eyebox.
23. Akşit K. Patch scanning displays: spatiotemporal enhancement for displays. Opt Express 2020;28:2107-2121. PMID: 32121908. DOI: 10.1364/oe.380858.
Abstract
The emerging fields of mixed reality and electronic sports demand greater spatial and temporal resolution from displays. We introduce a novel scanning display method that enhances the spatiotemporal qualities of displays. Specifically, we demonstrate that scanning multiple image patches that represent basis functions of each block in a target image can synthesize spatiotemporally enhanced visuals. To discover the right image patches, we introduce an optimization framework tailored to our hardware. In our method, spatiotemporally enhanced visuals are synthesized by an optical scanner that scans image patches from an image generator illuminated by a locally addressable backlight. We validate the method with a prototype built from commodity equipment. Our method improves the pixel fill factor to one hundred percent and enhances the spatial resolution of a display by up to four times, relaxing an inherent constraint on the spatiotemporal qualities of displays.
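As a toy version of per-block basis decomposition, truncated SVD already yields a few displayable patches per block plus weights; the paper instead discovers patches with an optimization framework tailored to its scanning hardware, so treat this only as an illustration of the block/basis idea.

```python
import numpy as np

def block_bases(image, block=16, rank=3):
    """Decompose each block into `rank` basis patches and their weights."""
    H, W = image.shape
    patches, weights = [], []
    for i in range(0, H, block):
        for j in range(0, W, block):
            blk = image[i:i+block, j:j+block]
            U, S, Vt = np.linalg.svd(blk, full_matrices=False)
            # Each rank-1 term is one scannable patch plus its weight.
            patches.append([np.outer(U[:, r], Vt[r]) for r in range(rank)])
            weights.append(S[:rank])
    return patches, weights

img = np.random.rand(64, 64)
patches, weights = block_bases(img)
# Reconstruct one block from its scanned patches:
blk0 = sum(w * p for w, p in zip(weights[0], patches[0]))
```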
24. Xia X, Guan Y, State A, Chakravarthula P, Rathinavel K, Cham TJ, Fuchs H. Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support. IEEE Trans Vis Comput Graph 2019;25:3114-3124. PMID: 31403422. DOI: 10.1109/tvcg.2019.2932238.
Abstract
In this paper, we present a novel design for switchable AR/VR near-eye displays that helps resolve the vergence-accommodation conflict. The principal idea is to time-multiplex virtual imagery and real-world imagery and to use a tunable lens to adjust focus for the virtual display and the see-through scene separately. With this design, prescription eyeglasses for near- and far-sighted users become unnecessary, because the wearer's corrective optical prescription is integrated into the tunable lens for both the virtual display and the see-through environment. We built a prototype based on the design, comprising a microdisplay, optical systems, a tunable lens, and active shutters. The experimental results confirm that the proposed near-eye display can switch between AR and VR and provides correct accommodation for both.
25. Rathinavel K, Wetzstein G, Fuchs H. Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics. IEEE Trans Vis Comput Graph 2019;25:3125-3134. PMID: 31502977. DOI: 10.1109/tvcg.2019.2933120.
Abstract
Optical see-through augmented reality (AR) systems are a next-generation computing platform that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly improved over the last few years, but AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported. When mutual occlusion is supported, it is only supported for a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.
26. Iwai D, Izawa H, Kashima K, Ueda T, Sato K. Speeded-Up Focus Control of Electrically Tunable Lens by Sparse Optimization. Sci Rep 2019;9:12365. PMID: 31451748. PMCID: PMC6710262. DOI: 10.1038/s41598-019-48900-z.
Abstract
Electrically tunable lenses (ETLs), also known as liquid lenses, can be focused at various distances by changing the electrical signal applied to the lens. ETLs require no mechanical structures and therefore provide more compact and inexpensive focus control than conventional computerized translation stages. They have been exploited in a wide range of imaging and display systems and have enabled novel applications over the last several years. However, the optical fluid in an ETL ripples after actuation, which physically limits the response time and significantly narrows the applicable range. To alleviate this problem, we apply a sparse optimization framework that optimizes the temporal pattern of the electrical signal input to the ETL. In verification experiments, the proposed method accelerated the convergence of the focal length to the target patterns: it converged the optical power to the target at twice the speed of a simply determined input signal and increased the quality of the captured image during multi-focal imaging.
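A minimal sketch of the idea, assuming a damped second-order model for the lens and an L1 penalty that keeps the optimized drive close to a plain step except at a few samples; the paper's system model and objective differ, so this only illustrates the sparse-optimization flavor.

```python
import numpy as np
from scipy.linalg import toeplitz

dt, n = 1e-4, 400
t = np.arange(n) * dt
wn, zeta = 2 * np.pi * 300.0, 0.12          # assumed resonance and damping
wd = wn * np.sqrt(1 - zeta**2)
h = (wn**2 / wd) * np.exp(-zeta * wn * t) * np.sin(wd * t) * dt  # impulse resp.
H = toeplitz(h, np.zeros(n))                # discrete convolution operator

y_ref = np.ones(n)                          # reach and hold the target power
u_step = np.ones(n)                         # the plain step drive
u = u_step.copy()
L = np.linalg.norm(H, 2) ** 2               # Lipschitz constant of the gradient
lam = 1e-3
for _ in range(300):                        # ISTA: gradient step + shrinkage
    z = u - (H.T @ (H @ u - y_ref)) / L
    d = z - u_step                          # deviation from the plain step
    u = u_step + np.sign(d) * np.maximum(np.abs(d) - lam / L, 0.0)

print(np.abs(H @ u - y_ref)[50:].max())     # residual ringing after 5 ms
```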
Affiliation(s)
- Daisuke Iwai: Osaka University, Graduate School of Engineering Science, Toyonaka, 560-8531, Japan
- Hidetoshi Izawa: Osaka University, Graduate School of Engineering Science, Toyonaka, 560-8531, Japan
- Kenji Kashima: Kyoto University, Graduate School of Informatics, Kyoto, 606-8501, Japan
- Tatsuyuki Ueda: Osaka University, Graduate School of Engineering Science, Toyonaka, 560-8531, Japan
- Kosuke Sato: Osaka University, Graduate School of Engineering Science, Toyonaka, 560-8531, Japan
27. Choi S, Lee S, Jo Y, Yoo D, Kim D, Lee B. Optimal binary representation via non-convex optimization on tomographic displays. Opt Express 2019;27:24362-24381. PMID: 31510326. DOI: 10.1364/oe.27.024362.
Abstract
There have been many recent developments in 3D display technology to provide correct accommodation cues over an extended focus range. To this end, such displays rely on scene decomposition algorithms to reproduce accurate occlusion boundaries as well as retinal defocus blur. Recently, tomographic displays have been proposed with improved trade-offs among focus range, spatial resolution, and exit pupil. The advantage of the system partly stems from a high-speed backlight modulation system based on a digital micromirror device, which supports only 1-bit images. However, this inherent binary constraint hinders optimal scene decomposition, leaving boundary artifacts. In this work, we present a technique for synthesizing optimal imagery of general 3D scenes with occlusion on tomographic displays. Requiring no prior knowledge of the scene geometry, our technique addresses the blending issue via non-convex optimization, inspired by recent studies in discrete tomography. We also present a general framework for this rendering algorithm and demonstrate the utility of the technique for volumetric display systems with binary representation.
28. Lee S, Jo Y, Yoo D, Cho J, Lee D, Lee B. Tomographic near-eye displays. Nat Commun 2019;10:2497. PMID: 31175279. PMCID: PMC6555831. DOI: 10.1038/s41467-019-10451-2.
Abstract
The ultimate 3D display should provide both psychological and physiological cues for depth recognition. However, it has been challenging to satisfy these essential features without sacrificing resolution, frame rate, or eye box. Here, we present a tomographic near-eye display that supports a wide depth of field, quasi-continuous accommodation, omni-directional motion parallax, preserved resolution, full frame rate, and a moderate field of view within a sufficient eye box. The tomographic display consists of focus-tunable optics, a display panel, and a fast, spatially adjustable backlight. Synchronizing the focus-tunable optics with the backlight enables the display panel to express depth information. We implement a benchtop prototype near-eye display, the most promising application of tomographic displays, and conclude with a detailed analysis and thorough discussion of the optimal volumetric reconstruction of tomographic displays.
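The synchronization can be summarized as: quantize the depth map into the focal planes visited by the tunable optics, and flash one binary backlight mask per plane at the matching instant of the sweep. A minimal sketch with assumed plane count and depth range:

```python
import numpy as np

def backlight_masks(depth_map_d, plane_vergences_d):
    """One boolean mask per focal plane (nearest-plane assignment)."""
    idx = np.argmin(np.abs(depth_map_d[None] - plane_vergences_d[:, None, None]),
                    axis=0)
    return [idx == k for k in range(len(plane_vergences_d))]

planes = np.linspace(0.0, 5.0, 80)              # 80 planes, infinity to 0.2 m
depth = np.random.uniform(0.0, 5.0, (64, 64))   # per-pixel vergence (diopters)
masks = backlight_masks(depth, planes)
# During one sweep, mask k is flashed exactly when the tunable lens places
# the panel's virtual image at vergence planes[k].
assert np.sum(masks, axis=0).max() == 1         # each pixel lit on one plane
```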
Affiliation(s)
- Seungjae Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
- Youngjin Jo: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
- Dongheon Yoo: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
- Jaebum Cho: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
- Dukho Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
- Byoungho Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, Republic of Korea
29. Cui W, Gao L. All-passive transformable optical mapping near-eye display. Sci Rep 2019;9:6064. PMID: 30988506. PMCID: PMC6465389. DOI: 10.1038/s41598-019-42507-0.
Abstract
We present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle. By employing a diffractive optical element, a distorted grating, the ATOM display can project different portions of a two-dimensional display screen to various depths, rendering a real three-dimensional image with correct focus cues. Thanks to its all-passive optical mapping architecture, the ATOM display features a reduced form factor and low power consumption. Moreover, the system can readily switch between a real-three-dimensional and a high-resolution two-dimensional display mode, providing task-tailored viewing experience for a variety of VR/AR applications.
Affiliation(s)
- Wei Cui: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, 61801, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, 61801, IL, USA
- Liang Gao: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, 61801, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, 61801, IL, USA