1. Grubert J, Witzani L, Otte A, Gesslein T, Kranz M, Kristensson PO. Text Entry Performance and Situation Awareness of a Joint Optical See-Through Head-Mounted Display and Smartphone System. IEEE Transactions on Visualization and Computer Graphics 2024;30:5830-5846. PMID: 37639421. DOI: 10.1109/tvcg.2023.3309316.
Abstract
Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts, but attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicates that, as of today, the challenges in such a joint interactive system outweigh the potential benefits.
2. Zhao N, Xiao J, Weng P, Zhang H. Tomographic waveguide-based augmented reality display. Optics Express 2024;32:18692-18699. PMID: 38859019. DOI: 10.1364/oe.524983.
Abstract
A tomographic waveguide-based augmented reality display technique is proposed for near-eye three-dimensional (3D) display with accurate depth reconstructions. A pair of tunable lenses with complementary focuses is utilized to project tomographic virtual 3D images while maintaining the correct perception of the real scene. This approach reconstructs virtual 3D images with physical depth cues, thereby addressing the vergence-accommodation conflict inherent in waveguide augmented reality systems. A prototype has been constructed and optical experiments have been conducted, demonstrating the system's capability in delivering high-quality 3D scenes for waveguide-based augmented reality display.
3. Velez-Zea A, Barrera-Ramírez JF. Color multilayer holographic near-eye augmented reality display. Scientific Reports 2023;13:10651. PMID: 37391489. DOI: 10.1038/s41598-023-36128-x.
Abstract
This study demonstrates a full-color near-eye holographic display that superimposes color virtual scenes, comprising 2D, 3D, and multiple extended-depth objects, onto a real scene, and that can present different 3D information depending on the focus of the user's eyes, all from a single computer-generated hologram per color channel. Our setup uses a hologram generation method based on two-step propagation and the singular value decomposition of the Fresnel transform impulse response function to efficiently generate the holograms of the target scene. We then test our proposal by implementing a holographic display that uses a phase-only spatial light modulator and time-division multiplexing for color reproduction. We demonstrate the superior quality and computation speed of this approach compared with other hologram generation techniques, with both numerical and experimental results.
Affiliation(s)
- Alejandro Velez-Zea
- Grupo de Óptica y Fotónica, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia.
- John Fredy Barrera-Ramírez
- Grupo de Óptica y Fotónica, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia
4. Ebner C, Mohr P, Langlotz T, Peng Y, Schmalstieg D, Wetzstein G, Kalkofen D. Off-Axis Layered Displays: Hybrid Direct-View/Near-Eye Mixed Reality with Focus Cues. IEEE Transactions on Visualization and Computer Graphics 2023;PP:2816-2825. PMID: 37027729. DOI: 10.1109/tvcg.2023.3247077.
Abstract
This work introduces off-axis layered displays, the first approach to stereoscopic direct-view displays with support for focus cues. Off-axis layered displays combine a head-mounted display with a traditional direct-view display to encode a focal stack and thereby provide focus cues. To explore this novel display architecture, we present a complete processing pipeline for the real-time computation and post-render warping of off-axis display patterns. We build two prototypes, one using a head-mounted display in combination with a stereoscopic direct-view display, and another using a more widely available monoscopic direct-view display. In addition, we show how extending off-axis layered displays with an attenuation layer and with eye tracking can improve image quality. We thoroughly analyze each component in a technical evaluation and present examples captured through our prototypes.
5. Qiu Y, Zhao Z, Yang J, Cheng Y, Liu Y, Yang BR, Qin Z. Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation. Optics Express 2023;31:6262-6280. PMID: 36823887. DOI: 10.1364/oe.485547.
Abstract
Vision-correcting near-eye displays are needed for the large population with refractive errors. However, varifocal optics cannot effectively address astigmatism (AST) and high-order aberrations (HOAs), and freeform optics offer little prescription flexibility. A computational solution is therefore desired to correct AST and HOAs with high prescription flexibility and no increase in volume or hardware complexity; in addition, its computational cost should permit real-time rendering. We propose that a light field display can achieve such computational vision correction by manipulating sampling rays so that the rays forming a voxel are re-focused on the retina. The ray manipulation merely requires updating the elemental image array (EIA), making it a fully computational solution. The correction is first calculated from the eye's wavefront map and then refined by a simulator performing iterative optimization with a schematic eye model. Using examples of HOAs and AST, we demonstrate that corrected EIAs make sampling rays distribute within ±1 arcmin on the retina. Correspondingly, the synthesized image is recovered to nearly as clear as normal vision. We also propose a voxel-based EIA generation method that accounts for computational complexity: all voxel positions and the mapping between voxels and their homogeneous pixels are acquired in advance and stored in a lookup table, yielding an ultra-fast rendering speed of 10 ms per frame at no cost in computing hardware or rendering accuracy. Finally, experimental verification is carried out by introducing HOAs and AST with customized lenses in front of a camera; significantly recovered images are reported.
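The lookup-table rendering idea in this abstract can be sketched in a few lines: the voxel-to-pixel mapping is computed once offline, so per-frame rendering is just a scatter of colors. The sketch below uses a toy 1D display and random indices in place of the actual microlens-array geometry; all names and sizes are illustrative, not the paper's implementation.

```python
import numpy as np

def build_lut(num_voxels, rays_per_voxel, display_size, rng):
    # Precompute which display pixels each voxel illuminates. In a real
    # system this mapping comes from tracing rays through the microlens
    # array; random indices stand in for that geometry here.
    return rng.integers(0, display_size, size=(num_voxels, rays_per_voxel))

def render_eia(voxel_colors, lut, display_size):
    # Per frame: scatter each voxel's color to all of its homogeneous pixels.
    eia = np.zeros(display_size)
    for v, color in enumerate(voxel_colors):
        eia[lut[v]] = color
    return eia

rng = np.random.default_rng(0)
lut = build_lut(num_voxels=100, rays_per_voxel=45, display_size=10_000, rng=rng)
frame = render_eia(np.linspace(0.1, 1.0, 100), lut, 10_000)
print(frame.shape)
```

Because the expensive ray geometry lives entirely in the precomputed table, the per-frame cost is linear in the number of voxel-ray pairs, which is what makes the reported 10 ms frame time plausible.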
6. Aizenman AM, Koulieris GA, Gibaldi A, Sehgal V, Levi DM, Banks MS. The Statistics of Eye Movements and Binocular Disparities during VR Gaming: Implications for Headset Design. ACM Transactions on Graphics 2023;42:7. PMID: 37122317. PMCID: PMC10139447. DOI: 10.1145/3549529.
Abstract
The human visual system evolved in environments with statistical regularities. Binocular vision is adapted to these regularities such that depth perception and eye movements are more precise, faster, and comfortable in environments consistent with them. We measured the statistics of eye movements and binocular disparities in virtual-reality (VR) gaming environments and found that they are quite different from those in the natural environment. Fixation distance and direction are more restricted in VR, and fixation distance is farther. The pattern of disparity across the visual field is less regular in VR and does not conform to a prominent property of naturally occurring disparities. From this we predict that double vision is more likely in VR than in the natural environment. We also determined the optimal screen distance to minimize discomfort due to the vergence-accommodation conflict, and the optimal nasal-temporal positioning of head-mounted display (HMD) screens to maximize the binocular field of view. Finally, in a user study we investigated how VR content affects comfort and performance. Content that is more consistent with the statistics of the natural world yields less discomfort, and slightly better performance, than content that is not.
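The screen-distance optimization mentioned here can be illustrated with simple dioptric arithmetic: the vergence-accommodation conflict for fixation at distance d (meters) with a screen at distance s is |1/d - 1/s| diopters. The sketch below minimizes the worst-case conflict over a synthetic set of fixation distances (not the paper's measured VR statistics); it only illustrates the reasoning, not the authors' actual procedure.

```python
import numpy as np

# Hypothetical fixation distances in meters; the paper uses measured
# VR-gaming statistics instead.
fixations_m = np.array([0.5, 0.8, 1.2, 2.0, 4.0, 10.0])
fix_diopters = 1.0 / fixations_m

# Search candidate screen distances for the one minimizing the worst-case
# vergence-accommodation conflict (in diopters).
candidates = np.linspace(0.2, 4.0, 2000)
conflict = np.abs(fix_diopters[None, :] - (1.0 / candidates)[:, None])
best_screen = candidates[np.argmin(conflict.max(axis=1))]

# Minimizing the max conflict places 1/s at the midpoint of the dioptric range.
midpoint_d = (fix_diopters.min() + fix_diopters.max()) / 2
print(best_screen, 1.0 / midpoint_d)
```

The closed-form check at the end shows why the answer is robust: for a worst-case criterion, only the nearest and farthest fixations matter, so the optimal screen vergence is their dioptric midpoint.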
7. Teng D, Lai C, Song Q, Yang X, Liu L. Super multi-view near-eye virtual reality with directional backlights from wave-guides. Optics Express 2023;31:1721-1736. PMID: 36785201. DOI: 10.1364/oe.478267.
Abstract
Directional backlights are often employed to generate multiple view zones in three-dimensional (3D) displays, with each backlight converging into a corresponding view zone. By designing the view-zone interval at each pupil to be smaller than the pupil's diameter, super multi-view (SMV) can be implemented for a VAC-free 3D display. However, expanding the backlight from a light source to cover the corresponding display panel often requires extra thickness, which results in a bulkier structure and is undesirable in a near-eye display. In this paper, two waveguides are introduced into a near-eye virtual reality (NEVR) system to sequentially guide more than one directional backlight to each display panel for SMV display without adding obvious extra thickness. A prototype SMV NEVR is demonstrated, with two backlights from each waveguide converging into two view zones for the corresponding pupil. Although the additional light sources are positioned far from the corresponding waveguide in our proof-of-concept prototype, multiple light sources can be attached to the waveguide compactly if necessary. As proof, a 3D scene with defocus-blur effects is displayed. The design range of the backlights' total-reflection angles in the waveguide is also discussed.
8. Hiroi Y, Someya K, Itoh Y. Neural distortion fields for spatial calibration of wide field-of-view near-eye displays. Optics Express 2022;30:40628-40644. PMID: 36298994. DOI: 10.1364/oe.472288.
Abstract
We propose a spatial calibration method for wide field-of-view (FoV) near-eye displays (NEDs) with complex image distortions. Image distortions in NEDs can destroy the realism of virtual objects and cause sickness. To achieve distortion-free images in NEDs, it is necessary to establish a pixel-by-pixel correspondence between the viewpoint and the displayed image. Designing compact, wide-FoV NEDs requires complex optical designs, in which the displayed images are subject to gaze-contingent, non-linear geometric distortions that explicit geometric models can find difficult to represent or computationally intensive to optimize. To solve these problems, we propose the neural distortion field (NDF), a fully connected deep neural network that implicitly represents display surfaces complexly distorted in space. NDF takes a spatial position and gaze direction as input and outputs the display pixel coordinate and its intensity as perceived in that gaze direction. We synthesize the distortion map from a novel viewpoint by querying points on the ray from the viewpoint and computing a weighted sum to project output display coordinates into an image. Experiments showed that NDF calibrates an augmented reality NED with a 90° FoV with about 3.23 pixels (5.8 arcmin) median error using only eight training viewpoints. Additionally, we confirmed that NDF calibrates more accurately than non-linear polynomial fitting, especially around the center of the FoV.
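The input/output interface of such a network is easy to make concrete. The sketch below is a minimal fully connected forward pass mapping (3D position, gaze direction) to (pixel coordinate, intensity); the weights are random and the layer sizes are illustrative assumptions, since the paper's trained architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_init(sizes):
    # Random weights stand in for a network trained on calibration views.
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def ndf_forward(params, pos, gaze):
    x = np.concatenate([pos, gaze])              # input: position (3) + gaze direction (3)
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)           # ReLU hidden layers
    W, b = params[-1]
    out = x @ W + b                              # raw output: (u, v, intensity logit)
    intensity = 1.0 / (1.0 + np.exp(-out[2]))    # squash intensity into (0, 1)
    return np.array([out[0], out[1], intensity])

params = mlp_init([6, 64, 64, 3])
sample = ndf_forward(params, pos=np.array([0.0, 0.0, -0.1]),
                     gaze=np.array([0.0, 0.0, 1.0]))
print(sample.shape)
```

Rendering a distortion map from a new viewpoint then amounts to evaluating this function at many sample points along each gaze ray and accumulating a weighted sum, as the abstract describes.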
9. Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Transactions on Visualization and Computer Graphics 2022;28:2550-2562. PMID: 33170780. DOI: 10.1109/tvcg.2020.3037284.
Abstract
Head-mounted loupes can increase the user's visual acuity for observing the details of an object, while optical see-through head-mounted displays (OST-HMDs) can provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality within the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, with binocular Galilean magnifying loupes via customized 3D-printed attachments. We model the combination of the user's eye, the OST-HMD screen, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With real objects enlarged through optical magnification and augmentations registered to them, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
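The pinhole-camera modeling behind SPAAM-style calibration reduces to a direct linear transform: each 2D-3D alignment contributes two linear equations on the 3x4 projection matrix, which is recovered from the null space of the stacked system. The sketch below uses synthetic correspondences and a hypothetical camera; it shows the standard DLT formulation, not necessarily the paper's exact Stereo-SPAAM implementation (which additionally couples the two eyes).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground-truth pinhole model: intrinsics K, identity rotation,
# translation 3 m along the optical axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [3.0]])])
P_true = K @ Rt

# Simulated alignments: 3D points and the 2D screen points they project to.
pts3d = rng.uniform(-1.0, 1.0, (12, 3))
X = np.hstack([pts3d, np.ones((12, 1))])           # homogeneous 3D points
x = (P_true @ X.T).T
uv = x[:, :2] / x[:, 2:3]

# Direct linear transform: two equations per correspondence, then take the
# null space of the 24 x 12 system via SVD.
rows = []
for (u, v), Xi in zip(uv, X):
    rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
    rows.append(np.concatenate([np.zeros(4), Xi, -v * Xi]))
P_est = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 4)

# P is only recovered up to scale, so validate by reprojection error.
x_est = (P_est @ X.T).T
uv_est = x_est[:, :2] / x_est[:, 2:3]
err = np.abs(uv_est - uv).max()
print(err)
```

In an actual SPAAM session the 2D points come from the user aligning an on-screen crosshair with a tracked physical point, so the system is solved with many noisy correspondences rather than exact ones.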
10. Ye B, Fujimoto Y, Uchimine Y, Sawabe T, Kanbara M, Kato H. Cross-talk elimination for lenslet array near eye display based on eye-gaze tracking. Optics Express 2022;30:16196-16216. PMID: 36221469. DOI: 10.1364/oe.455482.
Abstract
Lenslet array (LA) near-eye displays (NEDs) are a recent development that creates a virtual image in the field of view of one or both eyes. A problem occurs when the user's pupil moves out of the LA-NED eye box (i.e., cross-talk), making the image look doubled or ghosted and degrading the user experience. Although eye-gaze tracking can mitigate this problem, the effectiveness of such a solution has not been studied with respect to pupil size and human perception. In this paper, we redefine the cross-talk region as the practical pupil movable region (PPMR50), which differs from the eye-box size because it considers pupil size and human visual perception. To evaluate the effect of eye-gaze tracking on subjective image quality, three user studies were conducted. The results show that PPMR50 is consistent with human perception and that cross-talk elimination via eye-gaze tracking is better understood in a static gaze scenario. Although system latency prevented the complete elimination of cross-talk for fast eye movements or large pupil changes, the problem was greatly alleviated. We also analyzed system delays based on the newly defined PPMR50 and provide an optimization scheme to meet the maximum eyeball rotation speed.
11. Ebner C, Mori S, Mohr P, Peng Y, Schmalstieg D, Wetzstein G, Kalkofen D. Video See-Through Mixed Reality with Focus Cues. IEEE Transactions on Visualization and Computer Graphics 2022;28:2256-2266. PMID: 35167471. DOI: 10.1109/tvcg.2022.3150504.
Abstract
This work introduces the first approach to video see-through mixed reality with full support for focus cues. By combining the flexibility to adjust the focus distance found in varifocal designs with the robustness to eye-tracking error found in multifocal designs, our novel display architecture reliably delivers focus cues over a large workspace. In particular, we introduce gaze-contingent layered displays and mixed reality focal stacks, an efficient representation of mixed reality content that lends itself to fast processing for driving layered displays in real time. We thoroughly evaluate this approach by building a complete end-to-end pipeline for the capture, rendering, and display of focus cues in video see-through displays that uses only off-the-shelf hardware and compute components.
12. Yoo D, Lee S, Jo Y, Cho J, Choi S, Lee B. Volumetric Head-Mounted Display With Locally Adaptive Focal Blocks. IEEE Transactions on Visualization and Computer Graphics 2022;28:1415-1427. PMID: 32746283. DOI: 10.1109/tvcg.2020.3011468.
Abstract
A commercial head-mounted display (HMD) for virtual reality (VR) presents three-dimensional imagery with a fixed focal distance. The VR HMD with a fixed focus can cause visual discomfort to an observer. In this article, we propose a novel design of a compact VR HMD supporting near-correct focus cues over a wide depth of field (from 18 cm to optical infinity). The proposed HMD consists of a low-resolution binary backlight, a liquid crystal display panel, and focus-tunable lenses. In the proposed system, the backlight locally illuminates the display panel that is floated by the focus-tunable lens at a specific distance. The illumination moment and the focus-tunable lens' focal power are synchronized to generate focal blocks at the desired distances. The distance of each focal block is determined by depth information of three-dimensional imagery to provide near-correct focus cues. We evaluate the focus cue fidelity of the proposed system considering the fill factor and resolution of the backlight. Finally, we verify the display performance with experimental results.
13. Kimura S, Iwai D, Punpongsanon P, Sato K. Multifocal Stereoscopic Projection Mapping. IEEE Transactions on Visualization and Computer Graphics 2021;27:4256-4266. PMID: 34449374. DOI: 10.1109/tvcg.2021.3106486.
Abstract
Stereoscopic projection mapping (PM) allows a user to see a three-dimensional (3D) computer-generated (CG) object floating over physical surfaces of arbitrary shapes around us using projected imagery. However, the current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues, which causes a vergence-accommodation conflict (VAC). Therefore, we propose a multifocal approach to mitigate VAC in stereoscopic PM. Our primary technical contribution is to attach electrically focus-tunable lenses (ETLs) to active shutter glasses to control both vergence and accommodation. Specifically, we apply fast and periodical focal sweeps to the ETLs, which causes the "virtual image" (as an optical term) of a scene observed through the ETLs to move back and forth during each sweep period. A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance. This provides an observer with the correct focus cues required. In this study, we solve three technical issues that are unique to stereoscopic PM: (1) The 3D CG object is displayed on non-planar and even moving surfaces; (2) the physical surfaces need to be shown without the focus modulation; (3) the shutter glasses additionally need to be synchronized with the ETLs and the projector. We also develop a novel compensation technique to deal with the "lens breathing" artifact that varies the retinal size of the virtual image through focal length modulation. Further, using a proof-of-concept prototype, we demonstrate that our technique can present the virtual image of a target 3D CG object at the correct depth. Finally, we validate the advantage provided by our technique by comparing it with conventional stereoscopic PM using a user study on a depth-matching task.
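The focal-sweep synchronization described above rests on simple vergence arithmetic: viewing a surface at distance d_s through a tunable lens of power P (diopters) moves its virtual image to d_v = 1/(1/d_s - P), so the projector fires at the instant in the sweep when d_v equals the desired depth. The sketch below uses a thin-lens model and illustrative distances; the real system also compensates lens breathing, which is not modeled here.

```python
def virtual_image_distance(d_surface, power):
    # Vergence arithmetic: light diverging from a surface at d_surface meters
    # has vergence -1/d_surface at the lens; adding lens power P gives
    # -1/d_v = -1/d_surface + P, i.e. d_v = 1 / (1/d_surface - P).
    return 1.0 / (1.0 / d_surface - power)

def power_for_target(d_surface, d_target):
    # ETL power at which the surface's virtual image lands at d_target.
    return 1.0 / d_surface - 1.0 / d_target

d_surface = 1.0    # projection surface 1 m away (hypothetical)
d_target = 2.5     # desired virtual-image depth in meters (hypothetical)
p = power_for_target(d_surface, d_target)
print(p, virtual_image_distance(d_surface, p))
```

During a periodic sweep of P, this mapping determines the trigger time for each depth; displaying the physical surface itself without modulation then corresponds to the instant when P passes through zero.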
14. Zhang Y, Hu X, Kiyokawa K, Isoyama N, Sakata N, Hua H. Optical see-through augmented reality displays with wide field of view and hard-edge occlusion by using paired conical reflectors. Optics Letters 2021;46:4208-4211. PMID: 34469976. DOI: 10.1364/ol.428714.
Abstract
Optical see-through head-mounted displays have been actively developed in recent years. An appropriate method for mutual occlusion is essential to a decent user experience in many augmented reality application scenarios. However, existing mutual occlusion methods fail to work well with a large field of view (FOV). In this Letter, we propose a double-parabolic-mirror structure that renders hard-edge occlusion within a wide FOV. The parabolic mirror increases the numerical aperture of the system significantly, and the use of paired parabolic mirrors eliminates most optical aberrations. A liquid-crystal-on-silicon device is introduced as the spatial light modulator for imaging a bright see-through view and rendering sharp occlusion patterns. A loop structure is built to eliminate vertical parallax. The system is designed to achieve a maximum monocular FOV of 114° (H) × 95° (V) with hard-edge occlusion, and an FOV of 83.5° (H) × 53.1° (V) is demonstrated with our bench-top prototype.
15. Itoh Y, Langlotz T, Zollmann S, Iwai D, Kiyokawa K, Amano T. Computational Phase-Modulated Eyeglasses. IEEE Transactions on Visualization and Computer Graphics 2021;27:1916-1928. PMID: 31613772. DOI: 10.1109/tvcg.2019.2947038.
Abstract
We present computational phase-modulated eyeglasses, a see-through optical system that modulates the user's view using phase-only spatial light modulators (PSLMs). A PSLM is a programmable reflective device that can selectively retard, or delay, incoming light rays; as a result, it works as a computational dynamic lens. We demonstrate our computational phase-modulated eyeglasses with either a single PSLM or dual PSLMs and show that the concept can realize various optical operations, including focus correction, bi-focus, image shift, and field-of-view manipulation, namely optical zoom. Compared to other programmable optics, computational phase-modulated eyeglasses have the advantage of versatility. We also present prototype focus-loop applications in which the lens is dynamically optimized based on the distances of objects observed by a scene camera. We further discuss the implementation and applications, as well as the limitations of the current prototypes and the remaining issues to be addressed in future research.
16. Qin Z, Zhang Y, Yang BR. Interaction between sampled rays' defocusing and number on accommodative response in integral imaging near-eye light field displays. Optics Express 2021;29:7342-7360. PMID: 33726237. DOI: 10.1364/oe.417241.
Abstract
In an integral imaging near-eye light field display using a microlens array, a point on a reconstructed depth plane (RDP) is reconstructed by sampled rays. Previous studies suggested that the accommodative response may shift away from the RDP under two circumstances: (i) the RDP is away from the central depth plane (CDP), introducing defocus in the sampled rays; (ii) the sampled-ray number is too low. However, the defocus and number of the sampled rays may interact, and this interaction's influence on the accommodative response has received little attention. This study therefore adopts a proven imaging model providing retinal images to analyze the accommodative response. As a result, when the RDP and the CDP coincide, the accommodative response matches the RDP. When the RDP deviates from the CDP, defocus is introduced in the sampled rays, causing the accommodative response to shift from the RDP towards the CDP. For example, in a system with a CDP at 4 diopters (D) and 45 sampled rays, when the RDP is at 3, 2, 1, and 0 D, the accommodative response shifts to 3.25, 2.75, 2, and 1.75 D, respectively. With fewer rays, the accommodative response tends to shift further towards the CDP. Eventually, with fewer than five rays, the eye accommodates to the CDP and the 3D display capability is lost. Moreover, the influence of the ray number differs across RDPs, and vice versa. An x-y polynomial equation containing three interactive terms is finally provided to capture the interaction between RDP position and ray number. In comparison, in a pinhole-based system with no CDP, the accommodative response always matches the RDP when the sampled-ray number is greater than five.
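Fitting an x-y polynomial with interaction terms, as the abstract describes for accommodative response versus RDP position (x) and ray number (y), is a linear least-squares problem once the basis is chosen. The sketch below fits such a surface to synthetic data generated from known coefficients; the basis and coefficients are illustrative assumptions, not the paper's fitted equation.

```python
import numpy as np

def design(x, y):
    # Polynomial basis with three interaction (cross) terms: xy, x^2*y, x*y^2.
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2 * y, x * y**2])

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 4.0, 200)    # RDP position in diopters (synthetic)
y = rng.uniform(5.0, 45.0, 200)   # number of sampled rays (synthetic)
c_true = np.array([0.5, 0.9, 0.01, 0.02, -0.001, -0.0002])
z = design(x, y) @ c_true         # synthetic accommodative response

# Ordinary least squares recovers the coefficients of the interaction model.
c_fit, *_ = np.linalg.lstsq(design(x, y), z, rcond=None)
print(np.abs(c_fit - c_true).max())
```

With measured data the fit would of course carry residual error; the interaction terms are what let the model express "the influence of the ray number differs across RDPs, and vice versa."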
17. Enlarging the Eyebox of Maxwellian Displays with a Customized Liquid Crystal Dammann Grating. Crystals 2021. DOI: 10.3390/cryst11020195.
Abstract
The Maxwellian view offers a promising approach to overcoming the vergence-accommodation conflict in near-eye displays; however, its pinhole-like imaging naturally limits the eyebox size. Here, a liquid-crystal-polymer-based Dammann grating with evenly distributed energy among different diffraction orders is developed to enlarge the eyebox of Maxwellian-view displays via pupil replication. In the experiment, a 3-by-3 Dammann grating is designed and fabricated, exhibiting good efficiency and high brightness uniformity. We further construct a proof-of-concept Maxwellian-view display breadboard by inserting the Dammann grating into the optical system. The prototype successfully demonstrates the enlarged eyebox and full-color operation. Our work provides a promising route to eyebox expansion in Maxwellian-view displays while maintaining full-color operation, a simple system configuration, compactness, and light weight.
18. Jo Y, Yoo C, Bang K, Lee B, Lee B. Eye-box extended retinal projection type near-eye display with multiple independent viewpoints [Invited]. Applied Optics 2021;60:A268-A276. PMID: 33690378. DOI: 10.1364/ao.408707.
Abstract
We introduce an approach to expanding the eye-box of a retinal-projection-based near-eye display. Retinal projection displays have the advantage of providing clear images over a wide depth range, but their narrow eye-box hinders practical use. Here, we propose a method to enlarge the eye-box of a retinal projection display by generating multiple independent viewpoints while maintaining a wide depth of field. The method prevents images projected from the multiple viewpoints from overlapping one another on the retina. As a result, our proposed system can provide a continuous image over a wide viewing angle without an eye tracker or image updates. We discuss the optical design for the proposed method and verify its feasibility through simulation and experiment.
19. Aydındoğan G, Kavaklı K, Şahin A, Artal P, Ürey H. Applications of augmented reality in ophthalmology [Invited]. Biomedical Optics Express 2021;12:511-538. PMID: 33659087. PMCID: PMC7899512. DOI: 10.1364/boe.405026.
Abstract
Throughout the last decade, augmented reality (AR) head-mounted displays (HMDs) have gradually become a substantial part of modern life, with increasing applications ranging from gaming and driver assistance to medical training. Owing to the tremendous progress in miniaturized displays, cameras, and sensors, HMDs are now used for the diagnosis, treatment, and follow-up of several eye diseases. In this review, we discuss the current state-of-the-art as well as potential uses of AR in ophthalmology. This review includes the following topics: (i) underlying optical technologies, displays and trackers, holography, and adaptive optics; (ii) accommodation, 3D vision, and related problems such as presbyopia, amblyopia, strabismus, and refractive errors; (iii) AR technologies in lens and corneal disorders, in particular cataract and keratoconus; (iv) AR technologies in retinal disorders including age-related macular degeneration (AMD), glaucoma, color blindness, and vision simulators developed for other types of low-vision patients.
Affiliation(s)
- Güneş Aydındoğan
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Koray Kavaklı
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Afsun Şahin
- Koç University, School of Medicine and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Pablo Artal
- Laboratorio de Óptica, Instituto Universitario de Investigación en Óptica y Nanofísica, Universidad de Murcia, Campus de Espinardo, E-30100 Murcia, Spain
- Hakan Ürey
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
20. Sun X, Zhang Y, Huang PC, Acharjee N, Dagenais M, Peckerar M, Varshney A. Correcting the Proximity Effect in Nanophotonic Phased Arrays. IEEE Transactions on Visualization and Computer Graphics 2020;26:3503-3513. PMID: 32941146. DOI: 10.1109/tvcg.2020.3023601.
Abstract
Thermally modulated Nanophotonic Phased Arrays (NPAs) can be used as phase-only holographic displays. Compared with holographic displays based on Liquid Crystal on Silicon Spatial Light Modulators (LCoS SLMs), NPAs have the advantages of an integrated light source and a high refresh rate. However, forming the desired wavefront requires accurate phase modulation, which is distorted by the thermal proximity effect. This problem has been largely overlooked, and existing approaches to similar problems are either slow or yield poor results in the NPA setting. We propose two new algorithms, based on the iterative phase retrieval algorithm and the proximal algorithm, to address this challenge. We have carried out computational simulations to compare various algorithms in terms of image quality and computational efficiency. This work will benefit research on NPAs and enable the use of large-scale NPAs as holographic displays.
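The baseline loop behind such iterative phase retrieval can be sketched as follows. This is an illustrative reconstruction, not the authors' code: far-field propagation is modeled as a plain FFT, and the thermal proximity effect the paper corrects for is deliberately ignored.

```python
import numpy as np

def phase_retrieval(target_amplitude, iterations=100, seed=0):
    """Gerchberg-Saxton-style iterative phase retrieval (baseline sketch).

    Finds a phase-only pattern whose far field (modeled here as a plain FFT)
    approximates target_amplitude. The thermal proximity effect is ignored;
    this is only the textbook alternating-projection loop.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))                # propagate to far field
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                             # back-propagate
        phase = np.angle(near)                               # phase-only constraint
    return phase
```

Each iteration projects onto the two constraint sets (unit amplitude in the array plane, target amplitude in the far field), so the reconstruction error is non-increasing.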
21
Ersumo NT, Yalcin C, Antipa N, Pégard N, Waller L, Lopez D, Muller R. A micromirror array with annular partitioning for high-speed random-access axial focusing. LIGHT, SCIENCE & APPLICATIONS 2020; 9:183. [PMID: 33298828 PMCID: PMC7596532 DOI: 10.1038/s41377-020-00420-6]
Abstract
Dynamic axial focusing functionality has recently experienced widespread incorporation in microscopy, augmented/virtual reality (AR/VR), adaptive optics and material processing. However, the limitations of existing varifocal tools continue to beset the performance capabilities and operating overhead of the optical systems that mobilize such functionality. The varifocal tools that are the least burdensome to operate (e.g. liquid crystal, elastomeric or optofluidic lenses) suffer from low (≈100 Hz) refresh rates. Conversely, the fastest devices sacrifice either critical capabilities such as their dwelling capacity (e.g. acoustic gradient lenses or monolithic micromechanical mirrors) or low operating overhead (e.g. deformable mirrors). Here, we present a general-purpose random-access axial focusing device that bridges these previously conflicting features of high speed, dwelling capacity and lightweight drive by employing low-rigidity micromirrors that exploit the robustness of defocusing phase profiles. Geometrically, the device consists of an 8.2 mm diameter array of piston-motion and 48-μm-pitch micromirror pixels that provide 2π phase shifting for wavelengths shorter than 1100 nm with 10-90% settling in 64.8 μs (i.e., 15.44 kHz refresh rate). The pixels are electrically partitioned into 32 rings for a driving scheme that enables phase-wrapped operation with circular symmetry and requires <30 V per channel. Optical experiments demonstrated the array's wide focusing range with a measured ability to target 29 distinct resolvable depth planes. Overall, the features of the proposed array offer the potential for compact, straightforward methods of tackling bottlenecked applications, including high-throughput single-cell targeting in neurobiology and the delivery of dense 3D visual information in AR/VR.
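The ring-wise phase-wrapped driving scheme described above can be illustrated with a short sketch. The paraxial defocus formula is standard; the wavelength and target focal length below are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def ring_pistons(n_rings=32, aperture_mm=8.2, focal_mm=500.0, wavelength_nm=633.0):
    """Phase-wrapped piston targets for an annularly partitioned mirror array.

    Evaluates the paraxial defocus phase -pi*r^2/(lambda*f) at each ring's
    mid radius and wraps it into [0, 2*pi). The piston displacement is half
    the phase-equivalent distance because reflection doubles the optical
    path. Focal length and wavelength are illustrative assumptions.
    """
    lam_mm = wavelength_nm * 1e-6                                       # wavelength in mm
    radii = (np.arange(n_rings) + 0.5) * (aperture_mm / 2.0) / n_rings  # ring mid-radii, mm
    phase = -np.pi * radii**2 / (lam_mm * focal_mm)                     # defocus phase, rad
    wrapped = np.mod(phase, 2.0 * np.pi)                                # wrapped drive phase
    piston_nm = wrapped / (2.0 * np.pi) * (wavelength_nm / 2.0)         # mirror displacement
    return wrapped, piston_nm
```

Wrapping keeps every piston stroke under half a wavelength, which is what makes the low-rigidity, fast-settling mirrors feasible.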
Affiliation(s)
- Nathan Tessema Ersumo
- The University of California, Berkeley and University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, 94720, USA
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Cem Yalcin
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Nick Antipa
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Nicolas Pégard
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27514, USA
- Laura Waller
- The University of California, Berkeley and University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, 94720, USA
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Chan Zuckerberg Biohub, San Francisco, CA, 94158, USA
- Daniel Lopez
- Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA
- Rikky Muller
- The University of California, Berkeley and University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, 94720, USA
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA, 94720, USA
- Chan Zuckerberg Biohub, San Francisco, CA, 94158, USA
22
Zhang Z, Liu J, Duan X, Wang Y. Enlarging field of view by a two-step method in a near-eye 3D holographic display. OPTICS EXPRESS 2020; 28:32709-32720. [PMID: 33114950 DOI: 10.1364/oe.403538]
Abstract
The narrow field of view (FOV) has long been one of the main limitations holding back the development of holographic three-dimensional (3D) near-eye displays (NEDs). The complex amplitude modulation (CAM) technique is one way to realize real-time holographic 3D display with high image quality. Previously, we applied the CAM technique to the design and integration of a compact, colorful 3D-NED system. In this paper, a viewing-angle-enlarged CAM-based 3D-NED system using an Abbe-Porter scheme and a curved reflective structure is proposed. The viewing angle is increased in two steps. An Abbe-Porter filter system, composed of a lens and a grating, enlarges the FOV in the first step while also realizing complex amplitude modulation. A curved reflective structure enlarges the FOV in the second step. The system retains the ability to present colorful 3D images with high quality. Optical experiments show that the system presents a 45.2° diagonal viewing angle and supports dynamic display. A compact prototype was fabricated and integrated for a wearable, lightweight design.
23
Duan X, Liu J, Shi X, Zhang Z, Xiao J. Full-color see-through near-eye holographic display with 80° field of view and an expanded eye-box. OPTICS EXPRESS 2020; 28:31316-31329. [PMID: 33115107 DOI: 10.1364/oe.399359]
Abstract
A full-color see-through near-eye holographic display is proposed with an 80° field of view (FOV) and an expanded eye-box. The system is based on a holographic optical element (HOE) that achieves a large FOV while the image light is focused at the entrance of the human pupil, so that the image of the entire field enters the eye. One of the major limitations of large-FOV holographic display systems is the small eye-box, which needs to be expanded. We design a double-layer diffraction structure for the HOE to realize eye-box expansion. The HOE consists of two non-uniform volume holographic gratings and a transparent substrate. The two fabricated gratings are attached to the front and back surfaces of the substrate to multiplex the image light and expand the eye-box. The HOE is also manufactured for RGB colors to realize full-color display. The experimental results show that the proposed display system achieves an 80° round FOV and an enlarged eye-box of 7.5 mm (H) × 5 mm (V) simultaneously. Dynamic display capability is also demonstrated in the experiments. The proposed system provides a new solution for practical augmented reality displays.
24
Suzuki K, Fukano Y, Oku H. 1000-volume/s high-speed volumetric display for high-speed HMD. OPTICS EXPRESS 2020; 28:29455-29468. [PMID: 33114845 DOI: 10.1364/oe.401778]
Abstract
In this paper, we propose a high-speed volumetric display principle that can solve two problems faced by three-dimensional displays using the parallax stereo principle (namely, the vergence-accommodation conflict and display latency) and we report evaluation results. The proposed display method can update a set of images at different depths at 1000 Hz and is consistent with accommodation. The method selects the depth position in microseconds by combining a high-speed variable-focus lens that vibrates at about 69 kHz and sub-microsecond control of illumination light using an LED. By turning on the LED for only a few hundred nanoseconds when the refractive power of the lens is at a certain value, an image can be presented with this specific refractive power. The optical system is combined with a DMD to form an image at each depth. 3D information consisting of multiple planes in the depth direction can be presented at a high refresh rate by switching the images and changing the refractive power at high speed. A proof-of-concept system was developed to show the validity of the proposed display principle. The system successfully displayed 3D information consisting of six binary images at an update rate of 1000 volume/s.
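The strobe-timing idea (fire the LED only when the oscillating lens passes the desired refractive power) reduces to a small calculation. The sinusoidal power model below is our idealization of the resonant lens, with only the ~69 kHz frequency taken from the abstract.

```python
import math

def led_fire_time(p_target, p_mean, p_amp, f_hz=69_000.0):
    """First time t >= 0 at which the oscillating lens hits p_target.

    Idealizes the resonant tunable lens as P(t) = p_mean + p_amp*sin(2*pi*f*t)
    (our assumption). The LED would then be strobed for a few hundred
    nanoseconds around the returned instant.
    """
    x = (p_target - p_mean) / p_amp
    if not -1.0 <= x <= 1.0:
        raise ValueError("target power outside the lens modulation range")
    phase = math.asin(x)                             # radians, in [-pi/2, pi/2]
    return (phase % (2.0 * math.pi)) / (2.0 * math.pi * f_hz)
```

At 69 kHz one period lasts about 14.5 µs, so nanosecond-scale timing jitter translates into only a tiny error in the selected refractive power.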
25
Zhan T, Yin K, Xiong J, He Z, Wu ST. Augmented Reality and Virtual Reality Displays: Perspectives and Challenges. iScience 2020; 23:101397. [PMID: 32759057 PMCID: PMC7404571 DOI: 10.1016/j.isci.2020.101397]
Abstract
As one of the most promising candidates for the next-generation mobile platform, augmented reality (AR) and virtual reality (VR) have the potential to revolutionize the ways we perceive and interact with digital information. In the meantime, recent advances in display and optical technologies, together with rapidly developing digital processors, offer new directions for advancing near-eye display systems. In this perspective paper, we start by analyzing the optical requirements in near-eye displays posed by the human visual system and then compare them against the specifications of state-of-the-art devices, which reveals the main challenges in near-eye displays at the present stage. Afterward, potential solutions to these challenges in both AR and VR displays are presented case by case, including the most recent optical research and development that is already industrialized, or has the potential to be, for extended reality displays.
Affiliation(s)
- Tao Zhan
- College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, USA
- Kun Yin
- College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, USA
- Jianghao Xiong
- College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, USA
- Ziqian He
- College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, USA
- Shin-Tson Wu
- College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, USA
26
Choi S, Park S, Min SW. Design of ghost-free floating 3D display with narrow thickness using offset lens and dihedral corner reflector arrays. OPTICS EXPRESS 2020; 28:15691-15705. [PMID: 32403591 DOI: 10.1364/oe.392036]
Abstract
A floating 3D display with a dihedral corner reflector array (DCRA) is presented to improve space efficiency and eliminate ghost images. Floating displays using a DCRA suffer from poor space efficiency, with a system thickness equal to the height of the floating image, and from ghost images that degrade the visibility of the floating image. The DCRA is analyzed to find the ghost-image region. Based on this analysis, an off-axis integral floating display is placed outside the ghost-image region to avoid the ghost image. To increase space efficiency, the optical path is folded using a mirror. In addition, the off-axis integral floating display creates a tilt angle for projecting the input image onto the DCRA, so that the complete 3D image can be observed in the proposed confined, narrow system. The effectiveness of the system was verified through simulations and experiments.
27
He Z, Yin K, Wu ST. Passive polymer-dispersed liquid crystal enabled multi-focal plane displays. OPTICS EXPRESS 2020; 28:15294-15299. [PMID: 32403560 DOI: 10.1364/oe.392489]
Abstract
A multi-focal plane see-through near-eye display using a transparent projection display is demonstrated. The key component of the transparent projection display is a passive polymer-dispersed liquid crystal (PDLC), which is highly transparent over a large range of incident angles in air but strongly scattering at large oblique angles in a high-refractive-index medium (e.g., glass). The use of a passive device avoids temporal multiplexing. Such a display is highly transparent in air and can easily deliver full-color images. The proposed method is an important step toward transparent-display-enabled multi-focal plane displays.
28
Ueno T, Takaki Y. Approximated super multi-view head-mounted display to reduce visual fatigue. OPTICS EXPRESS 2020; 28:14134-14150. [PMID: 32403874 DOI: 10.1364/oe.392966]
Abstract
To reduce the visual fatigue caused by head-mounted displays, we propose an approximated super multi-view technique in which multiple viewpoints are generated two-dimensionally, with an interval smaller than the pupil diameter, using time multiplexing, and the left and right virtual images are shifted two-dimensionally in synchronization with the viewpoint generation. The proposed technique enlarges the depth of field of the eyes to provide an accommodation-invariant response, so that the vergence-accommodation conflict is mitigated. We constructed an experimental system using two LED arrays for viewpoint generation and one LCD panel vibrated by two stepping motors. The proposed technique was then experimentally validated.
29
Xu M, Hua H. Finite-depth and vari-focal head-mounted displays based on geometrical lightguides. OPTICS EXPRESS 2020; 28:12121-12137. [PMID: 32403712 DOI: 10.1364/oe.390928]
Abstract
Existing waveguides and lightguides in optical see-through augmented reality (AR) displays usually guide collimated light, which results in a fixed image depth at optical infinity. In this paper, we explore the feasibility of integrating a lightguide with a varifocal optics engine to provide correct focus cues and solve the vergence-accommodation conflict in lightguide-based AR displays. The image performance and the cause of artifacts in a lightguide-based AR display with a varifocal optics engine are systematically analyzed. A non-sequential ray tracing method was developed to simulate the retinal image and quantify the effects of image focal depth on the image performance and artifacts for a vari-focal display engine of different depths. A prototype with varying image depths from 0 to 3 diopters was built and the experimental results validate the proposed system. A digital correction method is also proposed to correct the primary image artifact caused by the physical structure of the lightguide.
30
Wu JY, Kim J. Prescription AR: a fully-customized prescription-embedded augmented reality display. OPTICS EXPRESS 2020; 28:6225-6241. [PMID: 32225876 DOI: 10.1364/oe.380945]
Abstract
In this paper, we present a fully customized AR display design that accounts for the user's prescription, interpupillary distance, and fashion preferences. A free-form image combiner embedded inside the prescription lens overlays augmented images onto the vision-corrected real world. The optics was optimized for each prescription level, which can reduce mass-production cost while satisfying the user's preferences. A foveated optimization method was applied that distributes the pixels in accordance with human visual acuity. Our design covers myopia, hyperopia, astigmatism, and presbyopia, and allows eye-contact interaction with privacy protection. A 169 g dynamic prototype showed a 40° × 20° virtual image with 23 cpd resolution at the center field and a 6 mm × 4 mm eye-box, with vision-correction and varifocal (0.5-3 m) capability.
31
Yoo C, Chae M, Moon S, Lee B. Retinal projection type lightguide-based near-eye display with switchable viewpoints. OPTICS EXPRESS 2020; 28:3116-3135. [PMID: 32121986 DOI: 10.1364/oe.383386]
Abstract
We present a retinal-projection-based near-eye display with multiple switchable viewpoints enabled by polarization multiplexing. Active switching of viewpoints is provided by a polarization grating, multiplexed holographic optical elements, and a polarization-dependent eyepiece lens that can generate one of two divided focus groups according to the pupil position. The lightguide-combined optical devices have the potential to enable a wide field of view (FOV) and short eye relief in a compact form factor. Our proposed system can support pupil movement with an extended eyebox and mitigate the image problems caused by duplicated viewpoints. We discuss the optical design of the guiding system and demonstrate that the proof-of-concept system provides all-in-focus images with a 37° FOV and a 16 mm eyebox in the horizontal direction.
32
Chen J, Mi L, Chen CP, Liu H, Jiang J, Zhang W. Design of foveated contact lens display for augmented reality. OPTICS EXPRESS 2019; 27:38204-38219. [PMID: 31878591 DOI: 10.1364/oe.381200]
Abstract
We present a design of a contact lens display for augmented reality, which features an array of collimated light-emitting diodes and a contact lens. By building the infrastructure directly on top of the eye, the eye is allowed to move or rotate freely without the need for exit-pupil expansion or eye tracking. The resolution of the light-emitting diodes is foveated to match the density of cones on the retina. In this manner, the total number of pixels as well as the latency of image processing can be significantly reduced. Based on simulation, the device performance is quantitatively analyzed. For the real image, the modulation transfer function is 0.669757 at 30 cycles/degree, the contrast ratio is 5, and the distortion is 10%. For the virtual image, the field of view is 82°, the best angular resolution is 0.38', the modulation transfer function is above 0.999999 at 30 cycles/degree, the contrast ratio is 4988, and the distortion is 6%.
33
Xia X, Guan Y, State A, Chakravarthula P, Rathinavel K, Cham TJ, Fuchs H. Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:3114-3124. [PMID: 31403422 DOI: 10.1109/tvcg.2019.2932238]
Abstract
In this paper, we present a novel design for switchable AR/VR near-eye displays that can help solve the vergence-accommodation conflict. The principal idea is to time-multiplex the virtual imagery and the real-world imagery and use a tunable lens to adjust focus for the virtual display and the see-through scene separately. With this design, prescription eyeglasses for near- and far-sighted users become unnecessary: the wearer's corrective optical prescription is integrated into the tunable lens for both the virtual display and the see-through environment. We built a prototype based on the design, comprising a micro-display, optical systems, a tunable lens, and active shutters. The experimental results confirm that the proposed near-eye display can switch between AR and VR and provides correct accommodation for both.
34
Huang H, Hua H. Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays. OPTICS EXPRESS 2019; 27:25154-25171. [PMID: 31510393 DOI: 10.1364/oe.27.025154]
Abstract
An integral-imaging-based light field head-mounted display, which typically renders a 3D scene by reconstructing the directional light rays apparently emitted by the scene via array optics, is potentially capable of rendering correct or nearly correct focus cues and therefore solving the well-known vergence-accommodation conflict that plagues conventional stereoscopic displays. Its true 3D image formation, however, imposes significant complications, and the well-established optical design process for conventional head-mounted displays is inadequate for its design challenges. To the best of our knowledge, no existing methods or frameworks have been proposed or demonstrated for modeling and optimizing an optical system for this type of display. In this paper, we present a novel, generalizable methodology and framework for designing and optimizing the optical performance of integral-imaging-based light field head-mounted displays, including methods for system configuration, user-defined metrics for characterizing the performance of such systems, and optimization strategies unique to light field displays. A design example is given to validate the proposed design method and framework.
35
Padmanaban N, Konrad R, Wetzstein G. Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes. SCIENCE ADVANCES 2019; 5:eaav6187. [PMID: 31259239 PMCID: PMC6598771 DOI: 10.1126/sciadv.aav6187]
Abstract
As humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of 23 of 37 users ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.
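A toy version of the gaze-contingent focus rule can be sketched as follows. This simplified control law (distance prescription plus capped accommodation demand at the fixated depth) is our assumption for illustration, not the paper's actual algorithm, and all parameter values are hypothetical.

```python
def autofocal_power(fixation_depth_m, distance_rx_diopters=0.0, max_add=2.5):
    """Focus-tunable lens setting for an autofocal-style corrector (sketch).

    Hypothetical simplified control law: the wearer's distance prescription
    plus the accommodation demand 1/d (diopters) at the fixated depth d
    reported by the eye tracker + depth sensor, capped at a typical reading
    add. Parameter values are illustrative assumptions.
    """
    demand = 1.0 / max(fixation_depth_m, 1e-3)  # diopters needed to focus at d
    return distance_rx_diopters + min(demand, max_add)
```

For example, fixating a book at 0.4 m adds 2.5 D of lens power for an emmetropic wearer, while fixating at 2 m adds only 0.5 D.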
36
Yu H, Bemana M, Wernikowski M, Chwesiuk M, Tursun OT, Singh G, Myszkowski K, Mantiuk R, Seidel HP, Didyk P. A Perception-driven Hybrid Decomposition for Multi-layer Accommodative Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:1940-1950. [PMID: 30794180 DOI: 10.1109/tvcg.2019.2898821]
Abstract
Multi-focal plane and multi-layered light-field displays are promising solutions for addressing all visual cues observed in the real world. Unfortunately, these devices usually require expensive optimizations to compute a suitable decomposition of the input light field or focal stack to drive individual display layers. Although these methods provide near-correct image reconstruction, a significant computational cost prevents real-time applications. A simple alternative is a linear blending strategy which decomposes a single 2D image using depth information. This method provides real-time performance, but it generates inaccurate results at occlusion boundaries and on glossy surfaces. This paper proposes a perception-based hybrid decomposition technique which combines the advantages of the above strategies and achieves both real-time performance and high-fidelity results. The fundamental idea is to apply expensive optimizations only in regions where it is perceptually superior, e.g., depth discontinuities at the fovea, and fall back to less costly linear blending otherwise. We present a complete, perception-informed analysis and model that locally determine which of the two strategies should be applied. The prediction is later utilized by our new synthesis method which performs the image decomposition. The results are analyzed and validated in user experiments on a custom multi-plane display.
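The linear blending fallback the abstract refers to can be sketched for a two-plane display. The plane placements (3 D near, 0 D far) are illustrative assumptions, and the real system blends across more layers with perception-driven switching.

```python
import numpy as np

def linear_blend_layers(image, depth_diopters, d_near=3.0, d_far=0.0):
    """Per-pixel linear depth blending across two display planes (sketch).

    The real-time fallback strategy mentioned in the abstract: each pixel of
    a 2D grayscale image is split between a near plane (d_near) and a far
    plane (d_far) according to its dioptric depth. Plane placements are
    illustrative assumptions.
    """
    # weight is 0 at the near plane, 1 at the far plane, clamped outside the range
    w_far = np.clip((d_near - depth_diopters) / (d_near - d_far), 0.0, 1.0)
    return image * (1.0 - w_far), image * w_far
```

Because the two layers always sum to the input image, the blend is cheap and artifact-free on smooth depth, which is exactly where the paper's model chooses it over the expensive optimization.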
37
Aksit K, Chakravarthula P, Rathinavel K, Jeong Y, Albert R, Fuchs H, Luebke D. Manufacturing Application-Driven Foveated Near-Eye Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:1928-1939. [PMID: 30794179 DOI: 10.1109/tvcg.2019.2898781]
Abstract
Traditional optical manufacturing poses a great challenge to near-eye display designers due to lead times on the order of multiple weeks, limiting the ability of optical designers to iterate quickly and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a one-day lead time using commodity hardware. Our manufacturing pipeline consists of several innovations: a rapid production technique that improves the surface of a 3D-printed component to the optical quality suitable for near-eye display applications; a computational design methodology using machine learning and ray tracing to create freeform static projection-screen surfaces for near-eye displays that can represent arbitrary focal surfaces; and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware candidate. We demonstrate untethered augmented reality near-eye display prototypes to assess the success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30°×55°), and a resolution of 12 cycles per degree can be achieved.
38
Cui W, Gao L. All-passive transformable optical mapping near-eye display. Sci Rep 2019; 9:6064. [PMID: 30988506 PMCID: PMC6465389 DOI: 10.1038/s41598-019-42507-0]
Abstract
We present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle. By employing a diffractive optical element, a distorted grating, the ATOM display can project different portions of a two-dimensional display screen to various depths, rendering a real three-dimensional image with correct focus cues. Thanks to its all-passive optical mapping architecture, the ATOM display features a reduced form factor and low power consumption. Moreover, the system can readily switch between a real-three-dimensional and a high-resolution two-dimensional display mode, providing task-tailored viewing experience for a variety of VR/AR applications.
Affiliation(s)
- Wei Cui
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA
- Liang Gao
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright St., Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA
39
Lv Z, Liu J, Xiao J, Kuang Y. Integrated holographic waveguide display system with a common optical path for visible and infrared light. OPTICS EXPRESS 2018; 26:32802-32811. [PMID: 30645442 DOI: 10.1364/oe.26.032802]
Abstract
We propose an integrated holographic waveguide display system. An infrared volume holographic grating (IVHG) and a visible-light grating are recorded on the same waveguide so that both share a common light path, miniaturizing the system. Simulated and experimental results verify the feasibility of this method. The coupling efficiencies of the infrared module for eye tracking and the visible-light module for augmented reality (AR) display are 40% and 45%, respectively. The holographic waveguide weighs only 4.3 grams. We believe this technique is a good way to realize a light, thin near-eye display with eye tracking.
40
Chakravarthula P, Dunn D, Aksit K, Fuchs H. FocusAR: Auto-focus Augmented Reality Eyeglasses for both Real World and Virtual Imagery. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:2906-2916. [PMID: 30207958 DOI: 10.1109/tvcg.2018.2868532]
Abstract
We describe a system that dynamically corrects the focus of both the real world surrounding the user's near-eye display and the internal display for augmented synthetic imagery, with the aim of completely replacing the user's prescription eyeglasses. The ability to adjust focus for both real and virtual stimuli will be useful for a wide variety of users, but especially for users over 40 years of age who have a limited accommodation range. Our proposed solution employs a tunable-focus lens for dynamic prescription vision correction and a varifocal internal display that sets the virtual imagery at appropriate spatially registered depths. We also demonstrate a proof-of-concept prototype to verify our design and discuss the challenges of building auto-focus augmented reality eyeglasses for both real and virtual imagery.
Collapse
|
41
|
Rathinavel K, Wang H, Blate A, Fuchs H. An Extended Depth-at-Field Volumetric Near-Eye Augmented Reality Display. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:2857-2866. [PMID: 30207960 DOI: 10.1109/tvcg.2018.2868570] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We introduce an optical design and a rendering pipeline for a full-color volumetric near-eye display that simultaneously presents imagery with near-accurate per-pixel focus across an extended volume ranging from 15 cm (6.7 diopters) to 4 m (0.25 diopters), allowing the viewer to accommodate freely across this entire depth range. This is achieved using a focus-tunable lens that continuously sweeps a sequence of 280 synchronized binary images from a high-speed digital micromirror device (DMD) projector, and a high-speed, high-dynamic-range (HDR) light source that illuminates the DMD images with a distinct color and brightness at each binary frame. Our rendering pipeline converts 3-D scene information into a 2-D surface of color voxels, which are decomposed into 280 binary images in a voxel-oriented manner, so that full-color voxels can be displayed at 280 distinct depth positions.
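The voxel-oriented decomposition described above can be illustrated with a minimal sketch: quantize each pixel's depth (in diopters) to one of 280 focal planes and emit one binary mask per plane, to be shown while a tunable lens sweeps through that plane. This is our own illustrative simplification, not the authors' pipeline; the function name and the uniform-in-diopters spacing are assumptions.

```python
import numpy as np

def decompose_to_binary_frames(depth, n_frames=280,
                               near_diopters=6.7, far_diopters=0.25):
    """Quantize per-pixel depth (in diopters) into n_frames focal planes.

    depth : (H, W) array of diopters in [far_diopters, near_diopters]
    Returns an (n_frames, H, W) boolean array; frame k is True where a
    pixel should be lit while the tunable lens sweeps plane k.
    """
    # Uniform spacing in diopters over the 0.25-6.7 D sweep range.
    edges = np.linspace(far_diopters, near_diopters, n_frames + 1)
    plane = np.clip(np.digitize(depth, edges) - 1, 0, n_frames - 1)
    masks = np.zeros((n_frames,) + depth.shape, dtype=bool)
    for k in range(n_frames):
        masks[k] = plane == k  # each pixel lands in exactly one frame
    return masks
```

In a real system each mask would additionally be weighted by the HDR light source's per-frame color and brightness; the sketch only captures the depth assignment.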
Collapse
|
42
|
Gilles A, Gioia P. Real-time layer-based computer-generated hologram calculation for the Fourier transform optical system. APPLIED OPTICS 2018; 57:8508-8517. [PMID: 30461916 DOI: 10.1364/ao.57.008508] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 09/06/2018] [Indexed: 06/09/2023]
Abstract
With the growing interest in augmented reality devices, holography is often considered a promising technology for overcoming the focus issues of conventional stereoscopic displays. To enlarge the field of view of holographic head-mounted displays, a Fourier transform optical system (FTOS) has been proposed. However, since the FTOS distorts the scene geometry, the position of each scene point must be compensated during hologram computation, resulting in long calculation times. In this paper, we propose a real-time computer-generated hologram calculation method for the FTOS. Whereas previously proposed methods used a ray-tracing approach to compensate for the distortion induced by the FTOS, our method relies on a layer-based approach. Experimental results show that our method computes holograms at 3840×2160 resolution in real time at 24 frames per second, enabling its use in augmented reality applications.
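For context, the generic layer-based approach the abstract refers to can be sketched as follows: slice the scene into depth layers, propagate each layer to the hologram plane (here with the angular-spectrum method), and sum the complex fields. This is a textbook sketch of layer-based CGH, not the paper's method — in particular it does not model the FTOS distortion compensation, and the function name and defaults are our assumptions.

```python
import numpy as np

def layer_based_hologram(layers, depths, wavelength=532e-9, pitch=8e-6):
    """Sum angular-spectrum propagations of amplitude layers.

    layers : list of (H, W) real amplitude images, one per depth slice
    depths : propagation distance in meters for each layer
    Returns the phase of the summed field (a phase-only hologram).
    """
    h, w = layers[0].shape
    fy = np.fft.fftfreq(h, d=pitch)[:, None]   # spatial frequencies (1/m)
    fx = np.fft.fftfreq(w, d=pitch)[None, :]
    arg = 1.0 / wavelength**2 - fx**2 - fy**2  # propagating if positive
    field = np.zeros((h, w), dtype=complex)
    for amp, z in zip(layers, depths):
        # Angular-spectrum transfer function; evanescent components dropped.
        H = np.where(arg > 0,
                     np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))),
                     0)
        field += np.fft.ifft2(np.fft.fft2(amp.astype(complex)) * H)
    return np.angle(field)
```

The speedup over ray tracing comes from replacing per-point distortion correction with a handful of FFTs, one pair per layer.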
Collapse
|
43
|
Wolffsohn JS, Davies LN. Presbyopia: Effectiveness of correction strategies. Prog Retin Eye Res 2018; 68:124-143. [PMID: 30244049 DOI: 10.1016/j.preteyeres.2018.09.004] [Citation(s) in RCA: 148] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2018] [Revised: 09/14/2018] [Accepted: 09/18/2018] [Indexed: 01/04/2023]
Abstract
Presbyopia is a global problem affecting over a billion people worldwide. The prevalence of unmanaged presbyopia is as high as 50% of those over 50 years of age in developing world populations, due to a lack of awareness and accessibility to affordable treatment, and is even as high as 34% in developed countries. Definitions of presbyopia are inconsistent and varied, so we propose a redefinition that states "presbyopia occurs when the physiologically normal age-related reduction in the eye's focusing range reaches a point, when optimally corrected for distance vision, that the clarity of vision at near is insufficient to satisfy an individual's requirements". Strategies for correcting presbyopia include separate optical devices located in front of the visual system (reading glasses) or a change in the direction of gaze to view through optical zones of different optical powers (bifocal, trifocal or progressive addition spectacle lenses), monovision (with contact lenses, intraocular lenses, laser refractive surgery and corneal collagen shrinkage), simultaneous images (with contact lenses, intraocular lenses and corneal inlays), pinhole depth of focus expansion (with intraocular lenses, corneal inlays and pharmaceuticals), crystalline lens softening (with lasers or pharmaceuticals) or restored dynamics (with 'accommodating' intraocular lenses, scleral expansion techniques and ciliary muscle electrostimulation); these strategies may be applied differently to the two eyes to optimise the range of clear focus for an individual's task requirements and minimise adverse visual effects. However, none fully overcome presbyopia in all patients. While the restoration of natural accommodation or an equivalent remains elusive, guidance is given on presbyopic correction evaluation techniques.
Collapse
Affiliation(s)
- James S Wolffsohn
- Ophthalmic Research Group, Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK.
| | - Leon N Davies
- Ophthalmic Research Group, Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK
| |
Collapse
|
44
|
Grubert J, Itoh Y, Moser K, Swan JE. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:2649-2662. [PMID: 28961115 DOI: 10.1109/tvcg.2017.2754257] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality, and they have seen significant growth in popularity and usage among the general public due to the growing number of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-degree-of-freedom tracking system. However, in order to properly render virtual objects so that they are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this eye position. However, to date, there has been no comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it identifies opportunities for future research.
Collapse
|
45
|
Kim H, Gabbard JL, Anon AM, Misu T. Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:1515-1524. [PMID: 29543169 DOI: 10.1109/tvcg.2018.2793680] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
This article investigates the effects of visual warning presentation methods on human performance in augmented reality (AR) driving. An experimental user study was conducted in a parking lot where participants drove a test vehicle while braking for any cross traffic with assistance from AR visual warnings presented on a monoscopic and volumetric head-up display (HUD). Results showed that monoscopic displays can be as effective as volumetric displays for human performance in AR braking tasks. The experiment also demonstrated the benefits of conformal graphics, which are tightly integrated into the real world, such as their ability to guide drivers' attention and their positive consequences on driver behavior and performance. These findings suggest that conformal graphics presented via monoscopic HUDs can enhance driver performance by leveraging the effectiveness of monocular depth cues. The proposed approaches and methods can be used and further developed by future researchers and practitioners to better understand driver performance in AR as well as inform usability evaluation of future automotive AR applications.
Collapse
|
46
|
Liu S, Li Y, Zhou P, Chen Q, Su Y. Reverse-mode PSLC multi-plane optical see-through display for AR applications. OPTICS EXPRESS 2018; 26:3394-3403. [PMID: 29401867 DOI: 10.1364/oe.26.003394] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2017] [Accepted: 01/23/2018] [Indexed: 06/07/2023]
Abstract
In this paper, we propose an optical see-through multi-plane display based on reverse-mode polymer-stabilized liquid crystal (PSLC). Our design solves the accommodation-vergence conflict by providing correct focus cues. In the reverse-mode PSLC system, power consumption can be reduced to roughly 1/(N-1) of that of a normal-mode system when N planes are displayed. The PSLC films fabricated in our experiment exhibit a low saturation voltage (~20 Vrms), a high transparent-state transmittance (92%), a fast switching time (within 2 ms), and polarization insensitivity. A proof-of-concept two-plane color display prototype and a four-plane monochrome display prototype were implemented.
Collapse
|
47
|
Soomro SR, Urey H. Integrated 3D display and imaging using dual purpose passive screen and head-mounted projectors and camera. OPTICS EXPRESS 2018; 26:1161-1173. [PMID: 29401993 DOI: 10.1364/oe.26.001161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2017] [Accepted: 01/04/2018] [Indexed: 06/07/2023]
Abstract
We propose an integrated 3D display and imaging system using a head-mounted device and a special dual-purpose passive screen that simultaneously facilitates 3D display and imaging. The screen is composed of two optical layers. The first layer is a projection surface of finely patterned retro-reflective microspheres that provide high optical gain when illuminated by the head-mounted projectors. The second layer is an imaging surface made up of an array of curved mirrors, which form the perspective views of the scene captured by a head-mounted camera. The display and imaging operations are separated by polarization multiplexing. The demonstrated prototype consists of a head-worn unit with a pair of 15-lumen pico-projectors and a 24 MP camera, and an in-house designed and fabricated 30 cm × 24 cm screen. The screen provides a bright display using 25%-filled retro-reflective microspheres and 20 different perspective views of the user/scene using a 5 × 4 array of convex mirrors. Real-time operation is demonstrated by displaying stereo-3D content with high brightness (up to 240 cd/m2) and low crosstalk (<4%), while 3D image capture is demonstrated by computationally reconstructing the discrete free-viewpoint stereo pair displayed on a desktop or virtual reality display. Furthermore, the capture quality is determined by measuring the imaging MTF of the captured views, and the capture light efficiency is calculated by accounting for the light lost at each interface. Further developments in microfabrication and computational optics could make the proposed system a unique mobile platform for immersive human-computer interaction.
Collapse
|
48
|
Yu C, Peng Y, Zhao Q, Li H, Liu X. Highly efficient waveguide display with space-variant volume holographic gratings. APPLIED OPTICS 2017; 56:9390-9397. [PMID: 29216051 DOI: 10.1364/ao.56.009390] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
We propose a highly efficient waveguide display based on space-variant volume holographic gratings (SVVHGs). The local period and slant angle of the SVVHG vary along the tangential direction, enabling different incident angles to satisfy the Bragg condition of the local gratings. As a result, we enlarge the field of view (FOV) without the conventional multiplexing scheme, achieving high efficiency and a large FOV at the same time. We experimentally record the SVVHGs on Bayfol HX200 films. We demonstrate that the proposed display achieves 31.9% system efficiency for a broadband light source and 52.3% for a coherent light source, a 20° FOV, and high brightness uniformity, making it a promising candidate for widespread use in the augmented reality (AR) industry.
Collapse
|
49
|
Itoh Y, Hamasaki T, Sugimoto M. Occlusion Leak Compensation for Optical See-Through Displays Using a Single-Layer Transmissive Spatial Light Modulator. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2017; 23:2463-2473. [PMID: 28809690 DOI: 10.1109/tvcg.2017.2734427] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose an occlusion compensation method for optical see-through head-mounted displays (OST-HMDs) equipped with a single-layer transmissive spatial light modulator (SLM), in particular a liquid crystal display (LCD). Occlusion is an important depth cue for 3D perception, yet realizing it on OST-HMDs is particularly difficult due to the displays' semitransparent nature. A key component for occlusion support is the SLM, a device that can selectively interfere with light rays passing through it. For example, an LCD is a transmissive SLM that can block or pass incoming light rays by turning pixels black or transparent. A straightforward solution places an LCD in front of an OST-HMD and drives the LCD to block light rays that would pass through rendered virtual objects at the viewpoint. This simple approach is, however, defective due to the depth mismatch between the LCD panel and the virtual objects, leading to blurred occlusion. This has led existing OST-HMDs to employ dedicated hardware such as focus optics and multi-stacked SLMs. In contrast to these viable yet complex and/or computationally expensive solutions, we return to the single-layer LCD approach for its hardware simplicity while maintaining fine occlusion: we compensate for the degraded occlusion area by overlaying a compensation image. We compute this image from the HMD parameters and the background scene captured by a scene camera. The evaluation demonstrates that the proposed method reduced the occlusion leak error by 61.4% and the occlusion error by 85.7%.
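The compensation idea above can be sketched in a heavily simplified form: the binary LCD mask appears defocus-blurred at the eye, so some background light leaks into the occluded region; predicting that leak from the camera image and subtracting it from the rendered virtual image makes leak plus display approximate the intended appearance. This is our own grayscale toy model, not the paper's implementation — the blur kernel, function names, and the simple subtractive model are all assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k box blur, a stand-in for the defocus blur the LCD
    mask undergoes when viewed out of focus at the eye."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def occlusion_compensation(virtual, background, mask, blur_k=5):
    """Toy single-layer occlusion leak compensation.

    virtual, background, mask : (H, W) grayscale arrays in [0, 1];
    mask = 1 where the LCD blocks light.
    Returns the display image such that leaked background + display
    approximates the intended occluded appearance.
    """
    blurred_block = box_blur(mask.astype(float), blur_k)
    leak = background * (1.0 - blurred_block)  # light still transmitted
    # The display can only add light, so the correction is clipped.
    return np.clip(virtual - leak, 0.0, 1.0)
```

With a perfectly sharp, fully blocking mask the leak vanishes and the display shows the virtual image unchanged; near mask edges the leak term carves the predicted background bleed out of the rendered content.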
Collapse
|
50
|
EyeAR: Refocusable Augmented Reality Content through Eye Measurements. MULTIMODAL TECHNOLOGIES AND INTERACTION 2017. [DOI: 10.3390/mti1040022] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|