1. Cooper EA. The Perceptual Science of Augmented Reality. Annu Rev Vis Sci 2023;9:455-478. PMID: 36944311. DOI: 10.1146/annurev-vision-111022-123758.
Abstract
Augmented reality (AR) systems aim to alter our view of the world and enable us to see things that are not actually there. The resulting discrepancy between perception and reality can create compelling entertainment and can support innovative approaches to education, guidance, and assistive tools. However, building an AR system that effectively integrates with our natural visual experience is hard. AR systems often suffer from visual limitations and artifacts, and addressing these flaws requires basic knowledge of perception. At the same time, AR system development can serve as a catalyst that drives innovative new research in perceptual science. This review describes recent perceptual research pertinent to and driven by modern AR systems, with the goal of highlighting thought-provoking areas of inquiry and open questions.
Affiliation(s)
- Emily A Cooper
- Herbert Wertheim School of Optometry & Vision Science, Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA
2. Wang Z, Tu K, Lv G, Feng Q, Wang A, Ming H. Cross talk-free retinal projection display based on a holographic complementary viewpoint array. Opt Lett 2023;48:2437-2440. PMID: 37126292. DOI: 10.1364/ol.485259.
Abstract
In near-eye displays (NEDs), the retinal projection display (RPD) is a promising technology for alleviating the vergence-accommodation conflict (VAC) because of its always-in-focus property. Viewpoint replication is widely used to enlarge the limited eyebox. However, a mismatch between the viewpoint interval and the eye pupil diameter causes inter-viewpoint cross talk when multiple viewpoints enter the pupil simultaneously. In this Letter, a holographic complementary viewpoint method is proposed to solve this cross talk problem. Instead of avoiding the simultaneous observation of multiple viewpoint images, the method designs multiple complementary viewpoints that jointly project a complete image on the retina without cross talk. To do this, the target image is segmented into multiple sub-images, each multiplied by a corresponding partial spherical phase so that it converges to a specific complementary viewpoint. A group of complementary viewpoints enters the eye pupil simultaneously, and each viewpoint projects its sub-image onto a specific area of the retina; the sub-images splice together into a complete image. All of the complementary viewpoints are duplicated into an interlaced two-dimensional array to extend the eyebox in both the horizontal and vertical directions. Optical experiments verify that the proposed method presents smooth transitions between viewpoints, avoiding both inter-viewpoint cross talk and blank-image issues.
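The segment-then-phase step described in the abstract can be sketched numerically. The following is an illustrative sketch, not the authors' implementation: the image is split into column segments, and each segment is multiplied by a converging (quadratic) phase whose center is laterally offset, so each segment is directed toward its own viewpoint. All parameters (wavelength, pixel pitch, focal distance, viewpoint spacing) are hypothetical.

```python
import numpy as np

def complementary_viewpoint_hologram(image, n_seg=2, wavelength=532e-9,
                                     pitch=8e-6, focal=0.1, vp_spacing=2e-3):
    """Toy complex hologram: each image segment carries a partial spherical
    phase converging to a laterally offset complementary viewpoint."""
    H, W = image.shape
    y = (np.arange(H) - H / 2) * pitch
    x = (np.arange(W) - W / 2) * pitch
    X, Y = np.meshgrid(x, y)
    holo = np.zeros((H, W), dtype=complex)
    seg_w = W // n_seg
    for k in range(n_seg):
        # mask selecting the k-th vertical strip of the target image
        mask = np.zeros((H, W))
        mask[:, k * seg_w:(k + 1) * seg_w] = 1.0
        # lateral position of the k-th complementary viewpoint
        dx = (k - (n_seg - 1) / 2) * vp_spacing
        # quadratic (paraxial spherical) phase converging to (dx, 0) at `focal`
        phase = -np.pi / (wavelength * focal) * ((X - dx) ** 2 + Y ** 2)
        holo += mask * image * np.exp(1j * phase)
    return holo
```

Because the segment masks partition the image plane, the amplitude of the combined hologram equals the target image everywhere; only the carrier phase differs between segments, which is what steers each sub-image to its own viewpoint.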
3. Syed TA, Siddiqui MS, Abdullah HB, Jan S, Namoun A, Alzahrani A, Nadeem A, Alkhodre AB. In-Depth Review of Augmented Reality: Tracking Technologies, Development Tools, AR Displays, Collaborative AR, and Security Concerns. Sensors (Basel) 2022;23:146. PMID: 36616745. PMCID: PMC9824627. DOI: 10.3390/s23010146.
Abstract
Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines several immersive technologies that together make it a workable, adaptive solution across many domains. These include tracking, which maintains a point of reference so that virtual objects remain registered in the real scene; display technologies, which merge the virtual and real worlds in the user's view; and authoring tools, which provide platforms for developing AR applications through access to low-level libraries that in turn interface with tracking sensors, cameras, and other hardware. In addition, advances in distributed computing and collaborative augmented reality, in which multiple participants interact in a shared AR setting, require stable solutions. The authors survey many solutions in these areas and present a comprehensive review to aid research and business transformation. In the course of this study, we identified a lack of security solutions in several areas of collaborative AR (CAR), specifically in distributed trust management. We therefore also propose a trusted CAR architecture, illustrated with a tourism use case, that can serve as a model for researchers interested in securing AR-based remote communication sessions.
Affiliation(s)
- Toqeer Ali Syed
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Muhammad Shoaib Siddiqui
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Hurria Binte Abdullah
- School of Social Sciences and Humanities, National University of Science and Technology (NUST), Islamabad 44000, Pakistan
- Salman Jan
- Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Kuala Lumpur 50250, Malaysia
- Department of Computer Science, Bacha Khan University Charsadda, Charsadda 24420, Pakistan
- Abdallah Namoun
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ali Alzahrani
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Adnan Nadeem
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ahmad B. Alkhodre
- Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
4. Xiong J, Hsiang EL, He Z, Zhan T, Wu ST. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci Appl 2021;10:216. PMID: 34697292. PMCID: PMC8546092. DOI: 10.1038/s41377-021-00658-8.
Abstract
With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interaction. Nonetheless, simultaneously matching the exceptional performance of human vision and keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle obstacles in AR and VR that are difficult to overcome with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets and then describe the operating principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and incident-angle selectivity and the multiplexing ability of volume HOEs; the polarization dependency and active switching of liquid crystal HOEs; the fabrication and properties of micro-LEDs (light-emitting diodes); and the large design freedom of metasurfaces. Afterwards, we discuss how these devices help enhance AR and VR performance, with detailed descriptions and analyses of several state-of-the-art architectures. Finally, we offer a perspective on potential developments and research directions for these photonic devices in future AR and VR displays.
Affiliation(s)
- Jianghao Xiong
- College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
- En-Lin Hsiang
- College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
- Ziqian He
- College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
- Tao Zhan
- College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
- Shin-Tson Wu
- College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
5. Bang K, Jo Y, Chae M, Lee B. Lenslet VR: Thin, Flat and Wide-FOV Virtual Reality Display Using Fresnel Lens and Lenslet Array. IEEE Trans Vis Comput Graph 2021;27:2545-2554. PMID: 33755568. DOI: 10.1109/tvcg.2021.3067758.
Abstract
We propose a new thin and flat virtual reality (VR) display design using a Fresnel lenslet array, a Fresnel lens, and a polarization-based optical folding technique. The proposed optical system has a wide field of view (FOV) of 102° × 102°, a wide eyebox of 8.8 mm, and an ergonomic eye relief of 20 mm. At the same time, only 3.3 mm of physical distance is required between the display panel and the lens, so the integrated VR display can have a compact, sunglasses-like form factor. Moreover, since every lenslet of the lenslet array is designed to operate under the on-axis condition with low aberration, discontinuous pupil-swim distortion between the lenslets is hardly observed. In addition, because all on-axis lenslets can be designed identically, production cost is reduced, and even off-the-shelf Fresnel optics can be used. In this paper, we describe how we design the system parameters and analyze system performance. Finally, we demonstrate two prototypes and experimentally verify that the proposed VR display system achieves the expected performance while retaining a glasses-like form factor.
6. Shi X, Liu J, Zhang Z, Zhao Z, Zhang S. Extending eyebox with tunable viewpoints for see-through near-eye display. Opt Express 2021;29:11613-11626. PMID: 33984938. DOI: 10.1364/oe.421158.
Abstract
The Maxwellian display presents always-focused images to the viewer, alleviating the vergence-accommodation conflict (VAC) in near-eye displays (NEDs). However, the limited eyebox of the typical Maxwellian display restricts its wider application. We propose a Maxwellian see-through NED based on a multiplexed holographic optical element (HOE) and polarization gratings (PGs) that extends the eyebox by viewpoint multiplication. The multiplexed HOE functions as multiple convex lenses to form multiple viewpoints, which are copied to different locations by the PGs. To mitigate the imaging problems that arise when multiple viewpoints, or none, enter the eye pupil, the viewpoints can be tuned by mechanically moving a PG. We implement our method in a proof-of-concept system. Optical experiments confirm that the proposed display provides always-in-focus images within a 12 mm horizontal eyebox, with a 32.7° diagonal field of view (FOV) and a 16.5 mm eye relief. Compared with other techniques for extending the eyebox of Maxwellian displays, the proposed method offers a competitive combination of a large eyebox, adaptability to the eye pupil size, and focus cues over a large depth range.
7. Jo Y, Yoo C, Bang K, Lee B, Lee B. Eye-box extended retinal projection type near-eye display with multiple independent viewpoints [Invited]. Appl Opt 2021;60:A268-A276. PMID: 33690378. DOI: 10.1364/ao.408707.
Abstract
We introduce an approach to expand the eye-box of a retinal-projection-based near-eye display. The retinal projection display has the advantage of providing clear images over a wide depth range; however, its narrow eye-box makes practical use difficult. Here, we propose a method to enhance the eye-box of the retinal projection display by generating multiple independent viewpoints while maintaining a wide depth of field. The method prevents images projected from the multiple viewpoints from overlapping one another on the retina. As a result, our proposed system can provide a continuous image over a wide viewing angle without an eye tracker or image updates. We discuss the optical design for the proposed method and verify its feasibility through simulation and experiment.
8. Cholewiak SA, Başgöze Z, Cakmakci O, Hoffman DM, Cooper EA. A perceptual eyebox for near-eye displays. Opt Express 2020;28:38008-38028. PMID: 33379623. DOI: 10.1364/oe.408404.
Abstract
In near-eye display systems that support three-dimensional (3D) augmented and virtual reality, a central factor in determining the user experience is the size of the eyebox. The eyebox refers to a volume within which the eye receives an acceptable view of the image with respect to a set of criteria and thresholds. The size and location of this volume are primarily driven by optical architecture choices, in which designers trade off a number of constraints, such as field of view, image quality, and product design. It is thus important to quantify clearly how design decisions affect the properties of the eyebox. Recent work has started evaluating the eyebox in 3D based purely on optical criteria. However, such analyses do not incorporate the perceptual criteria that determine visual quality, which are particularly important for binocular 3D systems. To address this limitation, we introduce the framework of a perceptual eyebox: the volume where the eye(s) must be located for the user to experience a visual percept falling within a perceptually defined criterion. We combine optical and perceptual data to characterize an example perceptual eyebox for display visibility in augmented reality. The key contributions of this paper include comparing the perceptual eyebox for monocular and binocular display designs, modeling the effects of user eye separation, and examining the effects of eye rotation on the eyebox volume.
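The binocular point in the abstract can be illustrated with a toy calculation (this is not the paper's model). Suppose each monocular eyebox is a simple box of half-width `mono_half_mm` centered on the design eye position for that side. Moving the head shifts both eyes together, while a mismatch between the user's interpupillary distance (IPD) and the design IPD offsets the two eyes symmetrically by half the mismatch, so the horizontal range over which both eyes meet the criterion shrinks accordingly. All numbers below are hypothetical.

```python
def binocular_eyebox_halfwidth(ipd_user_mm, ipd_design_mm=63.0, mono_half_mm=5.0):
    """Horizontal half-width (mm) of a toy binocular eyebox.

    Each eye must sit within mono_half_mm of its display's design eye
    position. A head shift of x moves both eyes by x; an IPD mismatch
    offsets the left/right eyes by +/- (ipd_user - ipd_design)/2. Both
    per-eye constraints hold only while |x| <= mono_half - |mismatch|/2.
    A non-positive return value means no head position works for both eyes.
    """
    mismatch = abs(ipd_user_mm - ipd_design_mm)
    return mono_half_mm - mismatch / 2.0
```

For example, with these toy numbers, a user at the 63 mm design IPD keeps the full ±5 mm horizontal eyebox, a 67 mm IPD leaves ±3 mm, and a 73 mm IPD leaves no position where both eyes satisfy the criterion, illustrating why binocular perceptual eyeboxes depend on user eye separation.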
9. Yoo C, Chae M, Moon S, Lee B. Retinal projection type lightguide-based near-eye display with switchable viewpoints. Opt Express 2020;28:3116-3135. PMID: 32121986. DOI: 10.1364/oe.383386.
Abstract
We present a retinal-projection-based near-eye display with multiple viewpoints switchable by polarization multiplexing. Active switching of viewpoints is provided by a polarization grating, multiplexed holographic optical elements, and a polarization-dependent eyepiece lens that can generate one of two divided focus groups according to the pupil position. The lightguide-combined optical devices have the potential to enable a wide field of view (FOV) and short eye relief with a compact form factor. Our proposed system can support pupil movement with an extended eyebox and mitigate the image problems caused by duplicated viewpoints. We discuss the optical design of the guiding system and demonstrate that the proof-of-concept system provides all-in-focus images with a 37° FOV and a 16 mm eyebox in the horizontal direction.