1. Yang W, Cheng Y, Zou G, Yang BR, Qin Z. Enhancing the spatial resolution of light-field displays without losing angular resolution by a computational subpixel realignment. OPTICS LETTERS 2024; 49:1-4. [PMID: 38134137] [DOI: 10.1364/ol.504215] [Received: 08/25/2023] [Accepted: 11/13/2023]
Abstract
Low spatial resolution is an urgent problem in integral imaging light-field displays (LFDs). This study proposes a computational method to enhance the spatial resolution without losing angular resolution. The way rays reconstruct voxels through lenslets is changed so that every ray through a lenslet provides merely a subpixel. The three subpixels of a pixel no longer form one voxel but three independent voxels. We further demonstrate that the imperfect integration of subpixels, called the sampling error, can be eliminated at specific image depths, including the central depth plane. By realigning subpixels in this manner with no sampling error, the sampling rate of voxels is three times that of conventional pixel-based LFDs. Moreover, the ray number of every voxel is preserved, leaving the angular resolution unaffected. With unavoidable component alignment errors, resolution gains of 2.52 and 2.0 are verified in simulation and experiment, respectively, by computationally updating the elemental image array. The proposed computational method further reveals that LFDs intrinsically have a higher space-bandwidth product than presumed.
2. Qin Z, Cheng Y, Dong J, Qiu Y, Yang W, Yang BR. Real-time computer-generated integral imaging light field displays: revisiting the point retracing rendering method from a signal processing perspective. OPTICS EXPRESS 2023; 31:35835-35849. [PMID: 38017747] [DOI: 10.1364/oe.502141] [Received: 08/01/2023] [Accepted: 10/02/2023]
Abstract
Integral imaging light field displays (InIm-LFDs) can provide realistic 3D images by showing an elemental image array (EIA) under a lens array. However, it is challenging to computationally generate an EIA in real time with entry-level computing hardware because the current practice of projecting many viewpoints to the EIA induces heavy computation. This study discards the viewpoint-based strategy, revisits the early point retracing rendering method, and proposes that InIm-LFDs and regular 2D displays share two similar signal processing phases: sampling and reconstruction. An InIm-LFD is demonstrated to create a finite number of static voxels for signal sampling. Each voxel is invariantly formed by homogeneous pixels for signal reconstruction. We obtain the static voxel-pixel mapping through arbitrarily accurate raytracing in advance and store it as a lookup table (LUT). Our EIA rendering method first resamples the input 3D data with the pre-defined voxels and then assigns every voxel's value to its homogeneous pixels through the LUT. As a result, the proposed method reduces the computational complexity by several orders of magnitude. The experimental rendering speed is as fast as 7 to 10 ms for a full-HD EIA frame on an entry-level laptop. Finally, considering that a voxel may not be perfectly integrated by its homogeneous pixels, called the sampling error, the proposed and conventional viewpoint-based methods are analyzed in the Fourier domain. We prove that even with severe sampling errors, the two methods differ negligibly in the output signal's frequency spectrum. We expect the proposed method to break the long-standing tradeoff between rendering speed, accuracy, and system complexity for computer-generated integral imaging.
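The two-phase pipeline this abstract describes (an offline voxel-pixel mapping stored as a LUT, then a per-frame value assignment) can be sketched as follows. This is only an illustrative sketch: the array sizes are placeholders and the mapping is random, standing in for the raytraced mapping the paper computes.

```python
import numpy as np

# Offline phase (run once): raytracing would determine, for every voxel, the
# "homogeneous pixels" that reproduce it; here a random placeholder mapping
# of flat pixel indices stands in for that LUT.
rng = np.random.default_rng(0)
n_voxels, rays_per_voxel = 1000, 16
eia_shape = (1080, 1920)                      # full-HD elemental image array
lut = rng.integers(0, eia_shape[0] * eia_shape[1],
                   size=(n_voxels, rays_per_voxel))

def render_eia(voxel_values: np.ndarray) -> np.ndarray:
    """Per-frame phase: given the scene already resampled onto the
    pre-defined voxels, scatter every voxel's value to all of its
    homogeneous pixels through the LUT in one vectorized assignment."""
    eia = np.zeros(eia_shape[0] * eia_shape[1])
    eia[lut] = voxel_values[:, None]          # broadcast voxel -> its pixels
    return eia.reshape(eia_shape)

frame = render_eia(rng.random(n_voxels))
```

The point of the sketch is that the per-frame cost is a single lookup-and-assign pass, independent of how many viewpoints a projection-based renderer would otherwise rasterize.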
3. Zhao CJ, Guo ZD, Deng H, Yang CN, Bai YC. Integral imaging three-dimensional display system with anisotropic backlight for the elimination of voxel aliasing and separation. OPTICS EXPRESS 2023; 31:29132-29144. [PMID: 37710719] [DOI: 10.1364/oe.498147] [Received: 06/15/2023] [Accepted: 07/19/2023]
Abstract
Compared with conventional scattered-backlight systems, an integral imaging (InIm) display system with a collimated backlight can reduce the voxel size, but apparent voxel separation and severe graininess still exist in the reconstructed 3D images. In this paper, an InIm 3D display system with anisotropic backlight control of sub-pixels is proposed to resolve voxel aliasing and voxel separation simultaneously. It consists of an anisotropic backlight unit (ABU), a transmissive liquid crystal panel (LCP), and a lens array. An ABU with specific horizontal and vertical divergence angles was proposed and designed. Within the depth of field, the light rays emitted from sub-pixels are controlled precisely by the ABU to minimize the voxel size and stitch adjacent voxels seamlessly, thus effectively improving the 3D image quality. In the experiment, a prototype of the proposed ABU-type InIm system was developed, and its spatial frequency was nearly twice that of a conventional scattered-backlight InIm system. Additionally, the proposed system eliminated the voxel separation that usually occurs in collimated-backlight InIm systems. As a result, voxels reconstructed by the proposed system were stitched in space without aliasing or separation, greatly enhancing the 3D resolution and image quality.
4. Lee E, Cho H, Yoo H. Computational Integral Imaging Reconstruction via Elemental Image Blending without Normalization. SENSORS (BASEL, SWITZERLAND) 2023; 23:5468. [PMID: 37420635] [DOI: 10.3390/s23125468] [Received: 05/07/2023] [Revised: 05/28/2023] [Accepted: 06/07/2023]
Abstract
This paper presents a novel computational integral imaging reconstruction (CIIR) method using elemental image blending to eliminate the normalization process in CIIR. Normalization is commonly used in CIIR to address uneven overlapping artifacts. By incorporating elemental image blending, we remove the normalization step in CIIR, leading to decreased memory consumption and computational time compared to those of existing techniques. We conducted a theoretical analysis of the impact of elemental image blending on a CIIR method using windowing techniques, and the results showed that the proposed method is superior to the standard CIIR method in terms of image quality. We also performed computer simulations and optical experiments to evaluate the proposed method. The experimental results showed that the proposed method enhances the image quality over that of the standard CIIR method, while also reducing memory usage and processing time.
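A minimal 1-D sketch of the contrast this abstract draws, under assumed sizes and a triangular blending window (the paper's actual windows and geometry may differ): standard CIIR accumulates shifted elemental images and divides by a per-pixel overlap count, while pre-weighting each elemental image with a window whose shifted copies overlap-add to a constant removes that normalization pass.

```python
import numpy as np

width, shift, n_imgs = 16, 8, 4               # hypothetical sizes
plane_len = width + shift * (n_imgs - 1)
imgs = np.ones((n_imgs, width))               # flat test "elemental images"

# Standard CIIR: accumulate shifted copies, then normalize by overlap count.
acc = np.zeros(plane_len)
count = np.zeros(plane_len)
for k in range(n_imgs):
    acc[k * shift : k * shift + width] += imgs[k]
    count[k * shift : k * shift + width] += 1
standard = acc / np.maximum(count, 1)         # the division being eliminated

# Blended CIIR: a triangular window at 50% overlap sums to a constant in the
# interior, so accumulation alone yields a uniformly weighted result with no
# per-pixel division (and no overlap-count map to store).
window = np.bartlett(width)
blended = np.zeros(plane_len)
for k in range(n_imgs):
    blended[k * shift : k * shift + width] += imgs[k] * window
```

For a flat input, `standard` is 1 everywhere, and the interior of `blended` is constant up to a fixed gain, which is the property that lets the normalization step (and its memory traffic) be dropped.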
Affiliation(s)
- Eunsu Lee, Department of Computer Science, Sangmyung University, Seoul 110-743, Republic of Korea
- Hyunji Cho, Department of Computer Science, Sangmyung University, Seoul 110-743, Republic of Korea
- Hoon Yoo, Department of Intelligent IOT, Sangmyung University, Seoul 110-743, Republic of Korea
5. Qiu Y, Zhao Z, Yang J, Cheng Y, Liu Y, Yang BR, Qin Z. Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation. OPTICS EXPRESS 2023; 31:6262-6280. [PMID: 36823887] [DOI: 10.1364/oe.485547] [Received: 01/12/2023] [Accepted: 01/29/2023]
Abstract
Vision-correcting near-eye displays are needed given the large population with refractive errors. However, varifocal optics cannot effectively address astigmatism (AST) and high-order aberrations (HOAs), and freeform optics offers little prescription flexibility. Thus, a computational solution that corrects AST and HOAs with high prescription flexibility and no increase in volume or hardware complexity is desired. In addition, the computational complexity should support real-time rendering. We propose that a light field display can achieve such computational vision correction by manipulating sampling rays so that the rays forming a voxel are re-focused on the retina. The ray manipulation merely requires updating the elemental image array (EIA), making it a fully computational solution. The correction is first calculated based on the eye's wavefront map and then refined by a simulator performing iterative optimization with a schematic eye model. Using examples of HOAs and AST, we demonstrate that corrected EIAs keep sampling rays distributed within ±1 arcmin on the retina. Correspondingly, the synthesized image is recovered to nearly as clear as normal vision. We also propose a new voxel-based EIA generation method that addresses the computational complexity. All voxel positions and the mapping between voxels and their homogeneous pixels are acquired in advance and stored as a lookup table, bringing an ultra-fast rendering speed of 10 ms per frame at no cost in computing hardware or rendering accuracy. Finally, experimental verification is carried out by introducing HOAs and AST with customized lenses in front of a camera. As a result, significantly recovered images are reported.
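The wavefront-map starting point mentioned in this abstract can be illustrated with a toy Zernike wavefront over the pupil. Everything below is an illustrative assumption, not the paper's pipeline: the pupil size, the two Zernike coefficients (defocus and astigmatism), and the use of the numerical wavefront gradient as the per-ray slope correction a display could counteract.

```python
import numpy as np

pupil_radius = 2e-3                       # 2 mm pupil radius (hypothetical)
n = 101
x = np.linspace(-1.0, 1.0, n)             # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
inside = R2 <= 1.0

# Hypothetical Zernike coefficients (meters), unnormalized polynomial forms:
c_defocus, c_astig = 0.5e-6, 0.3e-6
W = c_defocus * (2.0 * R2 - 1.0) + c_astig * (X**2 - Y**2)   # wavefront map
W[~inside] = 0.0                          # zero outside the pupil

# Transverse ray aberration is proportional to the wavefront slope, so the
# numerical gradient gives per-ray correction angles (small-angle, radians).
dy, dx = np.gradient(W, x * pupil_radius, x * pupil_radius)
slope_x, slope_y = dx, dy
```

Since both chosen aberrations are even in the pupil coordinates, the slope correction vanishes at the pupil center and grows toward the margin, which matches the intuition that marginal rays need the largest re-aiming.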
6. Zou G, Wang Z, Liu Y, Li J, Liu X, Liu J, Yang BR, Qin Z. Deep learning-enabled image content-adaptive field sequential color LCDs with mini-LED backlight. OPTICS EXPRESS 2022; 30:21044-21064. [PMID: 36224834] [DOI: 10.1364/oe.459752] [Received: 04/01/2022] [Accepted: 05/13/2022]
Abstract
Using mini-LEDs as the backlight of a field sequential color LCD (FSC-LCD) enables high contrast, a thin volume, and theoretically tripled light efficiency and resolution. However, color breakup (CBU), induced by relative motion between an observer and the display, severely limits the application of FSC-LCDs. Several driving algorithms have been proposed for CBU suppression, but their performance depends on the image content. Moreover, their performance plateaus with an increasing number of image segments, preventing them from taking advantage of the massive segment counts introduced by mini-LEDs. Therefore, this study proposes an image content-adaptive driving algorithm for mini-LED FSC-LCDs. Deep learning-based image classification accurately determines the FSC algorithm with the lowest CBU. In addition, the algorithm is heterogeneous in that image classification is performed independently in each segment, guaranteeing minimized CBU in all segments. We performed objective and subjective validation. Compared with the best current algorithm, the proposed algorithm improves CBU suppression by more than 20% on two evaluation metrics, supported by experiment-based subjective evaluation. Mini-LED FSC-LCDs driven by the proposed algorithm, with their outstanding CBU suppression, can be ideal for display systems requiring high brightness and high resolution, such as head-up displays, virtual reality, and augmented reality displays.
7. Qin Z, Zhang Y, Yang BR. Interaction between sampled rays' defocusing and number on accommodative response in integral imaging near-eye light field displays. OPTICS EXPRESS 2021; 29:7342-7360. [PMID: 33726237] [DOI: 10.1364/oe.417241] [Received: 12/10/2020] [Accepted: 02/17/2021]
Abstract
In an integral imaging near-eye light field display using a microlens array, a point on a reconstructed depth plane (RDP) is reconstructed by sampled rays. Previous studies suggested that the accommodative response may shift from the RDP under two circumstances: (i) the RDP is away from the central depth plane (CDP), introducing defocusing in the sampled rays; (ii) the sampled ray number is too low. However, the sampled rays' defocusing and number may interact, and the interaction's influence on the accommodative response has received little attention. Therefore, this study adopts a proven imaging model providing retinal images to analyze the accommodative response. As a result, when the RDP and the CDP coincide, the accommodative response matches the RDP. When the RDP deviates from the CDP, defocusing is introduced in the sampled rays, causing the accommodative response to shift from the RDP towards the CDP. For example, in a system with a CDP of 4 diopters (D) and 45 sampled rays, when the RDP is at 3, 2, 1, and 0 D, the accommodative response shifts to 3.25, 2.75, 2, and 1.75 D, respectively. With fewer rays, the accommodative response tends to shift further toward the CDP. Eventually, with fewer than five rays, the eye accommodates to the CDP and the 3D display capacity is lost. Moreover, the influence of the ray number differs under different RDPs, and vice versa. An x-y polynomial equation containing three interactive terms is finally provided to capture the interaction between RDP position and ray number. In comparison, in a pinhole-based system with no CDP, the accommodative response always matches the RDP when the sampled ray number is greater than five.
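The example numbers quoted in this abstract (CDP at 4 D, 45 sampled rays) already show the pull toward the CDP; a quick one-variable fit makes it explicit. This is only a sanity check on the reported points at one fixed ray number, not the paper's full x-y polynomial with interaction terms.

```python
import numpy as np

# Reported accommodative responses at CDP = 4 D with 45 sampled rays.
rdp = np.array([3.0, 2.0, 1.0, 0.0])           # reconstructed depth plane (D)
response = np.array([3.25, 2.75, 2.0, 1.75])   # accommodative response (D)

# A slope of 1 would mean the response tracks the RDP exactly; a slope below
# 1 means the response is pulled toward the CDP as the RDP moves away.
slope, intercept = np.polyfit(rdp, response, 1)
print(round(slope, 3))  # -> 0.525
```

With fewer rays the abstract reports an even stronger pull, i.e. this slope would drop further, collapsing to 0 (response pinned at the CDP) below five rays.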
8. Yao C, Cheng D, Wang Y. Matrix optics representation and imaging analysis of a light-field near-eye display. OPTICS EXPRESS 2020; 28:39976-39997. [PMID: 33379535] [DOI: 10.1364/oe.411997] [Received: 10/08/2020] [Accepted: 12/07/2020]
Abstract
Integral-imaging-based (InI-based) light-field near-eye displays (LF-NEDs) are an effective way to relieve the vergence-accommodation conflict (VAC) in virtual reality (VR) and augmented reality (AR) applications. Lenslet arrays are often used as the spatial light modulator (SLM) in such systems. However, the conflict between refocusing on a virtual object point from the light-field (LF) image and focusing on the image plane of the lenslets degrades the viewing experience, so the light field cannot be accurately restored. In this study, we introduce matrix optics and build a generally applicable parameterized model of a lenslet-array-based LF-NED, from which the imaging process is derived and the performance of the system is analyzed. A lenslet-array-based LF-NED optical model is built in LightTools to verify the theoretical model. The simulation results are consistent with the proposed model and the conclusions drawn from it. Thus, the model can serve as a theoretical basis for evaluating the primary performance of an InI-based LF-NED system.
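The matrix-optics formalism this abstract refers to reduces, in the simplest paraxial case, to 2x2 ray-transfer (ABCD) matrices acting on a (height, angle) ray vector. The toy example below (hypothetical focal length and gap, not the paper's parameterized model) shows the textbook property of a lenslet placed one focal length from the panel: all rays from one pixel leave with the same angle.

```python
import numpy as np

def free_space(d):
    """Propagation over distance d: height picks up d * angle."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f: angle picks up -height / f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 3e-3                                  # hypothetical lenslet focal length
system = thin_lens(f) @ free_space(f)     # panel -> gap of length f -> lenslet

# Two rays leaving the same off-axis pixel at different angles:
pixel_height = 1e-4
ray_a = system @ np.array([pixel_height, 0.10])   # (height, angle)
ray_b = system @ np.array([pixel_height, -0.05])
# Both exit with angle -pixel_height / f: the lenslet collimates each pixel
# into a single ray direction, which is how pixel position encodes direction.
```

Chaining further matrices (eyepiece, eye relief, eye optics) is exactly how a parameterized model of the full system is assembled in this formalism.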
9. Xu M, Huang H, Hua H. Analytical model for the perceived retinal image formation of 3D display systems. OPTICS EXPRESS 2020; 28:38029-38048. [PMID: 33379624] [DOI: 10.1364/oe.408585] [Received: 08/27/2020] [Accepted: 11/16/2020]
Abstract
The optical design process of conventional stereoscope-type head-mounted displays for virtual and augmented reality applications typically neglects the inherent aberrations of the eye optics and the refractive errors of a viewer, missing the opportunity to produce personal devices with optimal visual experiences. A few research efforts have been made to simulate the retinal image formation process for emerging 3D display systems, such as light field displays, that require modeling the eye optics to complete the image formation process. However, the existing works are generally specific to one type of display method, unable to provide a generalized framework for comparing different display methods, and often require at least two different software platforms for implementation, which makes it challenging to handle massive data and to compensate for wavefront aberrations induced by the display engine or eye refractive errors. To overcome these limits, we present a generalized analytical model for accurately simulating visual responses such as the retinal PSF, MTF, and image formation of different types of 2D and 3D display systems. This analytical model can accurately simulate the retinal responses when viewing a given display system, accounting for the residual eye aberrations of schematic eye models that match statistical clinical measurements, accommodative changes as required, different eye refractive errors specific to viewers, and various wavefront aberrations inherited from a display engine. We further describe the numerical implementation of this analytical model for simulating the perceived retinal image with different types of HMD systems on a single computational platform. Finally, with a test setup, we numerically demonstrate the application of this analytical model in simulating the perceived retinal image and accommodative response, and in investigating the impact of eye refractive errors on the perceived retinal image, for a multifocal-plane display, an integral-imaging-based light field display, and a computational multilayer light field display, with the stereoscope and natural viewing for comparison.
10. Zhao Z, Liu J, Xu L, Zhang Z, Zhao N. Wave-optics and spatial frequency analyses of integral imaging three-dimensional display systems. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2020; 37:1603-1613. [PMID: 33104607] [DOI: 10.1364/josaa.397255] [Received: 05/08/2020] [Accepted: 08/21/2020]
Abstract
Wave optics is usually considered more rigorous than geometrical optics for analyzing integral imaging (II) systems. However, most previous wave-optics investigations address only a certain subsystem or do not sufficiently consider the finite aperture of the microlens array (MLA). Therefore, a diffraction-limited model of the entire II system, which consists of pickup, image processing, and reconstruction subsystems, is proposed, and the effects of system parameters on spatial resolution are studied in particular. With the help of paraxial scalar diffraction theory, the impulse response function of the entire II system is derived, and the parameter matching condition for optimum resolution and the wave-optics principle are obtained. The modulation transfer function is then obtained and Fourier analysis is performed, which indicates that the features of the MLA and the display play a critical role in the spatial frequency transfer characteristics, greatly affecting the resolution. These studies may be useful for further research and understanding of II systems, especially for the effective enhancement of resolution.
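For context, the standard diffraction-limited MTF of a circular aperture under incoherent illumination is the kind of transfer function such a wave-optics analysis builds on; the lenslet parameters below are hypothetical, chosen only to give a plausible cutoff frequency.

```python
import numpy as np

def diffraction_mtf(f, f_cutoff):
    """MTF(f) = (2/pi) * (acos(s) - s*sqrt(1 - s^2)) with s = f/f_cutoff,
    the incoherent diffraction-limited MTF of a circular aperture."""
    s = np.clip(np.asarray(f, dtype=float) / f_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s**2))

wavelength = 550e-9                      # green light
aperture, focal_len = 1e-3, 3e-3         # hypothetical 1 mm lenslet, f = 3 mm
f_c = aperture / (wavelength * focal_len)  # incoherent cutoff, cycles/m

freqs = np.linspace(0.0, f_c, 5)
mtf = diffraction_mtf(freqs, f_c)
# MTF is 1 at zero frequency and falls monotonically to 0 at the cutoff.
```

Shrinking the lenslet aperture lowers `f_c` directly, which is one concrete way the MLA's finite aperture caps the spatial frequency transfer that the abstract discusses.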
11. Zhao ZF, Liu J, Zhang ZQ, Xu LF. Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance. OPTICS LETTERS 2020; 45:1491-1494. [PMID: 32163999] [DOI: 10.1364/ol.384182] [Received: 11/27/2019] [Accepted: 02/13/2020]
Abstract
A bionic-compound-eye structure (BCES), a substitute for a microlens array, is proposed to enhance the performance of integral imaging (II) 3D display systems. Hexagonal ocelli without gaps or barriers are predesigned to obtain a continuous image, high resolution, and uniform parallax. A curved substrate is designed to enhance the viewing angle. In addition, the ocelli are fused with the substrate to form a relief structure, the BCES. When placed above a normal display, continuous, full-parallax 3D images with 150 µm effective resolution and a 28° horizontal by 22° vertical viewing angle can be achieved, about twice that of normal systems. The weight of the BCES is 31 g, and the thickness of the whole system is 22 mm; thus, the BCES-based II (BCES-II) display is very compact. In addition, the structure can be easily integrated into a cell phone or iPad for a compact display adjustable between quasi-2D and 3D modes.
12. Zhan T, Zou J, Lu M, Chen E, Wu ST. Wavelength-multiplexed multi-focal-plane seethrough near-eye displays. OPTICS EXPRESS 2019; 27:27507-27513. [PMID: 31684516] [DOI: 10.1364/oe.27.027507] [Received: 07/09/2019] [Accepted: 08/25/2019]
Abstract
We demonstrate a multi-focal-plane see-through near-eye display with effective focus cues enabled by wavelength multiplexing. A spectral notch filter is implemented as the wavelength-sensitive depth separation element. The vergence-accommodation conflict can be mitigated with the proposed design without space- or time-multiplexing. Another design, a dual-focus projection module for waveguide-type augmented reality devices using wavelength multiplexing, is also presented.