1. Qin Z, Cheng Y, Dong J, Qiu Y, Yang W, Yang BR. Real-time computer-generated integral imaging light field displays: revisiting the point retracing rendering method from a signal processing perspective. Opt Express 2023;31:35835-35849. [PMID: 38017747] [DOI: 10.1364/oe.502141]
Abstract
Integral imaging light field displays (InIm-LFDs) can provide realistic 3D images by showing an elemental image array (EIA) under a lens array. However, it remains challenging to computationally generate an EIA in real time on entry-level computing hardware, because the current practice of projecting many viewpoints onto the EIA induces heavy computation. This study discards the viewpoint-based strategy, revisits the early point retracing rendering method, and proposes that InIm-LFDs and regular 2D displays share two similar signal processing phases: sampling and reconstruction. An InIm-LFD is shown to create a finite number of static voxels for signal sampling, and each voxel is invariantly formed by homogeneous pixels for signal reconstruction. We obtain the static voxel-pixel mapping in advance through arbitrarily accurate raytracing and store it as a lookup table (LUT). Our EIA rendering method first resamples the input 3D data with the pre-defined voxels and then assigns every voxel's value to its homogeneous pixels through the LUT. As a result, the proposed method reduces the computational complexity by several orders of magnitude. The experimental rendering speed is as fast as 7 to 10 ms for a full-HD EIA frame on an entry-level laptop. Finally, considering that a voxel may not be perfectly integrated by its homogeneous pixels (the sampling error), the proposed and conventional viewpoint-based methods are analyzed in the Fourier domain. We prove that even with severe sampling errors, the two methods negligibly differ in the output signal's frequency spectrum. We expect the proposed method to break the long-standing tradeoff between rendering speed, accuracy, and system complexity for computer-generated integral imaging.
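A minimal sketch of the LUT idea described in this abstract, assuming a table that has already been built offline (the voxel grid, array names, and the toy LUT below are illustrative assumptions, not the authors' code): each voxel's resampled value is simply scattered to the flat indices of its homogeneous EIA pixels.

```python
import numpy as np

def render_eia(voxel_values, lut, eia_shape):
    """Scatter each voxel's resampled value to its homogeneous EIA pixels.

    voxel_values : (V,) array, input 3D data already resampled onto the
                   pre-defined static voxels.
    lut          : list of V integer arrays; lut[v] holds the flat indices of
                   the EIA pixels that integrate voxel v (precomputed once by
                   raytracing and stored offline).
    eia_shape    : (H, W) shape of the elemental image array.
    """
    eia = np.zeros(eia_shape[0] * eia_shape[1], dtype=voxel_values.dtype)
    for v, pixel_idx in enumerate(lut):
        eia[pixel_idx] = voxel_values[v]  # assignment only, no per-view projection
    return eia.reshape(eia_shape)

# Toy usage with a fabricated LUT: 4 voxels mapped into a 6x6 EIA.
rng = np.random.default_rng(0)
lut = [rng.choice(36, size=9, replace=False) for _ in range(4)]
eia = render_eia(np.array([0.2, 0.5, 0.8, 1.0]), lut, (6, 6))
```

Because the loop only performs assignments through precomputed indices, the cost scales with the number of EIA pixels rather than with the number of rendered viewpoints, which is the complexity reduction the abstract claims.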
2. Qiu Y, Zhao Z, Yang J, Cheng Y, Liu Y, Yang BR, Qin Z. Light field displays with computational vision correction for astigmatism and high-order aberrations with real-time implementation. Opt Express 2023;31:6262-6280. [PMID: 36823887] [DOI: 10.1364/oe.485547]
Abstract
Vision-correcting near-eye displays are needed given the large population with refractive errors. However, varifocal optics cannot effectively address astigmatism (AST) and high-order aberrations (HOAs), and freeform optics offers little prescription flexibility. Thus, a computational solution is desired that corrects AST and HOAs with high prescription flexibility and no increase in volume or hardware complexity. In addition, the computational complexity should support real-time rendering. We propose that the light field display can achieve such computational vision correction by manipulating the sampling rays so that the rays forming a voxel are refocused on the retina. The ray manipulation merely requires updating the elemental image array (EIA), making it a fully computational solution. The correction is first calculated from the eye's wavefront map and then refined by a simulator that performs iterative optimization with a schematic eye model. Using examples of HOAs and AST, we demonstrate that the corrected EIAs keep the sampling rays distributed within ±1 arcmin on the retina. Correspondingly, the synthesized image is recovered to be nearly as clear as with normal vision. Considering computational complexity, we also propose a new voxel-based EIA generation method. All voxel positions and the mapping between voxels and their homogeneous pixels are acquired in advance and stored as a lookup table, yielding an ultra-fast rendering speed of 10 ms per frame with no cost in computing hardware or rendering accuracy. Finally, experimental verification is carried out by introducing HOAs and AST with customized lenses in front of a camera, and significantly recovered images are reported.
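As a rough illustration of the first step mentioned above (building the eye's wavefront map that drives the ray correction), the sketch below evaluates a few low-order ANSI-normalized Zernike terms over a unit pupil. The coefficient names, pupil sampling, and example values are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def wavefront_map(coeffs, n=256):
    """Evaluate an ANSI-normalized Zernike wavefront over a unit pupil.

    coeffs : dict with optional keys 'defocus', 'astig0', 'astig45',
             'coma_x', 'coma_y', 'spherical' (coefficients in micrometres).
    Returns an (n, n) wavefront map, NaN outside the pupil.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    W = (coeffs.get('defocus', 0.0)   * np.sqrt(3) * (2 * rho**2 - 1)
       + coeffs.get('astig0', 0.0)    * np.sqrt(6) * rho**2 * np.cos(2 * theta)
       + coeffs.get('astig45', 0.0)   * np.sqrt(6) * rho**2 * np.sin(2 * theta)
       + coeffs.get('coma_x', 0.0)    * np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta)
       + coeffs.get('coma_y', 0.0)    * np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta)
       + coeffs.get('spherical', 0.0) * np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1))
    W[rho > 1] = np.nan  # restrict to the pupil aperture
    return W

# Hypothetical prescription: astigmatism plus one high-order (spherical) term.
W = wavefront_map({'astig0': 0.5, 'spherical': 0.1})
```

In a full pipeline such a map would feed the per-ray refocusing and the EIA update the abstract describes; only the wavefront evaluation itself is shown here.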
3. Zhao BC, Yang F, Wu F. High-Aperture-Ratio Dual-View Integral Imaging Display. Micromachines 2022;13:2213. [PMID: 36557512] [PMCID: PMC9785181] [DOI: 10.3390/mi13122213]
Abstract
A low aperture ratio is a problem in the conventional dual-view integral imaging (DVII) display that uses a point light source array. A high-aperture-ratio DVII display using a gradient-width point light source array is reported in this work. Elemental images 1 and 2, which are alternately aligned on a liquid crystal panel, are illuminated by the light rays emitted from an assigned point light source. The optical path is improved by optimizing the widths of the point light sources. The aperture ratio of the proposed DVII display was demonstrated to be 1.88 times that of the conventional DVII display. Experiments showed that the vertical viewing range is related to the vertical width of the first-row point light source, whereas the aperture ratio is related to the vertical widths of all point light sources. By optimizing these widths, the aperture ratio is enhanced without loss of viewing range.
Affiliations
- Bai-Chuan Zhao, School of Information Engineering, Chengdu Aeronautic Polytechnic, Chengdu 610218, China
- Fan Yang, Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu 610041, China
- Fei Wu, School of Electronic Engineering, Chengdu Technological University, Chengdu 610073, China
4. Javidi B, Carnicer A, Arai J, Fujii T, Hua H, Liao H, Martínez-Corral M, Pla F, Stern A, Waller L, Wang QH, Wetzstein G, Yamaguchi M, Yamamoto H. Roadmap on 3D integral imaging: sensing, processing, and display. Opt Express 2020;28:32266-32293. [PMID: 33114917] [DOI: 10.1364/oe.402193]
Abstract
This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. It discusses various aspects of integral imaging, including the sensing of 3D scenes, the processing of captured information, and the 3D display and visualization of information. The paper consists of a series of 15 sections in which experts present various aspects of the field, spanning sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents its author's view of the progress, potential, and challenging issues in this field.
5. Li Y, Sang X, Xing S, Guan Y, Yang S, Chen D, Yang L, Yan B. Real-time optical 3D reconstruction based on Monte Carlo integration and recurrent CNNs denoising with the 3D light field display. Opt Express 2019;27:22198-22208. [PMID: 31510515] [DOI: 10.1364/oe.27.022198]
Abstract
A general integral imaging generation method based on path-traced Monte Carlo (MC) integration and recurrent convolutional neural network denoising is presented. According to the optical layer structure of the three-dimensional (3D) light field display, screen pixels are encoded to specific viewpoints, and directional rays are then cast from the viewpoints to the screen pixels to perform the path integral. During integration, advanced illumination is used to generate a high-quality elemental image array (EIA). Recurrent convolutional neural networks are applied as an auxiliary post-processing step on the EIA to eliminate the MC integration noise in the 3D image. 4K (3840 × 2160) resolution, 2 samples per pixel, and ray path tracing are realized in the experiment. Experimental results demonstrate that the structural similarity (SSIM) value and peak signal-to-noise ratio (PSNR) gain between the reconstructed and target 3D images exceed 90% and 10 dB within 10 frames, respectively. In addition, the real-time frame rate exceeds 30 fps, demonstrating high efficiency and quality in optical 3D reconstruction.
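A simplified sketch of the pixel-to-viewpoint encoding and ray setup described above, under a pinhole lens-array model. The lens pitch, pixel size, gap, and the stub radiance estimator are placeholder assumptions; the paper's actual optical parameters, path tracer, and denoising network are not reproduced here.

```python
import numpy as np

PITCH_PX = 16      # pixels per lenslet (assumed)
PIXEL_SIZE = 0.05  # pixel pitch in mm (assumed)
GAP = 3.0          # screen-to-lens-array gap in mm (assumed)

def pixel_ray(ix, iy):
    """Ray for screen pixel (ix, iy): origin at the pixel centre, direction
    through the centre of the lenslet covering it. The sub-lens offset of the
    pixel is exactly the viewpoint index the pixel is encoded to."""
    view_u, view_v = ix % PITCH_PX, iy % PITCH_PX        # viewpoint index
    lens_cx = (ix // PITCH_PX + 0.5) * PITCH_PX * PIXEL_SIZE
    lens_cy = (iy // PITCH_PX + 0.5) * PITCH_PX * PIXEL_SIZE
    origin = np.array([(ix + 0.5) * PIXEL_SIZE, (iy + 0.5) * PIXEL_SIZE, 0.0])
    direction = np.array([lens_cx, lens_cy, GAP]) - origin
    return (view_u, view_v), origin, direction / np.linalg.norm(direction)

def estimate_radiance(origin, direction, spp=2):
    """Stub Monte Carlo estimator: average `spp` path-traced samples.
    A real implementation would trace paths through the scene; a recurrent
    denoising CNN would then clean the low-spp result, as the paper describes."""
    return np.mean([0.0 for _ in range(spp)])  # placeholder samples
```

The low sample count (2 samples per pixel in the experiments) is what makes the subsequent CNN denoising step necessary for acceptable image quality at real-time rates.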
6. Fan Z, Xia Y, Liao H. 3-D spatial floating display using multi-wavelength integral photography. Sci Rep 2018;8:15863. [PMID: 30367129] [PMCID: PMC6203786] [DOI: 10.1038/s41598-018-33730-2]
Abstract
Three-dimensional (3-D) autostereoscopic displays that present dedicated, multiple spatial information under corresponding illumination are important, especially for anti-counterfeiting and entertainment. In this paper, we propose a 3-D spatial floating display using multi-wavelength integral photography (IP). Using a dedicated inkjet printer and a refraction-based IP algorithm, a complex two-dimensional (2-D) elemental image array (EIA) can be printed for both fluorescent and normal 3-D autostereoscopic display. With a micro-convex lens array (MLA) and a medium attached to the EIA, normal 3-D images are reconstructed under visible light, while fluorescent 3-D images are reconstructed under ultraviolet (UV) light. Moreover, to provide comfortable 3-D images with multiple information in space, a feasible 3-D spatial floating display system is also proposed that accounts for the spatial position of the observer and reduces UV exposure. The proposed method takes the display wavelength into consideration to provide spatial multi-information and can be applied to media, entertainment, and related fields. Experimental results verified the feasibility of the proposed method.
Affiliations
- Zhencheng Fan, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yan Xia, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
7. Chen G, Wang H, Liu M, Liao H. Hybrid camera array based calibration for computer-generated integral photography display. J Opt Soc Am A Opt Image Sci Vis 2018;35:1567-1574. [PMID: 30183012] [DOI: 10.1364/josaa.35.001567]
Abstract
Integral photography (IP) is one of the most promising 3D display techniques, capable of achieving full-parallax 3D display without glasses. There is a great need to render correct, high-precision 3D images on an IP display. Achieving a correct 3D display requires calibration to correct optical misalignment and optical aberrations, yet it is challenging to obtain the correct mapping between the microlens array and the matrix display. We propose an IP calibration method for a 3D autostereoscopic integral photography display based on a sparse camera array. Our method distinguishes itself from previous methods by estimating the parameters of a dense correspondence map of the IP display with a relatively flexible setup, high precision, and reasonable time cost. We also propose a workflow that enables our method to handle both visible and invisible microlens arrays with good results. A prototype was fabricated to evaluate the feasibility of the proposed method, and we evaluate the method in terms of geometric accuracy and image quality.
8. New Method of Microimages Generation for 3D Display. Sensors 2018;18:2805. [PMID: 30149639] [PMCID: PMC6164900] [DOI: 10.3390/s18092805]
Abstract
In this paper, we propose a new method for generating microimages that processes real 3D scenes captured with any method that permits the extraction of their depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically, and from it the microimages are computed. The main feature of this method is that the reference plane of the displayed images can be set at will while empty pixels are avoided. Another advantage is that the center point of the displayed images, as well as their scale and field of view, can also be set. To show the final results, a 3D InI display prototype is implemented with a tablet and a microlens array. We demonstrate that this new technique overcomes the drawbacks of previous similar ones and provides more flexibility in setting the characteristics of the final image.
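A compact sketch of the first stage described above: turning a depth map and its color image into a point cloud by pinhole back-projection. The intrinsics below are illustrative assumptions, and the later stages (synthetic capture of elemental images and computation of microimages) are not shown.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth map (metres) and per-pixel color into a point cloud.

    Returns (N, 3) XYZ positions and (N, 3) colors for pixels with valid depth.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    colors = color[valid]
    return points, colors

# Toy usage with assumed intrinsics for a 640x480 capture.
depth = np.full((480, 640), 1.5)
color = np.zeros((480, 640, 3), dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, color, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Working from the point cloud rather than from fixed captured views is what lets the reference plane, center point, scale, and field of view of the displayed images be chosen freely while avoiding empty pixels.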
9. 3D Visualization and Augmented Reality for Orthopedics. Adv Exp Med Biol 2018;1093:193-205. [DOI: 10.1007/978-981-13-1396-7_16]