1. Li N, Yu X, Gao X, Yan B, Li D, Hong J, Tong Y, Wang Y, Hu Y, Ning C, He J, Ji L, Sang X. Real-time representation and rendering of high-resolution 3D light field based on texture-enhanced optical flow prediction. OPTICS EXPRESS 2024; 32:26478-26491. [PMID: 39538513] [DOI: 10.1364/oe.529378]
Abstract
Three-dimensional (3D) light field displays can provide an immersive visual perception and have attracted widespread attention, especially in 3D light field communications, where they can provide face-to-face communication experiences. However, due to limitations in 3D reconstruction and dense-view rendering efficiency, generating high-quality 3D light field content in real time remains a challenge. Traditional 3D light field capturing and reconstruction methods suffer from high reconstruction complexity and low rendering efficiency. Here, a real-time optical flow representation for the high-resolution light field is proposed. Based on the principle of the 3D light field display, we use optical flow to ray-trace and multiplex sparse-view pixels, synthesizing the 3D light field images during the real-time view interpolation process. In addition, we built a complete capturing-display system to verify the effectiveness of our method. The experimental results show that the proposed method can synthesize 8K 3D light field videos containing 100 views in real time. The PSNR of the virtual views is around 32 dB, the SSIM is over 0.99, and the rendered frame rate is 32 fps. Qualitative experimental results show that this method can be used for high-resolution 3D light field communication.
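The view-interpolation idea above lends itself to a compact sketch. The NumPy code below backward-warps one captured view along a scaled optical flow field to approximate an intermediate viewpoint; `flow_ab` is a hypothetical precomputed dense flow from view A to view B, and this is a minimal illustration of flow-based view synthesis in general, not the authors' ray-traced pixel-multiplexing pipeline.

```python
# Minimal sketch: synthesize an intermediate view between two captured views
# by backward-warping with a scaled optical flow field. Assumes flow scales
# roughly linearly with viewpoint position (small-baseline approximation).
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H, W, C) at float coordinates with bilinear interpolation."""
    h, w = img.shape[:2]
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    fx = (x - x0)[..., None]
    fy = (y - y0)[..., None]
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)

def interpolate_view(view_a, flow_ab, t):
    """Approximate the view at fractional position t in [0, 1] between A and B."""
    h, w = view_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Backward warp: look up each target pixel at its source location in A.
    src_x = np.clip(xs - t * flow_ab[..., 0], 0, w - 1)
    src_y = np.clip(ys - t * flow_ab[..., 1], 0, h - 1)
    return bilinear_sample(view_a, src_x, src_y)

# Toy usage with a zero flow field (returns view A unchanged).
va = np.random.rand(64, 64, 3)
mid = interpolate_view(va, np.zeros((64, 64, 2)), t=0.5)
```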

2. Chang J, Zhao Y, Li T, Wang S, Wei J. Display performance optimization method for light field displays based on a neural network. OPTICS EXPRESS 2024; 32:19265-19278. [PMID: 38859065] [DOI: 10.1364/oe.521245]
Abstract
Crosstalk between adjacent views, lens aberrations, and low spatial resolution in light field displays limit the quality of 3D images. In the present study, we introduce a display performance optimization method for light field displays based on a neural network. The method pre-corrects the encoded image from a global perspective, which means that the encoded image is pre-corrected according to the light field display results. The display performance optimization network consists of two parts: the encoded image pre-correction network and the display network. The former realizes the pre-correction of the original encoded image (OEI), while the latter completes the modeling of the display unit and realizes the generation from the encoded image to the viewpoint images (VIs). The pre-corrected encoded image (PEI) obtained through the pre-correction network can reconstruct 3D images with higher quality. The VIs are accessible through the display network. Experimental results suggest that the proposed method can reduce the graininess of 3D images significantly without increasing the complexity of the system. It is promising for light field displays since it can provide improved 3D display performance.

3. Rabia S, Allain G, Tremblay R, Thibault S. Orthoscopic elemental image synthesis for 3D light field display using lens design software and real-world captured neural radiance field. OPTICS EXPRESS 2024; 32:7800-7815. [PMID: 38439452] [DOI: 10.1364/oe.510579]
Abstract
Generating elemental images (EIs) of complex real-world scenes can be challenging for conventional integral imaging (InIm) capture techniques, since the pseudoscopic effect, characterized by a depth inversion of the reconstructed 3D scene, occurs in this process. To address this problem, we present a new approach that uses a custom neural radiance field (NeRF) model to form real and/or virtual 3D image reconstructions of a complex real-world scene while avoiding distortion and depth inversion. One advantage of using a NeRF is that the 3D information of a complex scene (including transparency and reflection) is stored not in meshes or voxel grids but in a neural network that can be queried to extract the desired data. The Nerfstudio API was used to generate a custom NeRF-related model while avoiding the need for a bulky acquisition system. A general workflow that includes the use of ray-tracing-based lens design software is proposed to facilitate the different processing steps involved in managing NeRF data. Through this workflow, we introduce a new mapping method for extracting the desired data from the custom-trained NeRF-related model, enabling the generation of undistorted orthoscopic EIs. An experimental 3D reconstruction was conducted using an InIm-based 3D light field display (LFD) prototype to validate the effectiveness of the proposed method. A qualitative comparison with the actual real-world scene showed that the reconstructed 3D scene is accurately rendered. The proposed work can be used to manage and render undistorted orthoscopic 3D images from custom-trained NeRF-related models for various InIm applications.

4. Qin Z, Cheng Y, Dong J, Qiu Y, Yang W, Yang BR. Real-time computer-generated integral imaging light field displays: revisiting the point retracing rendering method from a signal processing perspective. OPTICS EXPRESS 2023; 31:35835-35849. [PMID: 38017747] [DOI: 10.1364/oe.502141]
Abstract
Integral imaging light field displays (InIm-LFDs) can provide realistic 3D images by showing an elemental image array (EIA) under a lens array. However, it is always challenging to computationally generate an EIA in real time with entry-level computing hardware because the current practice of projecting many viewpoints to the EIA induces heavy computation. This study discards the viewpoint-based strategy, revisits the early point retracing rendering method, and proposes that InIm-LFDs and regular 2D displays share two similar signal processing phases: sampling and reconstructing. An InIm-LFD is demonstrated to create a finite number of static voxels for signal sampling. Each voxel is invariantly formed by homogeneous pixels for signal reconstructing. We obtain the static voxel-pixel mapping through arbitrarily accurate ray tracing in advance and store it as a lookup table (LUT). Our EIA rendering method first resamples the input 3D data with the pre-defined voxels and then assigns every voxel's value to its homogeneous pixels through the LUT. As a result, the proposed method reduces the computational complexity by several orders of magnitude. The experimental rendering speed is as fast as 7 to 10 ms for a full-HD EIA frame on an entry-level laptop. Finally, considering that a voxel may not be perfectly integrated by its homogeneous pixels (called the sampling error), the proposed and conventional viewpoint-based methods are analyzed in the Fourier domain. We prove that even with severe sampling errors, the two methods differ negligibly in the output signal's frequency spectrum. We expect the proposed method to break the long-standing tradeoff between rendering speed, accuracy, and system complexity for computer-generated integral imaging.
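The LUT idea reduces per-frame rendering to one resampling step and one gather. A minimal sketch under stated assumptions follows: the voxel-pixel mapping is assumed to have been precomputed offline by ray tracing (here it is filled with random indices just so the sketch runs), and `sample_scene` is a hypothetical callable that resamples the input 3D data at the fixed voxel positions.

```python
# Sketch of LUT-based EIA rendering: each EIA pixel stores the index of the
# voxel it helps integrate (-1 where no voxel maps), so rendering reduces to
# resampling the scene at the static voxels plus a single gather.
import numpy as np

H, W, N_VOX = 1080, 1920, 200_000        # full-HD EIA; hypothetical voxel count

# Offline stage (stand-in): in the paper this LUT comes from accurate ray
# tracing of the lens array; random indices here are only to make it run.
lut = np.random.randint(-1, N_VOX, size=(H, W), dtype=np.int32)

def render_eia(sample_scene):
    """sample_scene: callable returning an (N_VOX, 3) array of RGB values,
    i.e. the input 3D data resampled at the pre-defined static voxels."""
    voxel_rgb = sample_scene()                       # signal sampling
    eia = np.zeros((H, W, 3), dtype=voxel_rgb.dtype)
    valid = lut >= 0
    eia[valid] = voxel_rgb[lut[valid]]               # reconstruction via gather
    return eia

eia = render_eia(lambda: np.random.rand(N_VOX, 3))   # toy usage
```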

5. Yang Z, Sang X, Yan B, Chen D, Wang P, Wan H, Chen S, Li J. Real-time light-field generation based on the visual hull for the 3D light-field display with free-viewpoint texture mapping. OPTICS EXPRESS 2023; 31:1125-1140. [PMID: 36785154] [DOI: 10.1364/oe.478853]
Abstract
Real-time dense view synthesis based on three-dimensional (3D) reconstruction of real scenes is still a challenge for 3D light-field display. Reconstructing an entire model and then synthesizing the target views through volume rendering is time-consuming. To address this issue, the Light-field Visual Hull (LVH) is presented with free-viewpoint texture mapping for 3D light-field display, which can directly produce synthetic images from a real-time 3D reconstruction of real scenes captured by forty free-viewpoint RGB cameras. An end-to-end subpixel calculation procedure for the synthetic image is demonstrated, which defines a rendering ray for each subpixel based on light-field image coding. In the ray propagation process, only the essential spatial point of the target model is located for the corresponding subpixel by projecting the frontmost point of the ray to all the free viewpoints, and the color of each subpixel is identified in one pass. A dynamic free-viewpoint texture mapping method is proposed to solve for the correct graphic texture considering the free-viewpoint cameras. To improve efficiency, only the visible 3D positions and textures that contribute to the synthetic image are calculated based on backward ray tracing, rather than computing the entire 3D model and generating all elemental images. In addition, an incremental calibration method that divides the cameras into groups is proposed to ensure accuracy. Experimental results show the validity of our method. All the rendered views are analyzed to justify the texture mapping method, and the PSNR is improved by an average of 11.88 dB. Finally, the LVH can achieve a natural and smooth viewing effect at 4K resolution and a frame rate of 25-30 fps with a large viewing angle.
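The per-subpixel rendering rays mentioned above start from a light-field image coding that assigns each display subpixel to a view. Below is a sketch of one common slanted-lenticular coding (in the spirit of van Berkel's formula); the pitch and slant values are illustrative assumptions, not the authors' parameters.

```python
# Sketch of a typical subpixel-to-view assignment under a slanted lenticular
# lens: the horizontal phase of each RGB subpixel within its lens period
# selects one of n_views views. Values below are illustrative only.
import numpy as np

def subpixel_view_map(h, w, n_views, pitch_px=6.0, slant=1.0 / 3.0):
    """Return an (h, w, 3) integer map: view index for each RGB subpixel."""
    j = np.arange(h)[:, None, None]              # pixel row
    i = np.arange(w)[None, :, None]              # pixel column
    k = np.arange(3)[None, None, :]              # subpixel (R, G, B)
    x_sub = i + k / 3.0                          # horizontal subpixel position
    phase = np.mod(x_sub - j * slant, pitch_px) / pitch_px
    return (phase * n_views).astype(np.int32)

views = subpixel_view_map(1080, 1920, n_views=96)
```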

6. Li X, Yu C, Guo J. Multi-Image Encryption Method via Computational Integral Imaging Algorithm. ENTROPY (BASEL, SWITZERLAND) 2022; 24:e24070996. [PMID: 35885219] [PMCID: PMC9319491] [DOI: 10.3390/e24070996]
Abstract
Under the framework of computational integral imaging, a multi-image encryption scheme based on the DNA-chaos algorithm is proposed. In this scheme, multiple images are merged into one image by a computational integral imaging algorithm, which significantly improves the efficiency of image encryption. Meanwhile, because the computational integral imaging algorithm can merge images at different depth distances, the depth distances of the multiple images can also be used as keys to increase the security of the encryption method. In addition, the high randomness of the chaos algorithm is combined with the scheme to address the outline effect caused by the DNA encryption algorithm. We have experimentally verified the proposed multi-image encryption scheme. The entropy value of the encrypted image is 7.6227, whereas the entropy value of the merged image with two input images is 3.2886, which greatly reduces the relevance of the image. The simulation results also confirm that the proposed encryption scheme has high key security and can protect against various attacks.
Affiliation(s)
- Xiaowu Li: The Second Affiliated Hospital of Shantou University Medical College, Shantou 515000, China
- Chuying Yu: School of Physics and Electronic Engineering, Hanshan Normal University, Chaozhou 521041, China
- Junfeng Guo: School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
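The entropy figures quoted in the abstract are standard Shannon entropies over the 8-bit intensity histogram. A minimal sketch of that measurement, assuming grayscale uint8 input:

```python
# Shannon entropy of an 8-bit image; a well-encrypted image should approach
# the 8-bit maximum of 8 bits.
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy in bits of an 8-bit grayscale image."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins: 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

print(image_entropy(np.random.randint(0, 256, (512, 512), dtype=np.uint8)))
```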

7. Guo X, Sang X, Yan B, Wang H, Ye X, Chen S, Wan H, Li N, Zeng Z, Chen D, Wang P, Xing S. Real-time dense-view imaging for three-dimensional light-field display based on image color calibration and self-supervised view synthesis. OPTICS EXPRESS 2022; 30:22260-22276. [PMID: 36224928] [DOI: 10.1364/oe.461789]
Abstract
Three-dimensional (3D) light-field display has achieved promising improvement in recent years. However, since dense-view images cannot be captured quickly in real-world 3D scenes, real-time 3D light-field display is still challenging to achieve for real scenes, especially at high display resolutions. Here, a real-time dense-view 3D light-field display method is proposed based on image color correction and self-supervised optical flow estimation, and high quality and a high frame rate of the 3D light-field display are realized simultaneously. A sparse camera array is first used to capture sparse-view images in the proposed method. To eliminate the color deviation of the sparse views, the imaging process of the camera is analyzed, and a practical multi-layer perceptron (MLP) network is proposed to perform color calibration. Given sparse views with consistent color, the optical flow can be estimated by a lightweight convolutional neural network (CNN) at high speed, which uses the input image pairs to learn the optical flow in a self-supervised manner. With an inverse warping operation, dense-view images are finally synthesized. Quantitative and qualitative experiments are performed to evaluate the feasibility of the proposed method. Experimental results show that over 60 dense-view images at a resolution of 1024 × 512 can be generated with 11 input views at a frame rate over 20 fps, which is 4× faster than the previous optical flow estimation methods PWC-Net and LiteFlowNet3. Finally, large viewing angles and high-quality 3D light-field display at 3840 × 2160 resolution can be achieved in real time.
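As a rough illustration of the color-calibration step, the sketch below fits a least-squares affine RGB transform from one camera to a reference camera. This is a deliberately simpler stand-in for the paper's MLP, assuming matched pixel samples (e.g. from a color chart or feature matches) are available.

```python
# Per-camera color calibration, simplified: fit an affine color transform
# mapping one camera's RGB to a reference camera's RGB by least squares.
import numpy as np

def fit_color_transform(src_rgb, ref_rgb):
    """src_rgb, ref_rgb: (N, 3) matched samples. Returns a 4x3 affine matrix."""
    A = np.hstack([src_rgb, np.ones((src_rgb.shape[0], 1))])  # bias term
    M, *_ = np.linalg.lstsq(A, ref_rgb, rcond=None)
    return M                                                   # (4, 3)

def apply_color_transform(img, M):
    h, w, _ = img.shape
    flat = np.hstack([img.reshape(-1, 3), np.ones((h * w, 1))])
    return (flat @ M).reshape(h, w, 3).clip(0.0, 1.0)

# Toy usage with synthetic samples in [0, 1]:
src = np.random.rand(1000, 3)
ref = src * 0.9 + 0.05                   # simulated color deviation
M = fit_color_transform(src, ref)
corrected = apply_color_transform(np.random.rand(64, 64, 3), M)
```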

8. Chen Y, Sang X, Xing S, Guan Y, Zhang H, Wang K. Automatic co-design of light field display system based on simulated annealing algorithm and visual simulation. OPTICS EXPRESS 2022; 30:17577-17590. [PMID: 36221577] [DOI: 10.1364/oe.457341]
Abstract
Accurate, fast, and reliable modeling and optimization methods play a crucial role in designing light field display (LFD) systems. Here, an automatic co-design method for LFD systems based on simulated annealing and visual simulation is proposed. The processes of LFD content acquisition and optical reconstruction are modeled and simulated, and an objective function for evaluating the display effect of the LFD system is established according to the simulation results. The simulated annealing method is then used to find the LFD system parameters that maximize the objective function. The validity of the proposed method is confirmed through optical experiments.
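A generic simulated annealing loop of the kind this co-design relies on is sketched below; `simulate_display` is a hypothetical stand-in for the paper's visual simulation, returning the objective score of a candidate LFD parameter vector.

```python
# Generic simulated annealing: perturb parameters at a scale tied to the
# temperature, always accept improvements, and accept worse candidates with
# probability exp(dS / T) so the search can escape local optima.
import math
import random

def anneal(params, simulate_display, t0=1.0, t_min=1e-3, alpha=0.95, steps=50):
    best = cur = list(params)
    best_score = cur_score = simulate_display(cur)
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = [p + random.gauss(0.0, t) for p in cur]
            s = simulate_display(cand)
            if s > cur_score or random.random() < math.exp((s - cur_score) / t):
                cur, cur_score = cand, s
                if s > best_score:
                    best, best_score = cand, s
        t *= alpha                                 # cool down
    return best, best_score

# Toy usage: maximize a smooth 2D objective with optimum at (1, -2).
best, score = anneal([0.0, 0.0], lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2)
```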

9. Shedding Light on Capillary-Based Backscattering Interferometry. SENSORS 2022; 22:s22062157. [PMID: 35336326] [PMCID: PMC8949530] [DOI: 10.3390/s22062157]
Abstract
Capillary-based backscattering interferometry has been used extensively as a tool to measure molecular binding via interferometric refractive index sensing. Previous studies have analysed the fringe patterns created in the backscatter direction. However, polarisation effects, spatial chirps in the fringe pattern, and the practical impact of various approximations and assumptions in existing models are yet to be fully explored. Here, two independent ray tracing approaches are applied, analysed, contrasted, compared to experimental data, and improved upon by introducing explicit polarisation dependence. In doing so, the significance of the inner diameter, outer diameter, and material of the capillary to the resulting fringe pattern and subsequent analysis is elucidated for the first time. The inner diameter is shown to dictate the fringe pattern seen and, therefore, the effectiveness of any dechirping algorithm, demonstrating that current dechirping methods are only valid for a subset of capillary dimensions. Potential improvements are suggested in order to guide further research, increase sensitivity, and promote wider applicability.

10. Guo X, Sang X, Chen D, Wang P, Wang H, Liu X, Li Y, Xing S, Yan B. Real-time optical reconstruction for a three-dimensional light-field display based on path-tracing and CNN super-resolution. OPTICS EXPRESS 2021; 29:37862-37876. [PMID: 34808851] [DOI: 10.1364/oe.441714]
Abstract
Three-dimensional (3D) light-field display plays a vital role in realizing 3D display. However, real-time high-quality 3D light-field display is difficult because super-high-resolution 3D light-field images are hard to generate in real time. Although extensive research has been carried out on fast 3D light-field image generation, no existing study satisfies real-time 3D image generation and display at super-high resolutions such as 7680×4320. To fulfill real-time 3D light-field display with super-high resolution, a two-stage 3D image generation method based on path tracing and image super-resolution (SR) is proposed, which takes less time to render 3D images than previous methods. In the first stage, path tracing is used to generate low-resolution 3D images with sparse views based on Monte Carlo integration. In the second stage, a lite SR algorithm based on a generative adversarial network (GAN) is presented to up-sample the low-resolution 3D images to high-resolution 3D images of dense views with photo-realistic image quality. To implement the second stage efficiently and effectively, the elemental images (EIs) are super-resolved individually for better image quality and geometric accuracy, and a foreground selection scheme based on ray casting is developed to improve the rendering performance. Finally, the output EIs from the CNN are used to recompose the high-resolution 3D images. Experimental results demonstrate that real-time 3D light-field display over 30 fps at 8K resolution can be realized, while the structural similarity (SSIM) can be over 0.90. It is hoped that the proposed method will contribute to the field of real-time 3D light-field display.
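The per-EI super-resolution stage has a simple crop-upscale-recompose structure, sketched below; `upscale` is a hypothetical nearest-neighbor stand-in for the paper's lite GAN generator, and the tile sizes are illustrative.

```python
# Sketch of per-elemental-image super-resolution: crop each EI from the
# low-res EIA, upscale it independently, and recompose the high-res EIA.
import numpy as np

def upscale(ei, factor):                 # stand-in for the GAN generator
    return ei.repeat(factor, axis=0).repeat(factor, axis=1)

def super_resolve_eia(eia_lr, ei_size, factor):
    """eia_lr: (H, W, 3) low-res EIA tiled by ei_size x ei_size EIs."""
    h, w, _ = eia_lr.shape
    out = np.zeros((h * factor, w * factor, 3), dtype=eia_lr.dtype)
    s, S = ei_size, ei_size * factor
    for r in range(h // s):
        for c in range(w // s):
            ei = eia_lr[r * s:(r + 1) * s, c * s:(c + 1) * s]
            out[r * S:(r + 1) * S, c * S:(c + 1) * S] = upscale(ei, factor)
    return out

hr = super_resolve_eia(np.random.rand(512, 960, 3), ei_size=32, factor=4)
```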

11. Liu L, Sang X, Yu X, Gao X, Wang Y, Pei X, Xie X, Fu B, Dong H, Yan B. 3D light-field display with an increased viewing angle and optimized viewpoint distribution based on a ladder compound lenticular lens unit. OPTICS EXPRESS 2021; 29:34035-34050. [PMID: 34809202] [DOI: 10.1364/oe.439805]
Abstract
Three-dimensional (3D) light-field displays (LFDs) suffer from a narrow viewing angle, limited depth range, and low spatial information capacity, which limit their broader application. Because the number of pixels used to construct 3D spatial information is limited, increasing the viewing angle reduces the viewpoint density, which degrades the 3D performance. A solution based on a holographic functional screen (HFS) and a ladder-compound lenticular lens unit (LC-LLU) is proposed to increase the viewing angle while optimizing viewpoint utilization. The LC-LLU and HFS are used to create 160 non-uniformly distributed viewpoints with low crosstalk, which increases the viewpoint density in the middle viewing zone and provides clear monocular depth cues. The corresponding coding method is presented as well. The optimized compound lenticular lens array can balance aberration suppression against displayed quality. The simulations and experiments show that the proposed 3D LFD can present natural 3D images with correct perception and occlusion relationships within a 65° viewing angle.

12. Zhang L, Wang Y, Li DH, Li Q, Zhao W, Li X. Cryptanalysis for a light-field 3D cryptosystem based on M-cGAN. OPTICS LETTERS 2021; 46:4916-4919. [PMID: 34598233] [DOI: 10.1364/ol.436049]
Abstract
Integral imaging, as an excellent light-field three-dimensional (3D) imaging technique, is considered one of the most important technologies for 3D encryption because of its obvious advantages of high robustness, security, and computational feasibility. However, to date, there has been no effective cryptanalysis technique for the light-field 3D cryptosystem. In this Letter, a cryptanalysis algorithm based on deep learning for the light-field 3D cryptosystem is presented. The 3D image can be optically retrieved by the trained network model without the encryption keys. The experimental results verify the feasibility and effectiveness of our proposed method.

13. Khuderchuluun A, Piao YL, Erdenebat MU, Dashdavaa E, Lee MH, Jeon SH, Kim N. Simplified digital content generation based on an inverse-directed propagation algorithm for holographic stereogram printing. APPLIED OPTICS 2021; 60:4235-4244. [PMID: 33983180] [DOI: 10.1364/ao.423205]
Abstract
Holographic stereogram (HS) printing requires extensive memory capacity and a long computation time during perspective acquisition and implementation of the pixel re-arrangement algorithm, and hogels contain only weak depth information about the object. We propose an HS printing system that uses simplified digital content generation based on the inverse-directed propagation (IDP) algorithm for hogel generation. Specifically, the IDP algorithm generates an array of hogels using a simple process that acquires the full three-dimensional (3D) information of the object, including parallax, depth, color, and shading, via a computer-generated integral imaging technique. This technique requires a short computation time and is capable of accounting for occlusion and accommodation effects of the object points via the IDP algorithm. Parallel computing is utilized to produce a high-resolution hologram based on the properties of independent hogels. To demonstrate the proposed approach, optical experiments are conducted in which natural 3D visualizations of real and virtual objects are printed on holographic material. Experimental results demonstrate the simplified computation involved in content generation using the proposed IDP-based HS printing system and the improved image quality of the holograms.

14. Yu X, Li H, Sang X, Su X, Gao X, Liu B, Chen D, Wang Y, Yan B. Aberration correction based on a pre-correction convolutional neural network for light-field displays. OPTICS EXPRESS 2021; 29:11009-11020. [PMID: 33820222] [DOI: 10.1364/oe.419570]
Abstract
Lens aberrations degrade the image quality and limit the viewing angle of light-field displays. In the present study, an approach to aberration reduction based on a pre-correction convolutional neural network (CNN) is demonstrated. The pre-correction CNN is employed to transform the elemental image array (EIA) generated by a virtual camera array into a pre-corrected EIA (PEIA). The pre-correction CNN is built and trained based on the aberrations of the lens array. The resulting PEIA, rather than the EIA, is presented on the liquid crystal display. Via the optical transformation of the lens array, higher-quality 3D images are obtained. The validity of the proposed method is confirmed through simulations and optical experiments. A 70-degree viewing angle light field display with improved image quality is demonstrated.

15. Liu B, Sang X, Yu X, Ye X, Gao X, Liu L, Gao C, Wang P, Xie X, Yan B. Analysis and removal of crosstalk in a time-multiplexed light-field display. OPTICS EXPRESS 2021; 29:7435-7452. [PMID: 33726245] [DOI: 10.1364/oe.418132]
Abstract
Time-multiplexed light-field displays (TMLFDs) can provide natural and realistic three-dimensional (3D) performance with a wide 120° viewing angle, which provides broad potential applications in 3D electronic sand table (EST) technology. However, current TMLFDs suffer from severe crosstalk, which can lead to image aliasing and the distortion of the depth information. In this paper, the mechanisms underlying the emergence of crosstalk in TMLFD systems are identified and analyzed. The results indicate that the specific structure of the slanted lenticular lens array (LLA) and the non-uniformity of the emergent light distribution in the lens elements are the two main factors responsible for the crosstalk. In order to produce clear depth perception and improve the image quality, a novel ladder-type LCD sub-pixel arrangement and a compound lens with three aspheric surfaces are proposed and introduced into a TMLFD to respectively reduce the two types of crosstalk. Crosstalk simulation experiments demonstrate the validity of the proposed methods. Structural similarity (SSIM) simulation experiments and light-field reconstruction experiments also indicate that aliasing is effectively reduced and the depth quality is significantly improved over the entire viewing range. In addition, a tabletop 3D EST based on the proposed TMLFD is presented. The proposed approaches to crosstalk reduction are also compatible with other lenticular lens-based 3D displays.

16. Guan Y, Sang X, Xing S, Chen Y, Li Y, Chen D, Yu X, Yan B. Parallel multi-view polygon rasterization for 3D light field display. OPTICS EXPRESS 2020; 28:34406-34421. [PMID: 33182911] [DOI: 10.1364/oe.408857]
Abstract
Three-dimensional (3D) light field displays require image data sampled from a large number of regularly spaced camera viewpoints to produce a 3D image. Generally, it is inefficient to generate these images sequentially because a large number of rendering operations are repeated for different viewpoints. Current 3D image generation algorithms based on traditional single-viewpoint computer graphics techniques are not well suited to the task of generating images for light field displays. A highly parallel multi-view polygon rasterization (PMR) algorithm for 3D multi-view image generation is presented. Based on the coherence of the triangle rasterization calculation among different viewpoints, the related rasterization algorithms, including primitive setup, plane functions, and barycentric coordinate interpolation in screen space, are derived. To verify the proposed algorithm, a hierarchical soft rendering pipeline with the GPU is designed and implemented. Several groups of images of 3D objects are used to verify the performance of the PMR method, and the correct 3D light field image can be achieved in real time.
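The viewpoint coherence that PMR exploits can be illustrated with edge-function barycentric rasterization, where each view only shifts vertex x-coordinates by a per-vertex disparity (the horizontal-parallax case). The sketch below is illustrative only and omits the paper's full GPU pipeline; all geometry values are assumptions.

```python
# Multi-view triangle rasterization sketch: vertex setup and attributes are
# shared across views; each view only offsets vertex x by its disparity.
import numpy as np

def edge(ax, ay, bx, by, px, py):
    """Signed edge function; proportional to twice the triangle area."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def raster_views(verts, disp, n_views, h, w):
    """verts: (3, 2) screen xy; disp: (3,) per-vertex disparity in px/view."""
    imgs = np.zeros((n_views, h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    for v in range(n_views):
        x = verts[:, 0] + disp * (v - n_views / 2.0)   # per-view x shift only
        y = verts[:, 1]
        area = edge(x[0], y[0], x[1], y[1], x[2], y[2])
        if area == 0:
            continue
        w0 = edge(x[1], y[1], x[2], y[2], xs, ys) / area
        w1 = edge(x[2], y[2], x[0], y[0], xs, ys) / area
        w2 = 1.0 - w0 - w1
        inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)
        imgs[v][inside] = 1.0      # or barycentric-interpolated attributes
    return imgs

views = raster_views(np.array([[20.0, 5.0], [90.0, 40.0], [10.0, 70.0]]),
                     np.array([0.5, 1.0, 0.2]), n_views=8, h=80, w=100)
```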

17. Deng H, Li Q, He W, Li X, Ren H, Chen C. 2D/3D mixed frontal projection system based on integral imaging. OPTICS EXPRESS 2020; 28:26385-26394. [PMID: 32906911] [DOI: 10.1364/oe.402468]
Abstract
Two-dimensional (2D)/three-dimensional (3D) convertible or mixed display is one of the most important factors for the fast penetration of 3D display into the display market. In this paper, we propose a 2D/3D mixed frontal projection system that mainly contains a liquid crystal micro-lens array (LCMLA) and a quarter-wave retarding film with pinholes (QWRF-P). The LCMLA exhibits a focusing effect or no optical effect depending on the polarization direction of the incident light. The forward incident light passes through the LCMLA without any bending. After passing through the QWRF-P twice, half of the backward light has its polarization direction rotated by 90°, while the other half remains unchanged. Using the designed system, different display modes, including 2D display, 3D display, and 2D/3D mixed display, can be realized. The unique feature of the proposed 2D/3D mixed frontal projection system is that it can switch display modes by simply changing the image sources, without the need for any active optical devices. Moreover, the proposed system is compact, simple, and space-efficient, which makes it suitable for application in glasses-free 3D cinema and home 3D theatre.

18. Ren H, Xing Y, Zhang HL, Li Q, Wang L, Deng H, Wang QH. 2D/3D mixed display based on integral imaging and a switchable diffuser element. APPLIED OPTICS 2019; 58:G276-G281. [PMID: 31873510] [DOI: 10.1364/ao.58.00g276]
Abstract
In this paper, we present a 2D/3D mixed system with high image quality based on integral imaging and a switchable diffuser element. The proposed system comprises a liquid crystal display screen, a lens array, a switchable diffuser element, and a projector. The switchable diffuser element can be controlled to present 2D/3D mixed images, or 2D and 3D images independently, and can reduce the moiré fringes and black grid. In addition to the improved display quality, the proposed system has the advantages of a simple structure and low cost, which contribute to its portability and practicality.

19. Zhang W, Sang X, Gao X, Yu X, Gao C, Yan B, Yu C. A flipping-free 3D integral imaging display using a twice-imaging lens array. OPTICS EXPRESS 2019; 27:32810-32822. [PMID: 31684486] [DOI: 10.1364/oe.27.032810]
Abstract
Integral imaging is a promising 3D visualization technique for reconstructing 3D medical scenes to enhance medical analysis and diagnosis. However, the use of lens arrays inevitably introduces flipped images beyond the field of view, which cannot reproduce the correct parallax relation. To avoid the flipping effect in optical reconstruction, a twice-imaging lens array based integral display is presented. The proposed lens arrangement, which consists of a light-controlling lens array, a field lens array, and an imaging lens array, allows the light rays from each elemental image to pass only through its corresponding lens unit. The lens arrangement is optimized with a geometrical optics method, and the proposed display system is experimentally demonstrated. A full-parallax 3D medical scene showing continuous viewpoint information without flipping is reconstructed in a 45° field of view.

20. Guan Y, Sang X, Xing S, Li Y, Chen Y, Chen D, Yang L, Yan B. Backward ray tracing based high-speed visual simulation for light field display and experimental verification. OPTICS EXPRESS 2019; 27:29309-29318. [PMID: 31684667] [DOI: 10.1364/oe.27.029309]
Abstract
Existing simulation methods are not capable of directly producing the three-dimensional (3D) display result of a light field display (LFD), which is important for design and optimization. Here, a high-speed visual simulation method to calculate the 3D image light field distribution is presented. Based on the backward ray tracing (BRT) technique, the geometric and optical models of the LFD are constructed. The displayed images are obtained, and the field of view (FOV) and depth of field (DOF) can be estimated, consistent with theoretical and experimental results. The simulation time is 1 s when the number of sampling rays is 3840×2160×100, and the computational speed of the method is at least 1000 times faster than that of a traditional physics-based renderer. A prototype was fabricated to evaluate the feasibility of the proposed method. From the results, our simulation method shows good potential for predicting the displayed image of the LFD for various positions of the observer's eye with sufficient calculation speed.
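A heavily simplified flavor of such backward-ray-tracing simulation is sketched below in one dimension, modeling each lens element as a pinhole at its optical center: for a given eye position, a ray is traced through each lens center to the EIA plane, and the pixel it hits is gathered. All geometry values are illustrative assumptions.

```python
# 1D pinhole-model sketch of backward ray tracing for LFD visual simulation.
import numpy as np

def simulate_perceived_row(eia_row, eye_x, eye_z, lens_pitch, gap, n_lens, ppl):
    """eia_row has n_lens * ppl pixels behind n_lens lenses; the eye sits at
    (eye_x, eye_z), the lens plane at z = 0, the EIA plane at z = -gap.
    Returns the pixel value seen through each lens."""
    seen = np.zeros(n_lens)
    for l in range(n_lens):
        cx = (l + 0.5) * lens_pitch                # lens optical center
        # Extend the eye->pinhole ray to the EIA plane by similar triangles.
        x_eia = cx + (cx - eye_x) * gap / eye_z
        px = int((x_eia - l * lens_pitch) / lens_pitch * ppl)  # local pixel
        if 0 <= px < ppl:
            seen[l] = eia_row[l * ppl + px]
    return seen

row = simulate_perceived_row(np.random.rand(100 * 16), eye_x=50.0,
                             eye_z=600.0, lens_pitch=1.0, gap=2.0,
                             n_lens=100, ppl=16)
```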

21. Li Y, Sang X, Xing S, Guan Y, Yang S, Chen D, Yang L, Yan B. Real-time optical 3D reconstruction based on Monte Carlo integration and recurrent CNNs denoising with the 3D light field display. OPTICS EXPRESS 2019; 27:22198-22208. [PMID: 31510515] [DOI: 10.1364/oe.27.022198]
Abstract
A general integral imaging generation method based on path-traced Monte Carlo (MC) integration and recurrent convolutional neural network denoising is presented. According to the optical layer structure of the three-dimensional (3D) light field display, screen pixels are encoded to specific viewpoints, and directional rays are then cast from the viewpoints to the screen pixels to perform the path integral. In the integration process, advanced illumination is used for high-quality elemental image array (EIA) generation. Recurrent convolutional neural networks are implemented as auxiliary post-processing for the EIA to eliminate the noise introduced into the 3D image by MC integration. 4K (3840 × 2160) resolution, 2 samples per pixel, and the ray path tracing method are realized in the experiment. Experimental results demonstrate that the structural similarity (SSIM) value and the peak signal-to-noise ratio (PSNR) gain of the reconstructed 3D image relative to the target 3D image exceed 90% and 10 dB within 10 frames, respectively. Besides, the real-time frame rate is more than 30 fps, demonstrating the high efficiency and quality of the optical 3D reconstruction.
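The quality figures reported here are standard full-reference metrics; a minimal PSNR sketch for images in [0, 1] follows (SSIM requires windowed statistics and is omitted):

```python
# Peak signal-to-noise ratio between a reference and a test image.
import numpy as np

def psnr(ref, test, peak=1.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.random.rand(256, 256, 3)
print(psnr(a, np.clip(a + np.random.normal(0, 0.01, a.shape), 0, 1)))
```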

22. Yu X, Sang X, Gao X, Chen D, Liu B, Liu L, Gao C, Wang P. Dynamic three-dimensional light-field display with large viewing angle based on compound lenticular lens array and multi-projectors. OPTICS EXPRESS 2019; 27:16024-16031. [PMID: 31163789] [DOI: 10.1364/oe.27.016024]
Abstract
High-resolution real-time terrain rendering, which is widely used in electronic maps, has been a hot topic in computer graphics for many years. However, a traditional two-dimensional display cannot provide the occlusion relationships between buildings, which restricts the observer's judgment of spatial accuracy. With three projectors, a compound lenticular lens array, and a holographic functional screen, a dynamic three-dimensional (3D) light-field display with a 90° viewing angle is demonstrated. The three projectors provide views for the right 30 degrees, center 30 degrees, and left 30 degrees, respectively. The holographic functional screen recomposes the light distribution, and the compound lenticular lens array is optimized to balance the aberrations and improve the display quality. In our experiment, the 3D light-field image with 96 perspectives provides correct geometric occlusion and smooth parallax in the viewing range. By rendering 3D images and synchronizing the projectors, a dynamic light field display is obtained.

23. Ren H, Wang QH, Xing Y, Zhao M, Luo L, Deng H. Super-multiview integral imaging scheme based on sparse camera array and CNN super-resolution. APPLIED OPTICS 2019; 58:A190-A196. [PMID: 30873977] [DOI: 10.1364/ao.58.00a190]
Abstract
In this paper, we propose a scheme based on a sparse camera array and convolutional neural network super-resolution for super-multiview integral imaging. In particular, the proposed scheme handles not only virtual-world three-dimensional (3D) scenes, with high performance and efficiency, but also real-world 3D scenes, with higher availability than traditional methods. In the proposed scheme, we first adopt the sparse camera array strategy to capture sparse viewpoint images and use these images to synthesize a low-resolution elemental image array; a convolutional neural network super-resolution scheme is then used to restore the high-resolution elemental image array from the low-resolution one for super-multiview integral imaging display. Experimental results verify the feasibility of the proposed scheme.

24. Wei J, Wang S, Zhao Y, Piao M. Synthetic aperture integral imaging using edge depth maps of unstructured monocular video. OPTICS EXPRESS 2018; 26:34894-34908. [PMID: 30650906] [DOI: 10.1364/oe.26.034894]
Abstract
Synthetic aperture integral imaging (SAII) using monocular video with an arbitrary camera trajectory enables casual acquisition of three-dimensional information of scenes at any scale. This paper presents a novel algorithm for computational reconstruction and imaging of the scenes in this SAII system. Since dense geometry recovery and virtual view rendering are required to handle such unstructured input, for lower computational costs and fewer artifacts in both stages, we assume flat surfaces in homogeneous areas and take full advantage of the per-frame edges, which are accurately reconstructed beforehand. A dense depth map of each real view is first estimated by successively generating two complete depth maps (termed the smoothest-surface and densest-surface maps), both respecting local cues, and then merging them via Markov random field global optimization. This way, high-quality perspective images of any virtual camera array can be synthesized simply by back-projecting the obtained closest surfaces into the new views. The pixel-level operations throughout most parts of our pipeline allow high parallelism. Simulation results show that the proposed approach is robust to view-dependent occlusions and the lack of textures in the original frames and can produce recognizable slice images at different depths.

25. Yang S, Sang X, Yu X, Gao X, Liu L, Liu B, Yang L. 162-inch 3D light field display based on aspheric lens array and holographic functional screen. OPTICS EXPRESS 2018; 26:33013-33021. [PMID: 30645459] [DOI: 10.1364/oe.26.033013]
Abstract
Large-scale three-dimensional (3D) display can evoke a great sense of presence and immersion. Nowadays, most large-scale autostereoscopic displays are based on parallax barriers and suffer from view-zone jumping, which also sacrifices much brightness and leads to uneven illumination. With a 3840 × 2160 LED panel, a large-scale horizontal light field display based on an aspheric lens array (ALA) and a holographic functional screen (HFS) is demonstrated, which can display high-quality 3D images. The HFS recomposes the light distribution, while the ALA increases the quantity of perspective information in the horizontal direction by using vertical pixels and suppresses the aberration that is mainly caused by marginal light rays. The 162-inch horizontal light field display can reconstruct 3D images with a depth range of 1.5 m within a viewing angle of 40°. The feasibility of the proposed display method is verified by the experimental results.

26. Zhang W, Sang X, Gao X, Yu X, Yan B, Yu C. Wavefront aberration correction for integral imaging with the pre-filtering function array. OPTICS EXPRESS 2018; 26:27064-27075. [PMID: 30469781] [DOI: 10.1364/oe.26.027064]
Abstract
In integral imaging, the quality of a reconstructed image degrades with increasing viewing angle due to the wavefront aberrations introduced by the lens-array. A wavefront aberration correction method is proposed to enhance the image quality with a pre-filtering function array (PFA). To derive the PFA for an integral imaging display, the wavefront aberration characteristic of the lens-array is analyzed and the intensity distribution of the reconstructed image is calculated based on the wave optics theory. The minimum mean square error method is applied to manipulate the elemental image array (EIA) with a PFA. The validity of the proposed method is confirmed through simulations as well as optical experiments. A 45-degree viewing angle integral imaging display with enhanced image quality is achieved.
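A minimum-mean-square-error pre-filter of this general kind can be sketched as a Wiener-style inverse in the frequency domain, so that the lens blur applied to the pre-filtered elemental image approximates the original; the Gaussian PSF and noise-to-signal ratio below are placeholder assumptions, not the paper's measured aberrations.

```python
# Frequency-domain MMSE (Wiener-style) pre-filter for an elemental image.
import numpy as np

def mmse_prefilter(ei, psf, nsr=1e-2):
    """ei: elemental image (H, W); psf: same-size centered PSF; nsr: noise-to-signal."""
    Hf = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function
    Wf = np.conj(Hf) / (np.abs(Hf) ** 2 + nsr)       # Wiener pre-inverse
    return np.real(np.fft.ifft2(np.fft.fft2(ei) * Wf))

# Toy usage: a small Gaussian blur as a stand-in PSF.
n = 64
x = np.arange(n) - n // 2
g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
psf = g / g.sum()
pei = mmse_prefilter(np.random.rand(n, n), psf)
```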

27. Chen G, Ma C, Fan Z, Cui X, Liao H. Real-Time Lens Based Rendering Algorithm for Super-Multiview Integral Photography without Image Resampling. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:2600-2609. [PMID: 28961116] [DOI: 10.1109/tvcg.2017.2756634]
Abstract
We propose a computer-generated integral photography (CGIP) method that employs a lens based rendering (LBR) algorithm for super-multiview displays to achieve higher frame rates and better image quality without pixel resampling or view interpolation. The algorithm can utilize both fixed and programmable graphics pipelines to accelerate CGIP rendering and inter-perspective antialiasing. Two hardware prototypes were fabricated with two high-resolution liquid crystal displays and micro-lens arrays (MLAs). Qualitative and quantitative experiments were performed to evaluate the feasibility of the proposed algorithm. To the best of our knowledge, the proposed LBR method outperforms state-of-the-art CGIP algorithms in rendering speed and image quality with our super-multiview hardware configurations. A demonstration experiment was also conducted to show the interactivity of a super-multiview display utilizing the proposed algorithm.

28. Wen J, Yan X, Jiang X, Yan Z, Wang Y, Wang J. Nonlinear mapping method for the generation of an elemental image array in a photorealistic pseudoscopic free 3D display. APPLIED OPTICS 2018; 57:6375-6382. [PMID: 30117866] [DOI: 10.1364/ao.57.006375]
Abstract
Limited availability of elemental image array resources may be the most severe bottleneck for the promotion and application of integral-imaging-based 3D display. We propose a nonlinear mapping method for the generation of an elemental image array to obtain a photorealistic, pseudoscopic-free 3D display based on the parallel light field reconstruction nature of the integral imaging system. All the light rays emitted from the display panel are classified into corresponding parallel light fields according to their directions, and all the parallel light fields are captured as orthographic projections of the scene before synthesizing all the projections into the final elemental image array with the nonlinear mapping method. Preliminary optical experiments as well as ray-optical analysis are conducted to prove the feasibility and validity of the proposed method. The proposed method can exploit most current 3D platforms and is an effective and efficient way to generate an elemental image array.

29. Sang X, Gao X, Yu X, Xing S, Li Y, Wu Y. Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing. OPTICS EXPRESS 2018; 26:8883-8889. [PMID: 29715849] [DOI: 10.1364/oe.26.008883]
Abstract
Advanced three-dimensional (3D) imaging techniques can acquire high-resolution 3D biomedical and biological data, but available digital display methods show these data in only two dimensions. 3D light-field displays optically reconstruct realistic 3D images by carefully tailoring light fields, and a natural and comfortable 3D sense of real objects or scenes is expected. An interactive floating full-parallax 3D light-field display with all depth cues is demonstrated with 3D biomedical and biological data, achieving high efficiency and high image quality. A compound lens array with two lenses in each lens unit is designed and fabricated to suppress the aberrations and increase the viewing angle. An optimally designed holographic functional screen is used to recompose the light distribution from the lens array. The imaging distortion can be decreased from more than 20% to less than 1.9%. A real-time interactive floating full-parallax 3D light-field image with a clear displayed depth of 30 cm can be perceived with correct geometric occlusion and smooth parallax within a 45° viewing angle, where 9216 viewpoints are used.