1. Deng L, Li Z, Gu Y, Wang Q. Integral Imaging Display System Based on Human Visual Distance Perception Model. Sensors (Basel) 2023; 23:9011. PMID: 37960709; PMCID: PMC10650752; DOI: 10.3390/s23219011.
Abstract
In an integral imaging (II) display system, the self-adjustment ability of the human eye can result in blurry observations when viewing 3D targets outside the focal plane within a specific range, which degrades the overall imaging quality of the II system. This research examines the visual characteristics of the human eye and analyzes the path of light from a point source to the eye during light-field capture and reconstruction. An overall depth of field (DOF) model of II is then derived based on the human visual system (HVS). On this basis, an II system based on the human visual distance (HVD) perception model is proposed, and an interactive II display system is constructed. The experimental results confirm the effectiveness of the proposed method: the display system extends the viewing distance range, enhances spatial resolution and provides better stereoscopic display effects. When comparing our method with three other methods, our approach produces better results in optical experiments and objective evaluations: the cumulative probability of blur detection (CPBD) value is 38.73%, the structural similarity index (SSIM) value is 86.56%, and the peak signal-to-noise ratio (PSNR) value is 31.12 dB. These values align with subjective evaluations based on the characteristics of the human visual system.
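Of the objective metrics quoted in this abstract, PSNR is the simplest to reproduce. A minimal sketch for 8-bit images (an illustrative implementation, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant image vs. a version offset by 10 gray levels.
ref = np.full((64, 64), 128, dtype=np.uint8)
noisy = ref + 10  # MSE = 100, so PSNR = 10*log10(255^2/100) ≈ 28.13 dB
print(round(psnr(ref, noisy), 2))
```

CPBD and SSIM require perceptual modeling and local statistics and are typically taken from an image-quality library rather than reimplemented.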
Affiliation(s)
- Lijin Deng
- School of Artificial Intelligence, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun 130022, China; (L.D.); (Z.L.)
- Zhongshan Institute, Changchun University of Science and Technology, No. 16, Huizhan East Road, Zhongshan 528437, China
- Zhihong Li
- School of Artificial Intelligence, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun 130022, China; (L.D.); (Z.L.)
- Yuejianan Gu
- School of Artificial Intelligence, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun 130022, China; (L.D.); (Z.L.)
- Qi Wang
- School of Electronics and Information Engineering, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun 130022, China
2. Kim HW, Cho M, Lee MC. Three-Dimensional (3D) Visualization under Extremely Low Light Conditions Using Kalman Filter. Sensors (Basel) 2023; 23:7571. PMID: 37688025; PMCID: PMC10490719; DOI: 10.3390/s23177571.
Abstract
In recent years, research on three-dimensional (3D) reconstruction in low-illumination environments has been reported. Photon-counting integral imaging is one technique for visualizing 3D images under low light conditions. However, conventional photon-counting integral imaging produces random results because the Poisson random numbers are temporally and spatially independent. Therefore, in this paper, we apply a Kalman filter, which corrects noisy data, to photon-counting integral imaging to improve the visual quality of the results. The purpose of this paper is to reduce randomness and improve the accuracy of 3D reconstructions under extremely low light conditions. Since the proposed method achieves better structural similarity (SSIM), peak signal-to-noise ratio (PSNR) and cross-correlation values than the conventional method, the visualization of low-illumination images becomes more accurate. In addition, the proposed method is expected to accelerate the development of autonomous driving and security camera technologies.
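The idea this abstract describes can be sketched as follows: simulate Poisson photon-count frames of a static scene, then run a per-pixel scalar Kalman filter over the frame sequence to suppress the shot-noise randomness. This is an illustrative reconstruction of the general approach, not the paper's exact formulation; the noise parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth irradiance image (normalized) and a very low mean photon count.
truth = np.tile(np.linspace(0.1, 1.0, 8), (8, 1))
mean_photons = 5.0

# Simulate K photon-counting frames: each pixel is a Poisson draw whose rate
# is proportional to the scene irradiance (the standard photon-counting model).
K = 50
frames = rng.poisson(truth * mean_photons, size=(K,) + truth.shape)

# Scalar Kalman filter per pixel, run over the frame sequence.
# State: estimated photon rate; measurement: each new count frame.
x = frames[0].astype(np.float64)      # initial state estimate
P = np.ones_like(x)                   # initial state covariance
Q = 1e-4                              # process noise (static scene -> small)
for z in frames[1:]:
    P = P + Q                         # predict (identity state transition)
    R = np.maximum(x, 1e-3)           # Poisson noise: variance ~ mean
    Kg = P / (P + R)                  # Kalman gain
    x = x + Kg * (z - x)              # update with the new frame
    P = (1.0 - Kg) * P

estimate = x / mean_photons           # back to normalized irradiance
print(float(np.mean((estimate - truth) ** 2)))  # residual MSE after filtering
```

The filtered estimate should show a much smaller error against the ground truth than any single raw photon-count frame.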
Affiliation(s)
- Hyun-Woo Kim
- Department of Computer Science and Networks, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka-shi, Fukuoka 820-8502, Japan
- Myungjin Cho
- School of ICT, Robotics, and Mechanical Engineering, Hankyong National University, IITC, 327 Chungang-ro, Anseong 17579, Kyonggi-do, Republic of Korea
- Min-Chul Lee
- Department of Computer Science and Networks, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka-shi, Fukuoka 820-8502, Japan
3. Lee E, Cho H, Yoo H. Computational Integral Imaging Reconstruction via Elemental Image Blending without Normalization. Sensors (Basel) 2023; 23:5468. PMID: 37420635; DOI: 10.3390/s23125468.
Abstract
This paper presents a novel computational integral imaging reconstruction (CIIR) method using elemental image blending to eliminate the normalization process in CIIR. Normalization is commonly used in CIIR to address uneven overlapping artifacts. By incorporating elemental image blending, we remove the normalization step in CIIR, leading to decreased memory consumption and computational time compared to those of existing techniques. We conducted a theoretical analysis of the impact of elemental image blending on a CIIR method using windowing techniques, and the results showed that the proposed method is superior to the standard CIIR method in terms of image quality. We also performed computer simulations and optical experiments to evaluate the proposed method. The experimental results showed that the proposed method enhances the image quality over that of the standard CIIR method, while also reducing memory usage and processing time.
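For context, the standard CIIR baseline that this paper improves on shifts each elemental image by a depth-dependent disparity, accumulates the shifted images, and then divides by a per-pixel overlap count; that final division is the normalization step the blending method removes. A minimal sketch of that baseline (an assumption-laden illustration, not the paper's code):

```python
import numpy as np

def ciir_standard(elemental_images, shift):
    """Standard CIIR: shift each elemental image by a depth-dependent number
    of pixels, accumulate, then normalize by the per-pixel overlap count.
    `elemental_images` is a (rows, cols, H, W) array; `shift` is the pixel
    disparity between neighboring elemental images at the reconstruction depth.
    """
    rows, cols, H, W = elemental_images.shape
    out_h = H + shift * (rows - 1)
    out_w = W + shift * (cols - 1)
    acc = np.zeros((out_h, out_w))
    overlap = np.zeros((out_h, out_w))
    for r in range(rows):
        for c in range(cols):
            y, x = r * shift, c * shift
            acc[y:y + H, x:x + W] += elemental_images[r, c]
            overlap[y:y + H, x:x + W] += 1.0   # count contributing images
    return acc / np.maximum(overlap, 1.0)      # the normalization step

# Toy check: identical constant elemental images reconstruct to that constant.
ei = np.ones((3, 3, 4, 4)) * 7.0
plane = ciir_standard(ei, shift=2)
print(plane.max(), plane.min())
```

The `overlap` buffer and the final division are exactly the memory and compute costs that the proposed blending approach avoids.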
Affiliation(s)
- Eunsu Lee
- Department of Computer Science, Sangmyung University, Seoul 110-743, Republic of Korea
- Hyunji Cho
- Department of Computer Science, Sangmyung University, Seoul 110-743, Republic of Korea
- Hoon Yoo
- Department of Intelligent IOT, Sangmyung University, Seoul 110-743, Republic of Korea
4. Kwon KH, Erdenebat MU, Kim N, Khuderchuluun A, Imtiaz SM, Kim MY, Kwon KC. High-Quality 3D Visualization System for Light-Field Microscopy with Fine-Scale Shape Measurement through Accurate 3D Surface Data. Sensors (Basel) 2023; 23:2173. PMID: 36850772; PMCID: PMC9967073; DOI: 10.3390/s23042173.
Abstract
We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach sequentially acquires both a high-resolution two-dimensional (2D) image and light-field images of the specimen. We put forward a matting-Laplacian-based depth estimation algorithm that obtains nearly realistic 3D surface data, allowing depth data relatively close to the actual surface, together with measurement information, to be calculated from the light-field images of specimens. High-reliability area data of the focus measure map and the spatial affinity information of the matting Laplacian are used to estimate nearly realistic depths; this provides a reference value for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data with the high-resolution 2D image. The element image array is rendered from the 3D model through a simplified direction-reversal calculation method driven by user interaction and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and of the 3D display images.
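The abstract relies on a focus measure map without specifying its form. One common choice (an assumption here, not necessarily the authors') is the variance of the image Laplacian, which scores sharp, in-focus regions higher than blurred ones:

```python
import numpy as np

def laplacian(img):
    """4-neighbor discrete Laplacian with replicated borders."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def focus_measure(img):
    """Scalar focus score: variance of the Laplacian (high = sharp)."""
    return float(np.var(laplacian(img.astype(np.float64))))

# A sharp checkerboard scores higher than its locally averaged (blurred) copy.
y, x = np.mgrid[0:32, 0:32]
sharp = ((x // 4 + y // 4) % 2).astype(np.float64)
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4.0
print(focus_measure(sharp) > focus_measure(blurred))
```

Applying such a score over sliding windows of each light-field view yields a per-pixel focus measure map of the kind the pipeline consumes.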
Affiliation(s)
- Ki Hoon Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Munkh-Uchral Erdenebat
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Nam Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Anar Khuderchuluun
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Shariar Md Imtiaz
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Min Young Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Ki-Chul Kwon
- School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
5. Liu Z, Li D, Deng H. Wide-Viewing-Angle Integral Imaging System with Full-Effective-Pixels Elemental Image Array. Micromachines (Basel) 2023; 14:225. PMID: 36677286; PMCID: PMC9860876; DOI: 10.3390/mi14010225.
Abstract
The conventional integral imaging system suffers from a narrow viewing angle. One reason is that only some of the pixels of each elemental image contribute to the viewing angle, while the others cause image flips. In this paper, a wide-viewing-angle integral imaging system with a full-effective-pixels elemental image array (FEP-EIA) was proposed. The correspondence between viewpoints and pixel coordinates within the elemental image array was built up, and the effective pixel blocks and the pixels that lead to flipped images were deduced. A pixel replacement method was then proposed to generate FEP-EIAs adapted to different viewing distances. As a result, the viewing angle of the proposed integral imaging system was effectively extended by replacing the pixels that cause image flips. Experimental results demonstrated that wide viewing angles are available with the proposed integral imaging system regardless of the viewing distance.
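For reference, the conventional viewing angle this work extends is often approximated from only two parameters: the lens pitch p and the gap g between the lens array and the display panel, as theta = 2·arctan(p / 2g). A sketch under that standard model (the numeric parameters are hypothetical):

```python
import math

def viewing_angle_deg(pitch_mm, gap_mm):
    """Approximate full viewing angle (degrees) of a lens-array integral
    imaging display: theta = 2 * arctan(pitch / (2 * gap))."""
    return 2.0 * math.degrees(math.atan(pitch_mm / (2.0 * gap_mm)))

# Hypothetical parameters: 1 mm lens pitch, 3 mm lens-to-panel gap.
theta = viewing_angle_deg(1.0, 3.0)
print(round(theta, 2))  # ≈ 18.92 degrees
```

The formula makes the defect concrete: with a fixed gap, only the pixels within one pitch behind each lens are viewed correctly, and rays through neighboring lenses produce the flipped images the FEP-EIA method repurposes.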
6. Zhao BC, Yang F, Wu F. High-Aperture-Ratio Dual-View Integral Imaging Display. Micromachines (Basel) 2022; 13:2213. PMID: 36557512; PMCID: PMC9785181; DOI: 10.3390/mi13122213.
Abstract
A low aperture ratio is a problem in the conventional dual-view integral imaging (DVII) display that uses a point light source array. A high-aperture-ratio DVII display using a gradient-width point light source array is reported in this work. Elemental images 1 and 2, which are alternately aligned on a liquid crystal panel, are illuminated by the light rays emitted from their assigned point light sources. The optical path is optimized by optimizing the widths of the point light sources. The aperture ratio of the proposed DVII display was demonstrated to be 1.88 times that of the conventional DVII display. Experiments showed that the vertical viewing range is related to the vertical width of the first row of point light sources, whereas the aperture ratio is related to the vertical widths of all the point light sources. By optimizing these widths, the aperture ratio is enhanced without loss of viewing range.
Affiliation(s)
- Bai-Chuan Zhao
- School of Information Engineering, Chengdu Aeronautic Polytechnic, Chengdu 610218, China
- Fan Yang
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu 610041, China
- Fei Wu
- School of Electronic Engineering, Chengdu Technological University, Chengdu 610073, China
7. Lee J, Cho M. Three-Dimensional Integral Imaging with Enhanced Lateral and Longitudinal Resolutions Using Multiple Pickup Positions. Sensors (Basel) 2022; 22:9199. PMID: 36501901; PMCID: PMC9737089; DOI: 10.3390/s22239199.
Abstract
In this paper, we propose an enhancement of three-dimensional (3D) image visualization techniques that uses reconstructions from different pickup planes. In conventional 3D visualization, synthetic aperture integral imaging (SAII) and volumetric computational reconstruction (VCR) can be utilized. However, due to the lack of image information and pixel shifting, it may be difficult to obtain good lateral and longitudinal resolutions of 3D images. Thus, we propose a new elemental image acquisition and computational reconstruction method to improve both the lateral and longitudinal resolutions of 3D objects. To prove the feasibility of our proposed method, we present performance metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and peak-to-sidelobe ratio (PSR), which show that our method improves both resolutions more than the conventional technique.
8. Oiknine Y, August I, Farber V, Gedalin D, Stern A. Compressive Sensing Hyperspectral Imaging by Spectral Multiplexing with Liquid Crystal. J Imaging 2018; 5:3. PMID: 34470182; DOI: 10.3390/jimaging5010003.
Abstract
Hyperspectral (HS) imaging involves the sensing of a scene’s spectral properties, which are often redundant in nature. The redundancy of the information motivates our quest to implement Compressive Sensing (CS) theory for HS imaging. This article provides a review of the Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) camera, its evolution, and its different applications. The CS-MUSI camera was designed within the CS framework and uses a liquid crystal (LC) phase retarder in order to modulate the spectral domain. The outstanding advantage of the CS-MUSI camera is that the entire HS image is captured from an order of magnitude fewer measurements of the sensor array, compared to conventional HS imaging methods.
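The CS premise the review builds on, recovering a sparse signal from far fewer measurements than unknowns, can be illustrated generically with orthogonal matching pursuit. This is a textbook demonstration under idealized assumptions (random Gaussian sensing matrix, exactly sparse "spectrum"), not the CS-MUSI camera's actual sensing model or solver:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                   # 64 unknowns, only 32 measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.5, 2.0]   # 3-sparse "spectrum"
y = A @ x_true                        # compressive measurements
x_hat = omp(A, y, k)
print(float(np.max(np.abs(x_hat - x_true))))
```

The recovered vector matches the sparse original despite the measurement count being half the number of unknowns, which is the order-of-magnitude saving the CS-MUSI design exploits in the spectral domain.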
9. Incardona N, Hong S, Martínez-Corral M, Saavedra G. New Method of Microimages Generation for 3D Display. Sensors (Basel) 2018; 18:2805. PMID: 30149639; DOI: 10.3390/s18092805.
Abstract
In this paper, we propose a new method for the generation of microimages that processes real 3D scenes captured with any method that permits the extraction of their depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically, and from it the microimages are computed. The main feature of this method is that the reference plane of the displayed images can be set at will while empty pixels are avoided. Another advantage is that the center point of the displayed images, as well as their scale and field of view, can be set. To show the final results, a 3D InI display prototype is implemented with a tablet and a microlens array. We demonstrate that this new technique overcomes the drawbacks of previous similar ones and provides more flexibility in setting the characteristics of the final image.
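The first stage the abstract describes, turning a depth map plus color into a point cloud, is commonly done with pinhole back-projection. A minimal sketch (the intrinsics fx, fy, cx, cy are hypothetical values, not parameters from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth map into a colored 3D point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, color.shape[-1])
    valid = points[:, 2] > 0            # drop pixels with no depth
    return points[valid], colors[valid]

# Toy 2x2 depth map with one missing (zero) depth value.
depth = np.array([[1.0, 2.0], [0.0, 4.0]])
color = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, color, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): three valid points, xyz each
```

The resulting colored points are what the method then re-images synthetically into elemental images before computing the microimages.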