1. Lee J, Cho M. Three-Dimensional Integral Imaging with Enhanced Lateral and Longitudinal Resolutions Using Multiple Pickup Positions. Sensors (Basel). 2022;22:9199. [PMID: 36501901] [PMCID: PMC9737089] [DOI: 10.3390/s22239199]
Abstract
In this paper, we propose an enhancement of three-dimensional (3D) image visualization techniques based on reconstruction from different pickup planes. Conventional 3D visualization can use synthetic aperture integral imaging (SAII) and volumetric computational reconstruction (VCR). However, owing to limited image information and integer pixel shifts, it can be difficult to obtain good lateral and longitudinal resolution in the reconstructed 3D images. We therefore propose a new elemental image acquisition and computational reconstruction scheme that improves both the lateral and longitudinal resolutions of 3D objects. To demonstrate the feasibility of the proposed method, we report performance metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and peak-to-sidelobe ratio (PSR). The results show that our method improves both the lateral and longitudinal resolutions of 3D objects relative to the conventional technique.
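The volumetric computational reconstruction referred to above is, at its core, a shift-and-sum of elemental images: each elemental image is shifted by a depth-dependent disparity and the overlap is averaged. A minimal sketch (array names and the pitch/gap parameters are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, depth, gap):
    """Shift-and-sum reconstruction of one depth plane.

    elemental : (K, K, H, W) array of elemental images.
    pitch_px  : camera pitch expressed in sensor pixels.
    depth     : reconstruction distance (same units as gap).
    gap       : distance between the lens and the sensor.
    """
    K, _, H, W = elemental.shape
    # Per-camera disparity for this depth plane (larger for nearer planes).
    shift = int(round(pitch_px * gap / depth))
    out = np.zeros((H + shift * (K - 1), W + shift * (K - 1)))
    overlap = np.zeros_like(out)
    for i in range(K):
        for j in range(K):
            y, x = i * shift, j * shift
            out[y:y + H, x:x + W] += elemental[i, j]
            overlap[y:y + H, x:x + W] += 1.0
    # Average where elemental images overlap; objects at `depth` align
    # and stay sharp, objects at other depths are blurred out.
    return out / np.maximum(overlap, 1.0)
```

Scanning `depth` over a range of values produces the volumetric stack whose lateral and longitudinal resolution the paper seeks to improve.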
2. Imtiaz SM, Kwon KC, Hossain MB, Alam MS, Jeon SH, Kim N. Depth Estimation for Integral Imaging Microscopy Using a 3D-2D CNN with a Weighted Median Filter. Sensors (Basel). 2022;22:5288. [PMID: 35890968] [PMCID: PMC9316143] [DOI: 10.3390/s22145288]
Abstract
This study proposes a robust depth map framework based on a convolutional neural network (CNN) that calculates disparities from multi-direction epipolar plane images (EPIs). A combination of three-dimensional (3D) and two-dimensional (2D) CNN-based deep learning networks is used to extract features from each input stream separately. The 3D convolutional blocks are adapted to the disparity of the epipolar images along different directions, and 2D CNNs are employed to minimize data loss. Finally, the multi-stream networks are merged to restore the depth information. The fully convolutional approach is scalable, can handle inputs of any size, and is less prone to overfitting. However, some noise remains along object edges. To overcome this issue, weighted median filtering (WMF) is applied to recover boundary information and improve the accuracy of the results. Experimental results indicate that the suggested deep learning network architecture outperforms other architectures in terms of depth estimation accuracy.
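A weighted median filter of the kind used here for edge cleanup replaces each depth pixel by the weighted median of its neighborhood, with each neighbor's vote weighted by similarity in a guidance image. A small sketch (the Gaussian weighting and window size are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

def weighted_median(depth, guide, radius=2, sigma=0.1):
    """Weighted median filter: each neighbor's depth value votes with a
    weight based on guidance-image similarity; the output pixel is the
    value at which the cumulative weight first reaches half the total."""
    H, W = depth.shape
    out = np.empty_like(depth)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            vals = depth[y0:y1, x0:x1].ravel()
            # Neighbors that look similar in the guide image get more weight,
            # so depth edges snap to image edges instead of being smeared.
            w = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2).ravel()
                       / (2 * sigma ** 2))
            order = np.argsort(vals)
            cdf = np.cumsum(w[order])
            out[y, x] = vals[order][np.searchsorted(cdf, cdf[-1] / 2.0)]
    return out
```

Unlike a plain median filter, the guidance weighting preserves depth discontinuities that coincide with intensity edges.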
Affiliation(s)
- Shariar Md Imtiaz
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Korea
- Ki-Chul Kwon
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Korea
- Md. Biddut Hossain
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Korea
- Md. Shahinur Alam
- VL2 Center, Gallaudet University, 800 Florida Avenue NE, Washington, DC 20002, USA
- Seok-Hee Jeon
- Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon-si 22012, Gyeonggi-do, Korea
- Nam Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Korea
- Correspondence: Tel. +82-043-261-2482
3. Optical See-through 2D/3D Compatible Display Using Variable-Focus Lens and Multiplexed Holographic Optical Elements. Photonics. 2021. [DOI: 10.3390/photonics8080297]
Abstract
An optical see-through two-dimensional (2D)/three-dimensional (3D) compatible display using a variable-focus lens and multiplexed holographic optical elements (MHOE) is presented. It mainly consists of an MHOE, a variable-focus lens, and a projection display device. The customized MHOE, using the angular multiplexing technology of volumetric holographic gratings, records the scattering wavefront and the spherical wavefront array required for 2D/3D compatible display. In particular, we propose a feasible method to switch between the 2D and 3D display modes by using a variable-focus lens in the reconstruction process. The proposed system reduces the system volume and makes the MHOE more efficient to use. Based on the requirements of the 2D and 3D displays, we calculated the liquid pumping volume of the variable-focus lens under the two corresponding diopters.
4. Javidi B, Carnicer A, Arai J, Fujii T, Hua H, Liao H, Martínez-Corral M, Pla F, Stern A, Waller L, Wang QH, Wetzstein G, Yamaguchi M, Yamamoto H. Roadmap on 3D integral imaging: sensing, processing, and display. Optics Express. 2020;28:32266-32293. [PMID: 33114917] [DOI: 10.1364/oe.402193]
Abstract
This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. The article discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization. The paper consists of a series of 15 sections by experts covering sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section presents its author's view of the progress, potential, and challenging issues in this field.
5. Linda Liu F, Kuo G, Antipa N, Yanny K, Waller L. Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser. Optics Express. 2020;28:28969-28986. [PMID: 33114805] [DOI: 10.1364/oe.400876]
Abstract
Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate the artifacts and provide an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance based on diffuser parameters, we, for the first time, establish a theoretical framework and design guidelines, which are verified by numerical simulations, and then build an experimental system that achieves < 3 µm lateral and 4 µm axial resolution over a 1000 × 1000 × 280 µm3 volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.
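The sparsity-constrained inverse problem mentioned above is typically posed as minimizing ||Ax − y||² + λ||x||₁, where A is the system's forward model and x the volume. A minimal iterative shrinkage-thresholding (ISTA) sketch, using a stand-in matrix A rather than the paper's actual diffuser point spread functions:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        # Step size 1/L with L = largest eigenvalue of A^T A guarantees convergence.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)        # gradient of the quadratic data term
        z = x - step * g             # gradient descent step
        # Soft-thresholding: the proximal operator of the L1 penalty,
        # which drives small coefficients exactly to zero (sparsity).
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

In the actual microscope, A would apply the measured per-depth diffuser PSFs (a convolutional operator) instead of a dense matrix, but the update structure is the same.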
6. O’Connor T, Anand A, Andemariam B, Javidi B. Deep learning-based cell identification and disease diagnosis using spatio-temporal cellular dynamics in compact digital holographic microscopy. Biomedical Optics Express. 2020;11:4491-4508. [PMID: 32923059] [PMCID: PMC7449709] [DOI: 10.1364/boe.399020]
Abstract
We demonstrate a successful deep learning strategy for cell identification and disease diagnosis using spatio-temporal cell information recorded by a digital holographic microscopy system. Shearing digital holographic microscopy is employed using a low-cost, compact, field-portable and 3D-printed microscopy system to record video-rate data of live biological cells with nanometer sensitivity in terms of axial membrane fluctuations, then features are extracted from the reconstructed phase profiles of segmented cells at each time instance for classification. The time-varying data of each extracted feature is input into a recurrent bi-directional long short-term memory (Bi-LSTM) network which learns to classify cells based on their time-varying behavior. Our approach is presented for cell identification between the morphologically similar cases of cow and horse red blood cells. Furthermore, the proposed deep learning strategy is demonstrated as having improved performance over conventional machine learning approaches on a clinically relevant dataset of human red blood cells from healthy individuals and those with sickle cell disease. The results are presented at both the cell and patient levels. To the best of our knowledge, this is the first report of deep learning for spatio-temporal-based cell identification and disease detection using a digital holographic microscopy system.
Affiliation(s)
- Timothy O’Connor
- Biomedical Engineering Department, University of Connecticut, Storrs, Connecticut 06269, USA
- Arun Anand
- Applied Physics Department, Faculty of Tech. & Engineering, M.S. University of Baroda, Vadodara 390001, India
- Biree Andemariam
- New England Sickle Cell Institute, University of Connecticut Health, Farmington, Connecticut 06030, USA
- Bahram Javidi
- Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269, USA
7. Kasztelanic R, Pysz D, Stepien R, Buczynski R. Light field camera based on hexagonal array of flat-surface nanostructured GRIN lenses. Optics Express. 2019;27:34985-34996. [PMID: 31878676] [DOI: 10.1364/oe.27.034985]
Abstract
In this paper we present a light field camera system in which a flat-surface hexagonal array of nanostructured gradient index (GRIN) lenses is used as the lens matrix. In our approach we use an array of 469 gradient index microlenses with a diameter of 20 µm and a 100% fill factor. To develop the single lens and the lenslet array we used a modified stack-and-draw technology, in which the variation of refractive index is achieved by using quantized gradient index profiles and rods of different types of glass. We show experimental results of imaging with this type of lens in two kinds of light field cameras. In the first, the microlens array is located in the focal plane of the main lens, and the image is reconstructed using a Fourier slice photography algorithm. This allowed a partial reconstruction of a 3D scene with a spatial and depth resolution of 20 µm and a field of view of 500×500×500 µm. In the second configuration, the microlens array is located between the sample and a microscope objective, allowing superresolution 3D reconstruction of a microscopic image. The scale-invariant feature transform method was used for image reconstruction, yielding a partial 3D reconstruction with a field of view of 150×115×80 µm, a spatial resolution of 2 µm, and a depth resolution of 10 µm.
8. Jaferzadeh K, Hwang SH, Moon I, Javidi B. No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network. Biomedical Optics Express. 2019;10:4276-4289. [PMID: 31453010] [PMCID: PMC6701551] [DOI: 10.1364/boe.10.004276]
Abstract
Digital propagation of an off-axis hologram can provide a quantitative phase-contrast image if the exact distance between the sensor plane (such as a CCD) and the reconstruction plane is known. In this paper, we present a deep-learning convolutional neural network with a regression layer as the top layer to estimate the best reconstruction distance. The experimental results obtained using microsphere beads and red blood cells show that the proposed method can accurately predict the propagation distance from a filtered hologram. The result is compared with a conventional automatic focus-evaluation function. Additionally, our approach can be utilized at the single-cell level, which is useful for cell-to-cell depth measurement and cell adhesion studies.
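The conventional focus search that the authors compare against amounts to numerically propagating the hologram to candidate distances and scoring each reconstruction with a sharpness metric. A sketch using angular-spectrum propagation (the wavelength, pixel pitch, and amplitude-variance metric are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def angular_spectrum(field, dist, wavelen, dx):
    """Propagate a complex field by `dist` with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelen * FX) ** 2 - (wavelen * FY) ** 2
    kz = 2 * np.pi / wavelen * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * dist * kz) * (arg > 0)  # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

def focus_metric(field):
    """Amplitude-variance sharpness score: higher means better focused."""
    return np.abs(field).var()

def best_distance(hologram, candidates, wavelen, dx):
    """Brute-force focus search over candidate reconstruction distances."""
    scores = [focus_metric(angular_spectrum(hologram, -d, wavelen, dx))
              for d in candidates]
    return candidates[int(np.argmax(scores))]
```

The CNN regression in the paper replaces this per-hologram search with a single forward pass, which is the "no-search" speedup.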
Affiliation(s)
- Keyvan Jaferzadeh
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Seung-Hyeon Hwang
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Inkyu Moon (corresponding author)
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Bahram Javidi
- Department of Electrical and Computer Engineering, U-4157, University of Connecticut, Storrs, Connecticut 06269-4157, USA
9. Carnicer A, Bosch S, Javidi B. Mueller matrix polarimetry with 3D integral imaging. Optics Express. 2019;27:11525-11536. [PMID: 31052996] [DOI: 10.1364/oe.27.011525]
Abstract
In this paper, we introduce Mueller matrix imaging concepts for 3D integral imaging polarimetry. The Mueller matrix of a complex scene is measured and estimated with 3D integral imaging. This information can be used to analyze the complex polarimetric behavior of any 3D scene. In particular, we show that the degree of polarization can be estimated at any selected plane for any arbitrary synthetic illumination source, including sources that may be difficult to produce in practice. This tool might open new perspectives for polarimetric analysis in the 3D domain. We also illustrate that 2D polarimetric images are noisier than 3D reconstructed polarimetric integral images. To the best of our knowledge, this is the first report on Mueller matrix polarimetry in 3D integral imaging.
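The degree-of-polarization estimate under a synthetic source follows from applying the measured Mueller matrix to the source's Stokes vector, S′ = M·S. A small sketch of that bookkeeping (the example matrix is the textbook ideal linear polarizer, not data from the paper):

```python
import numpy as np

def degree_of_polarization(S):
    """DoP = sqrt(S1^2 + S2^2 + S3^2) / S0 for a Stokes vector S."""
    return np.sqrt(S[1] ** 2 + S[2] ** 2 + S[3] ** 2) / S[0]

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,     c,     s,  0],
        [c, c * c, c * s,  0],
        [s, c * s, s * s,  0],
        [0,     0,     0,  0],
    ])

# Unpolarized input light becomes fully polarized after the polarizer.
S_in = np.array([1.0, 0.0, 0.0, 0.0])
S_out = linear_polarizer(np.deg2rad(30)) @ S_in
```

In the paper's setting, M is estimated per reconstructed pixel from the integral imaging data, so the same computation yields a DoP map at any chosen depth plane.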
10. Sotoca JM, Latorre-Carmona P, Pla F, Shen X, Komatsu S, Javidi B. Integral imaging techniques for flexible sensing through image-based reprojection. Journal of the Optical Society of America A. 2017;34:1776-1786. [PMID: 29036047] [DOI: 10.1364/josaa.34.001776]
Abstract
In this work, a 3D reconstruction approach for flexible sensing inspired by integral imaging techniques is proposed. The method allows different integral imaging techniques, such as depth map generation or the reconstruction of images on a given 3D plane of the scene, to be applied to images taken with a set of cameras located at unknown and arbitrary positions and orientations. By means of a photo-consistency measure proposed in this work, all-in-focus images can also be generated by projecting the points of the 3D plane onto the sensor planes of the cameras and capturing the associated RGB values. The proposed method obtains consistent results in real scenes containing objects with different surfaces as well as changes in texture and lighting.
11. A survey for the applications of content-based microscopic image analysis in microorganism classification domains. Artificial Intelligence Review. 2017. [DOI: 10.1007/s10462-017-9572-4]
12. Llavador A, Sola-Pikabea J, Saavedra G, Javidi B, Martínez-Corral M. Resolution improvements in integral microscopy with Fourier plane recording. Optics Express. 2016;24:20792-20798. [PMID: 27607682] [DOI: 10.1364/oe.24.020792]
Abstract
Integral microscopes (IMic) have recently been developed to capture the spatial and angular information of 3D microscopic samples in a single exposure. Computational post-processing of this information permits a 3D reconstruction of the sample; by applying conventional algorithms, both depth and view reconstructions are possible. However, the main drawback of the IMic is that the resolution of the reconstructed images is low and axially heterogeneous. In this paper, we propose a new configuration of the IMic in which the lens array is placed not at the image plane but at the pupil (or Fourier) plane of the microscope objective. With this novel system, the spatial resolution is increased by a factor of 1.4 and the depth of field is substantially enlarged. Our experiments show the feasibility of the proposed method.
13. Karimzadeh A. Analysis of the depth of field in hexagonal array integral imaging systems based on modulation transfer function and Strehl ratio. Applied Optics. 2016;55:3045-3050. [PMID: 27139873] [DOI: 10.1364/ao.55.003045]
Abstract
Integral imaging is a technique for displaying three-dimensional images using microlens arrays. In this paper, a method for calculating the root-mean-squared wavefront error and modulation transfer function (MTF) of a defocused integral imaging capture system with hexagonal-aperture microlens arrays is introduced. The maximum allowable depth of field is also obtained through MTF analysis and the Strehl criterion.
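For small aberrations, the Strehl-criterion bound on depth of field can be evaluated from the RMS wavefront error σ via the Maréchal approximation, S ≈ exp(−(2πσ/λ)²). A quick sketch (the λ and the conventional S ≥ 0.8 threshold are standard values, not taken from the paper):

```python
import numpy as np

def strehl_marechal(rms_wfe, wavelen):
    """Marechal approximation to the Strehl ratio for small aberrations."""
    return np.exp(-(2 * np.pi * rms_wfe / wavelen) ** 2)

def max_rms_wfe(wavelen, threshold=0.8):
    """Largest RMS wavefront error that keeps the Strehl ratio above the
    threshold (S >= 0.8 is the conventional diffraction-limited criterion);
    inverting the Marechal formula gives sigma = lam*sqrt(-ln S)/(2*pi)."""
    return wavelen * np.sqrt(-np.log(threshold)) / (2 * np.pi)
```

Given the defocus-to-wavefront-error relation of a particular lenslet geometry, the allowable defocus range, and hence the depth of field, follows directly from `max_rms_wfe`.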
14. Ando T, Horisaki R, Tanida J. Three-dimensional imaging through scattering media using three-dimensionally coded pattern projection. Applied Optics. 2015;54:7316-7322. [PMID: 26368767] [DOI: 10.1364/ao.54.007316]
Abstract
We propose a method for visualizing three-dimensional objects in scattering media. Our method is based on active illumination using three-dimensionally coded patterns and a numerical algorithm employing a sparsity constraint. We experimentally demonstrated the proposed imaging method for test charts located three-dimensionally at different depths in the space behind a translucent sheet.
15. Sommer H, Ihrig A, Ebenau M, Flühs D, Spaan B, Eichmann M. Integral image rendering procedure for aberration correction and size measurement. Applied Optics. 2014;53:3176-3182. [PMID: 24922201] [DOI: 10.1364/ao.53.003176]
Abstract
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
16. Yi F, Lee J, Moon I. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit. Applied Optics. 2014;53:2777-2786. [PMID: 24921860] [DOI: 10.1364/ao.53.002777]
Abstract
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
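The focus/off-focus test described above checks, for each reconstructed 3D point, the statistical variance of its corresponding samples across the elemental images. A serial NumPy sketch of that statistic (the GPU lookup-table parallelism is omitted, and the threshold is an illustrative parameter):

```python
import numpy as np

def focus_mask(samples, threshold):
    """Classify reconstructed pixels as focus (True) or off-focus (False).

    samples : (N, H, W) array holding, for each of the H x W reconstructed
              pixels, its N corresponding samples from the elemental images.

    A pixel on an object surface at the reconstruction depth sees nearly
    identical samples (low variance); an off-focus pixel mixes unrelated
    scene points, so its samples disagree (high variance)."""
    return samples.var(axis=0) <= threshold
```

Masking the reconstructed depth image with `focus_mask` removes the free-space off-focus points before any higher-level classification or tracking.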
17. Luo CG, Xiao X, Martínez-Corral M, Chen CW, Javidi B, Wang QH. Analysis of the depth of field of integral imaging displays based on wave optics. Optics Express. 2013;21:31263-31273. [PMID: 24514700] [DOI: 10.1364/oe.21.031263]
Abstract
In this paper, we analyze the depth of field (DOF) of integral imaging displays based on wave optics. Taking the diffraction effect into account, we analyze the intensity distribution of light through multiple micro-lenses and derive a DOF calculation formula for the integral imaging display system. We study how the DOF values vary with different system parameters, and experimental results are provided to verify the accuracy of the theoretical analysis. The analyses and experimental results presented in this paper could be beneficial for better understanding and designing integral imaging displays.
18. Arai J, Kawakita M, Yamashita T, Sasaki H, Miura M, Hiura H, Okui M, Okano F. Integral three-dimensional television with video system using pixel-offset method. Optics Express. 2013;21:3474-3485. [PMID: 23481805] [DOI: 10.1364/oe.21.003474]
Abstract
Integral three-dimensional (3D) television based on integral imaging requires huge amounts of information. Previously, we constructed an integral 3D television using Super Hi-Vision (SHV) technology, with 7680 pixels horizontally and 4320 pixels vertically. Here we report improved image quality through the development of a video system with the equivalent of 8000 scan lines for use with integral 3D television. We conducted experiments to evaluate the resolution of 3D images with an experimental setup and showed that the pixel-offset method eliminates the aliasing produced by full-resolution SHV video equipment. We confirmed that applying the pixel-offset method to integral 3D television is effective in increasing the resolution of reconstructed images.
Affiliation(s)
- Jun Arai
- Science and Technology Research Laboratories, NHK (Japan Broadcasting Corporation), 1-10-11 Kinuta, Setagaya, Tokyo 1578510, Japan
19. Xiao X, Javidi B, Martinez-Corral M, Stern A. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Applied Optics. 2013;52:546-560. [PMID: 23385893] [DOI: 10.1364/ao.52.000546]
Abstract
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
Affiliation(s)
- Xiao Xiao
- Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269-4157, USA
|
20
|
El Mallahi A, Minetti C, Dubois F. Automated three-dimensional detection and classification of living organisms using digital holographic microscopy with partial spatial coherent source: application to the monitoring of drinking water resources. APPLIED OPTICS 2013; 52:A68-80. [PMID: 23292424 DOI: 10.1364/ao.52.000a68] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
In this paper, we investigate the use of a digital holographic microscope working with partially spatially coherent illumination for the automated detection and classification of living organisms. A robust automatic method based on the computation of propagating matrices is proposed to detect the 3D position of organisms. We apply this procedure to the evaluation of drinking water resources by developing a classification process to identify parasitic Giardia lamblia cysts among two other similar organisms. By selecting textural features from the quantitative optical phase instead of morphological ones, a robust classifier is built, providing a new method for the unambiguous detection of Giardia lamblia cysts, which present a critical contamination risk.
Affiliation(s)
- Ahmed El Mallahi
- Microgravity Research Center, Université Libre de Bruxelles, 50 Avenue F. Roosevelt, CP 165/62, Brussels B-1050, Belgium
|
21
|
Park JH, Jeong KM. Frequency domain depth filtering of integral imaging. OPTICS EXPRESS 2011; 19:18729-18741. [PMID: 21935243 DOI: 10.1364/oe.19.018729] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
A novel technique for depth filtering in integral imaging is proposed. Integral imaging captures the spatio-angular distribution of light rays, which delivers three-dimensional information about the object scene. The proposed method performs the filtering operation in the frequency domain of the captured spatio-angular light ray distribution, achieving depth-selective reconstruction. Grating projection further enhances the depth discrimination performance. The principle is verified experimentally.
Affiliation(s)
- Jae-Hyeung Park
- School of Electrical & Computer Engineering, Chungbuk National University, Chungbuk, Korea
|
22
|
Rajasekharan R, Wilkinson TD, Hands PJW, Dai Q. Nanophotonic three-dimensional microscope. NANO LETTERS 2011; 11:2770-2773. [PMID: 21657239 DOI: 10.1021/nl201056s] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Three-dimensional (3D) optical microscopy based on integral imaging techniques is limited mainly by diffraction effects and by the pitch of the microlens array used to sample the specimen. We integrate nanotechnology into the integral imaging technique and demonstrate a nanophotonic 3D microscope, in which a nanophotonic lens array is used to finely sample the specimen. The resolution limitation due to diffraction is reduced by capturing images before diffraction effects predominate, thereby overcoming the bottleneck to achieving high resolution in an integral imaging 3D microscope.
Collapse
Affiliation(s)
- Ranjith Rajasekharan
- Department of Engineering, Centre of Molecular Materials for Photonics and Electronics, University of Cambridge, 9 J.J. Thomson Avenue, Cambridge CB3 0FA, UK
23
Yang Yu B, Elbuken C, Ren CL, Huissoon JP. Image processing and classification algorithm for yeast cell morphology in a microfluidic chip. JOURNAL OF BIOMEDICAL OPTICS 2011; 16:066008. [PMID: 21721809] [DOI: 10.1117/1.3589100]
Abstract
The study of yeast cell morphology requires consistent identification of cell cycle phases based on cell bud size. A computer-based image processing algorithm was designed to automatically classify microscopic images of yeast cells in a microfluidic channel environment. The images were enhanced to reduce background noise, and a robust segmentation algorithm was developed to extract geometrical features including compactness, axis ratio, and bud size. The features were then used for classification, and the accuracy of various machine-learning classifiers was compared. The linear support vector machine, distance-based classification, and the k-nearest-neighbor algorithm were the classifiers used in this experiment. The performance of the system under various illumination and focusing conditions was also tested. The results suggest it is possible to automatically classify yeast cells based on their morphological characteristics even with noisy and low-contrast images.
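The pipeline the abstract describes, from a binary segmentation mask to moment-based shape features to a nearest-neighbor vote, can be sketched as below. This is a simplified stand-in, assuming a moment-based equivalent-ellipse definition of axis ratio and omitting the paper's bud-size feature and SVM comparison; `shape_features` and `knn_classify` are illustrative names.

```python
import numpy as np

def shape_features(mask):
    """Compactness and major/minor axis ratio from a binary cell mask,
    computed via image moments (equivalent-ellipse definition)."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cy, cx = ys.mean(), xs.mean()
    # Second central moments of the pixel cloud.
    myy = ((ys - cy) ** 2).mean()
    mxx = ((xs - cx) ** 2).mean()
    mxy = ((ys - cy) * (xs - cx)).mean()
    common = np.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    major = mxx + myy + common
    minor = mxx + myy - common
    # Perimeter estimate: pixels whose 4-neighbourhood is not fully set.
    interior = (mask[1:-1, 1:-1] & mask[:-2, 1:-1] & mask[2:, 1:-1]
                & mask[1:-1, :-2] & mask[1:-1, 2:])
    perimeter = area - interior.sum()
    compactness = perimeter ** 2 / (4 * np.pi * area)
    return compactness, np.sqrt(major / max(minor, 1e-12))

def knn_classify(feats, train_feats, train_labels, k=3):
    """Plain k-nearest-neighbour majority vote, one of the classifier
    families the paper compares."""
    d = np.linalg.norm(train_feats - feats, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

A round cell gives an axis ratio near 1, while a budding (elongated) shape gives a noticeably larger ratio, which is what makes such features separable by simple classifiers.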
Affiliation(s)
- Bo Yang Yu
- University of Waterloo, Department of Mechanical and Mechatronics Engineering, Waterloo, Ontario, N2L 3G1, Canada
24
Shin D, Daneshpanah M, Anand A, Javidi B. Optofluidic system for three-dimensional sensing and identification of micro-organisms with digital holographic microscopy. OPTICS LETTERS 2010; 35:4066-4068. [PMID: 21124614] [DOI: 10.1364/ol.35.004066]
Abstract
Optofluidic devices offer flexibility for a variety of tasks involving biological specimens. We propose a system for three-dimensional (3D) sensing and identification of biological micro-organisms. This system consists of a microfluidic device along with a digital holographic microscope and relevant statistical recognition algorithms. The microfluidic channel is used to house the micro-organisms, while the holographic microscope and a CCD camera record their digital holograms. The holograms can be computationally reconstructed in 3D using a variety of algorithms, such as the Fresnel transform. Statistical recognition algorithms are used to analyze and identify the micro-organisms from the reconstructed wavefront. Experimental results are presented. Because wavefronts are reconstructed computationally in holographic imaging, this technique offers unique advantages that allow one to image micro-organisms within a deep channel while removing the inherent microfluidic-induced aberration through interferometry.
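The Fresnel-transform reconstruction mentioned in the abstract amounts to multiplying the recorded hologram by a quadratic phase (chirp) and taking a single FFT, which numerically refocuses the field at a chosen distance z. The sketch below illustrates that single-FFT form only; constant pre-factors and the output-plane phase are dropped because only the intensity is kept, and the parameter names are illustrative rather than from the paper.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, z):
    """Single-FFT Fresnel transform: numerically propagate a recorded
    hologram a distance z to refocus on the specimen plane.
    dx is the sensor pixel pitch; all lengths in metres."""
    n = hologram.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    # Quadratic phase (chirp) applied before the FFT.
    chirp = np.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
    return np.abs(field)  # reconstructed amplitude (phase discarded)
```

Sweeping z produces the stack of refocused planes from which the statistical recognition step operates.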
Affiliation(s)
- Donghak Shin
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-2157, USA
25
Shin D, Cho M, Javidi B. Three-dimensional optical microscopy using axially distributed image sensing. OPTICS LETTERS 2010; 35:3646-3648. [PMID: 21042378] [DOI: 10.1364/ol.35.003646]
Abstract
We propose three-dimensional (3D) optical microscopy using axially distributed image sensing. In the proposed method, the micro-objects are optically magnified and their axially distributed images are recorded by moving the image sensor along a common optical axis. The 3D volumetric images are generated from the recorded axial image set using a computational reconstruction algorithm based on ray backprojection. Preliminary experimental results are presented. To the best of our knowledge, this is the first report on 3D optical microscopy using axially distributed sensing.
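The ray-backprojection reconstruction the abstract refers to can be pictured as follows: each image taken with the sensor at axial position z_i sees a given depth plane at a magnification that depends on z_i, so rescaling every image accordingly and averaging brings that plane into focus while other depths blur out. The sketch below assumes a simple pinhole model with relative magnification z_focus / z_i and nearest-neighbour resampling; it is an illustration of the idea, not the paper's algorithm.

```python
import numpy as np

def reconstruct_plane(images, z_positions, z_focus):
    """Axially distributed sensing, backprojection sketch: rescale each
    axial image about the optical axis by z_focus / z_i (pinhole-model
    assumption) and average, focusing the plane at z_focus."""
    n = images[0].shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    out = np.zeros((n, n))
    for img, z in zip(images, z_positions):
        s = z_focus / z  # relative magnification (illustrative model)
        # Map each output pixel back into this image about its centre.
        ys = np.clip(np.round((yy - c) * s + c).astype(int), 0, n - 1)
        xs = np.clip(np.round((xx - c) * s + c).astype(int), 0, n - 1)
        out += img[ys, xs]
    return out / len(images)
```

Note that an on-axis point stays fixed under this rescaling, which is why axially distributed sensing has little depth resolution on the optical axis itself and gains it off-axis.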
Affiliation(s)
- Donghak Shin
- Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-2157, USA
26
Park JH, Hong K, Lee B. Recent progress in three-dimensional information processing based on integral imaging. APPLIED OPTICS 2009; 48:H77-94. [PMID: 19956305] [DOI: 10.1364/ao.48.000h77]
Abstract
Recently developed integral imaging techniques are reviewed. Integral imaging captures and reproduces the light rays from the object space, enabling the acquisition and the display of the three-dimensional information of the object in an efficient way. Continuous effort on integral imaging has been improving the performance of the capture and display process in various aspects, including distortion, resolution, viewing angle, and depth range. Digital data processing of the captured light rays can now visualize the three-dimensional structure of the object with a high degree of freedom and enhanced quality. This recent progress is of high interest for both industrial applications and academic research.
Affiliation(s)
- Jae-Hyeung Park
- School of Electrical & Computer Engineering, Chungbuk National University, 410 SungBong-Ro, Heungduk-Gu, Cheongju-Si, Chungbuk, 361-763, Korea
27
Moon I, Javidi B. Three-dimensional visualization of objects in scattering medium by use of computational integral imaging. OPTICS EXPRESS 2008; 16:13080-13089. [PMID: 18711547] [DOI: 10.1364/oe.16.013080]
Abstract
In this paper, we propose a method to visualize objects in a scattering medium three-dimensionally using integral imaging. Our approach exploits the interference between ballistic photons that pass through the scattering medium and photons scattered by the medium. For three-dimensional (3D) sensing of the scattered objects, a synthetic aperture integral imaging system under coherent illumination records the scattered elemental images of the objects. Then, a computational geometrical ray propagation algorithm is applied to the scattered elemental images in order to eliminate the interference patterns between the scattered and object beams. The original 3D information of the scattered objects is recovered by multiple imaging channels, each with a unique perspective of the object. We present both simulation and experimental results with virtual and real objects to demonstrate the proposed concepts.
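The computational geometrical ray propagation step in synthetic aperture integral imaging is commonly realized as a shift-and-sum reconstruction: each elemental image is shifted by the disparity its camera position produces for points at the target depth, and the shifted images are averaged so those points add coherently while scattered light averages out. The sketch below is a minimal horizontal-only illustration under a pinhole model; `pitch`, `f`, and `dx` are illustrative parameter names, not the paper's notation.

```python
import numpy as np

def shift_and_sum(elemental, pitch, z, f, dx):
    """Shift-and-sum volumetric reconstruction of a 1-D camera array.
    pitch: camera spacing, z: reconstruction depth, f: focal length,
    dx: pixel size. Points at depth z align across shifted images."""
    out = np.zeros_like(elemental[0], dtype=float)
    for i, img in enumerate(elemental):
        # Disparity of depth-z points for camera i (pinhole model).
        shift = int(round(i * pitch * f / (z * dx)))
        out += np.roll(img, -shift, axis=1)
    return out / len(elemental)
```

Reconstructing at the wrong depth leaves the copies misaligned, which is exactly the mechanism that suppresses the interference patterns and out-of-plane scatter in the averaged image.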
Affiliation(s)
- Inkyu Moon
- Dept of Electrical and Computer Engineering, U-2157, University of Connecticut, Storrs, CT 06269-2157, USA.