1.
Wani P, Usmani K, Krishnan G, Javidi B. 3D object tracking using integral imaging with mutual information and Bayesian optimization. Optics Express 2024;32:7495-7512. PMID: 38439428. DOI: 10.1364/oe.517312.
Abstract
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame. Then, one bounding box can be selected for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of 3D integral imaging over traditional 2D imaging for object detection and visualization in adverse environmental conditions. Integral imaging's depth sectioning ability has also proven beneficial for object detection and visualization, as it captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for 3D reconstruction of the scene for object classification and exploits the mutual information between the object's bounding box in the 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth with as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
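The mutual-information score used to compare a candidate depth slice against the central perspective can be sketched with a joint-histogram estimator (a generic NumPy MI implementation, not the authors' exact code; the bin count is an illustrative assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two grayscale patches,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a depth-search loop, the candidate slice maximizing this score against the central perspective would be taken as the object's depth.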
2.
Wani P, Javidi B. 3D integral imaging depth estimation of partially occluded objects using mutual information and Bayesian optimization. Optics Express 2023;31:22863-22884. PMID: 37475387. DOI: 10.1364/oe.492160.
Abstract
Integral imaging (InIm) is useful for passive ranging and 3D visualization of partially occluded objects. We consider 3D object localization within a scene and under occlusion. 2D localization can be achieved using machine learning and non-machine learning-based techniques, which aim to provide a 2D bounding box around each object of interest. A recent study uses InIm for the 3D reconstruction of the scene with occlusions and utilizes mutual information (MI) between the bounding box in this 3D reconstructed scene and the corresponding bounding box in the central elemental image to achieve passive depth estimation of partially occluded objects. Here, we improve upon this InIm method by using Bayesian optimization to minimize the number of required 3D scene reconstructions. We evaluate the performance of the proposed approach by analyzing different kernel functions, acquisition functions, and parameter estimation algorithms for Bayesian optimization-based inference for simultaneous depth estimation of objects and occlusion. In our optical experiments, mutual-information-based depth estimation with Bayesian optimization achieves depth estimation with a handful of 3D reconstructions. To the best of our knowledge, this is the first report to use Bayesian optimization for mutual information-based InIm depth estimation.
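A minimal Gaussian-process Bayesian optimization loop over candidate depths might look like the following sketch (NumPy only; the RBF kernel, UCB acquisition, and all parameter values are illustrative assumptions, not the specific kernels and acquisition functions compared in the paper):

```python
import numpy as np

def rbf(a, b, length=60.0):
    # squared-exponential kernel on scalar depth values
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-4):
    # GP regression posterior mean/std at query depths xq,
    # assuming ys are already normalized to zero mean, unit variance
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(xs, xq)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ ys
    var = 1.0 - np.einsum('ij,jk,ki->i', Ks.T, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def bayes_opt_depth(objective, z_grid, n_init=2, n_iter=8, kappa=2.0, seed=0):
    """Search for the depth maximizing `objective` (e.g. an MI score)
    with few evaluations, i.e. few 3D reconstructions."""
    rng = np.random.RandomState(seed)
    xs = list(rng.choice(z_grid, n_init, replace=False))
    ys = [objective(z) for z in xs]
    for _ in range(n_iter):
        yn = (np.array(ys) - np.mean(ys)) / (np.std(ys) + 1e-12)
        mu, sd = gp_posterior(np.array(xs), yn, z_grid)
        z_next = z_grid[np.argmax(mu + kappa * sd)]  # UCB acquisition
        xs.append(z_next)
        ys.append(objective(z_next))
    return xs[int(np.argmax(ys))]
```

Each call to `objective` stands in for one 3D reconstruction plus an MI evaluation, which is why minimizing the number of calls matters.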
3.
Wani P, Krishnan G, O'Connor T, Javidi B. Information theoretic performance evaluation of 3D integral imaging. Optics Express 2022;30:43157-43171. PMID: 36523020. DOI: 10.1364/oe.475086.
Abstract
Integral imaging (InIm) has proved useful for three-dimensional (3D) object sensing, visualization, and classification of partially occluded objects. This paper presents an information-theoretic approach for simulating and evaluating the integral imaging capture and reconstruction process. We utilize mutual information (MI) as a metric for evaluating the fidelity of the reconstructed 3D scene, and we also consider passive depth estimation using mutual information. We apply this formulation to optimal pitch estimation for integral-imaging capture and reconstruction to maximize the longitudinal resolution. The effect of partial occlusion on integral imaging 3D reconstruction using mutual information is evaluated. Computer simulation tests and experiments are presented.
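The reconstruction process that such an evaluation simulates can be sketched as a depth-dependent shift-and-average of elemental images (a simplified pinhole model in which the per-lens pixel shift scales as pitch × gap / depth; the geometry parameters are illustrative, not the paper's setup):

```python
import numpy as np

def reconstruct_slice(elemental, pitch_px, gap, z):
    """Back-project a (K, L, H, W) stack of elemental images to depth z
    by shifting each image by its lens offset times gap/z, then averaging.
    Objects at depth z align and sharpen; objects elsewhere blur out."""
    K, L, H, W = elemental.shape
    out = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            dy = int(round(k * pitch_px * gap / z))
            dx = int(round(l * pitch_px * gap / z))
            out += np.roll(np.roll(elemental[k, l], dy, axis=0), dx, axis=1)
    return out / (K * L)
```

Sweeping z and scoring each slice (e.g. with MI against a reference view) is the basic pattern behind MI-based passive depth estimation.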
4.
Yi X, Xu W, Li A. The Clinical Application of Remimazolam Benzenesulfonate Combined with Esketamine Intravenous Anesthesia in Endoscopic Retrograde Cholangiopancreatography. Biomed Research International 2022;2022:5628687. PMID: 35813222. PMCID: PMC9262575. DOI: 10.1155/2022/5628687.
Abstract
In this project, algorithm-based image processing methods for 3D endoscopic images in endoscopic retrograde cholangiopancreatography (ERCP) were analyzed. To enhance local image information, contrast-limited adaptive histogram equalization was introduced. The peak signal-to-noise ratio (PSNR), discrete information entropy (DE), and average mean brightness error (AMBE) of the 3D endoscopic images before and after optimization were compared. A total of 92 patients receiving ERCP at Yuhuangding Hospital between December 2019 and December 2021 were selected and divided into a control group (fentanyl + propofol) and an observation group (remimazolam benzenesulfonate + esketamine). Mean arterial pressure (MAP), heart rate (HR), oxygen saturation (SpO2), and respiratory rate (RR) were recorded at entry into the operating room (T0), 2 minutes after the beginning of medication (T1), after endoscopy (T2), at endoscopy withdrawal (T3), and at postoperative awakening (T4). MAP at T1, T2, T3, and T4 differed significantly from that at T0 in both the observation and control groups (P < 0.05). HR and RR at T4 in the observation group were significantly higher than in the control group (P < 0.05). SpO2 at T3 and T4 differed significantly from that at T0 (P < 0.05). Awakening time and VAS scores in the observation group were significantly lower than in the control group (P < 0.05), as was the incidence of bradycardia, nausea, vomiting, and chills (P < 0.05). The results indicate that an effective endoscopic image processing method was established based on an image enhancement algorithm, and that the combination of remimazolam benzenesulfonate and esketamine showed high safety and efficacy in ERCP.
Affiliation(s)
- Xiuna Yi: Department of Anesthesiology, Yantaishan Hospital, Yantai, Shandong 264003, China
- Weiwei Xu: Department of Anesthesiology, Yuhuangding Hospital, Yantai, Shandong 264000, China
- Aizhi Li: Department of Anesthesiology, Yuhuangding Hospital, Yantai, Shandong 264000, China
5.
Kwan E, Hua H. Prism-based tri-aperture laparoscopic objective for multi-view acquisition. Optics Express 2022;30:2836-2851. PMID: 35209416. PMCID: PMC8970697. DOI: 10.1364/oe.448164.
Abstract
This paper presents the design and prototype of a novel tri-aperture monocular laparoscopic objective that can acquire both stereoscopic views for depth information and a wide field of view (FOV) for situational awareness. The stereoscopic views are simultaneously captured via a shared objective with two displaced apertures and a custom prism. Overlapping crosstalk between the stereoscopic views is diminished by incorporating a strategically placed vignetting aperture. Meanwhile, the wide FOV is captured via a central third aperture of the same objective and provides a 2D view of the surgical field 2x as large as the area imaged by the stereoscopic views. We also demonstrate how the wide FOV provides a reference data set for stereo calibration, which enables absolute depth mapping in our experimental prototype.
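The depth recovered from the two stereoscopic apertures follows the usual triangulation relation z = f·b/d; a minimal sketch of that relation (idealized pinhole model with the aperture displacement as baseline, not the prototype's calibrated pipeline):

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Pinhole stereo triangulation: depth = focal length * baseline / disparity.
    f_px: focal length in pixels; baseline_mm: separation of the two apertures;
    disparity_px: pixel shift of a feature between the two views."""
    return f_px * baseline_mm / disparity_px
```

In practice, the reference data set from the wide-FOV channel would supply the calibration needed to map raw disparities to absolute depth.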
Affiliation(s)
- Elliott Kwan: 3D Visualization and Imaging Systems Laboratory, James C. Wyant College of Optical Sciences, University of Arizona, 1630 E University Blvd., Tucson, AZ 85721, USA
- Hong Hua: 3D Visualization and Imaging Systems Laboratory, James C. Wyant College of Optical Sciences, University of Arizona, 1630 E University Blvd., Tucson, AZ 85721, USA
6.
Wani P, Usmani K, Krishnan G, O'Connor T, Javidi B. Lowlight object recognition by deep learning with passive three-dimensional integral imaging in visible and long wave infrared wavelengths. Optics Express 2022;30:1205-1218. PMID: 35209285. DOI: 10.1364/oe.443657.
Abstract
Traditionally, long wave infrared (LWIR) imaging has been used in photon-starved conditions for object detection and classification. We investigate passive three-dimensional (3D) integral imaging (InIm) in the visible spectrum for object classification using deep neural networks in photon-starved conditions and under partial occlusion. We compare the proposed passive 3D InIm operating in the visible domain with long wave infrared sensing in both the 2D and 3D imaging cases for object classification in degraded conditions. This comparison is based on average precision, recall, and miss rates. Our experimental results demonstrate that cold and hot object classification using 3D InIm in the visible spectrum may outperform both 2D and 3D imaging implemented in the long wave infrared spectrum for photon-starved and partially occluded scenes. While these experiments are not comprehensive, they demonstrate the potential of 3D InIm in the visible spectrum for low light applications. Imaging in the visible spectrum provides higher spatial resolution, more compact optics, and lower cost hardware compared with long wave infrared imaging. In addition, the higher spatial resolution obtained in the visible spectrum can improve object classification accuracy. Our experimental results provide a proof of concept for implementing visible spectrum imaging in place of traditional LWIR spectrum imaging for certain object recognition tasks.
7.
Usmani K, O'Connor T, Javidi B. Three-dimensional polarimetric image restoration in low light with deep residual learning and integral imaging. Optics Express 2021;29:29505-29517. PMID: 34615059. DOI: 10.1364/oe.435900.
Abstract
Polarimetric imaging can become challenging in degraded environments such as low-light illumination or partial occlusion. In this paper, we propose the denoising convolutional neural network (DnCNN) model with three-dimensional (3D) integral imaging to enhance the reconstructed image quality of polarimetric imaging in such degraded environments. The DnCNN is trained on a physical model of image capture in degraded environments, with simulated low light polarimetric images used in the training process, and is then experimentally tested on real polarimetric images captured in real low light environments and under partial occlusion. The performance of the DnCNN model is compared with that of total variation denoising. Experimental results demonstrate that the DnCNN outperforms total variation denoising for polarimetric integral imaging in terms of signal-to-noise ratio and structural similarity index measure in low light environments, as well as in low light environments under partial occlusion. To the best of our knowledge, this is the first report of polarimetric 3D object visualization and restoration in low light environments and occlusions using DnCNN with integral imaging. The proposed approach is also useful for 3D image restoration in conventional (non-polarimetric) integral imaging in degraded environments.
8.
Usmani K, Krishnan G, O'Connor T, Javidi B. Deep learning polarimetric three-dimensional integral imaging object recognition in adverse environmental conditions. Optics Express 2021;29:12215-12228. PMID: 33984986. DOI: 10.1364/oe.421287.
Abstract
Polarimetric imaging is useful for object recognition and material classification because of its ability to discriminate objects based on the polarimetric signatures of materials. Polarimetric imaging of an object captures important physical properties such as shape and surface properties and can be effective even in low light environments. Integral imaging is a passive three-dimensional (3D) imaging approach that takes advantage of multiple 2D imaging perspectives to perform 3D reconstruction. In this paper, we propose unified polarimetric detection and classification of objects in degraded environments such as low light and in the presence of occlusion. This task is accomplished using a deep learning model for 3D polarimetric integral imaging data captured in the visible spectral domain. The neural network system is designed and trained for 3D object detection and classification using polarimetric integral images. We compare the detection and classification results between polarimetric and non-polarimetric 2D and 3D imaging. The system performance in degraded environmental conditions is evaluated using average miss rate, average precision, and F1 score. The results indicate that, for the experiments we have performed, polarimetric 3D integral imaging outperforms 2D polarimetric imaging as well as non-polarimetric 2D and 3D imaging for object recognition in adverse conditions such as low light and occlusion. To the best of our knowledge, this is the first report of polarimetric 3D object recognition in low light environments and occlusions using deep learning-based integral imaging. The proposed approach is attractive because low light polarimetric object recognition in the visible spectral band benefits from much higher spatial resolution, more compact optics, and lower system cost compared with long wave infrared imaging, the conventional imaging approach for low light environments.
9.
Lee JH, Chang S, Kim MS, Kim YJ, Kim HM, Song YM. High-Identical Numerical Aperture, Multifocal Microlens Array through Single-Step Multi-Sized Hole Patterning Photolithography. Micromachines 2020;11:1068. PMID: 33266141. PMCID: PMC7761445. DOI: 10.3390/mi11121068.
Abstract
Imaging applications based on microlens arrays (MLAs) have great potential for depth sensors, wide field-of-view cameras, and reconstructed holograms. However, the narrow depth-of-field remains a challenge for accurate, reliable depth estimation. Multifocal microlens arrays (Mf-MLAs) are perceived as a major breakthrough, but existing fabrication methods are hindered by high cost, low throughput, and dissimilar numerical apertures (NA) of the individual lenses caused by the multiple photolithography steps. This paper reports a fabrication method for high-NA Mf-MLAs with extended depth-of-field using single-step photolithography assisted by chemical wet etching. The various lens parameters of the Mf-MLAs are manipulated by the multi-sized hole photomask and the wet etch time. Theoretical and experimental results show that the Mf-MLAs contain three types of lenses with different focal lengths while maintaining a uniform, high NA irrespective of lens type. Additionally, we demonstrate multi-focal-plane image acquisition via Mf-MLAs integrated into a microscope.
10.
Usmani K, O'Connor T, Shen X, Marasco P, Carnicer A, Dey D, Javidi B. Three-dimensional polarimetric integral imaging in photon-starved conditions: performance comparison between visible and long wave infrared imaging. Optics Express 2020;28:19281-19294. PMID: 32672208. DOI: 10.1364/oe.395301.
Abstract
Three-dimensional (3D) polarimetric integral imaging (InIm) for extracting the 3D polarimetric information of objects in photon-starved conditions is investigated using a low noise visible range camera and a long wave infrared (LWIR) range camera, and the performance of the two sensors is compared. Stokes polarization parameters and degree of polarization (DoP) are calculated to extract the polarimetric information of the 3D scene, while integral imaging reconstruction provides depth information and improves the performance of low-light imaging tasks. An LWIR wire grid polarizer and a linear polarizer film are used as polarimetric objects for the LWIR range and visible range cameras, respectively. To account for the limited number of photons per pixel captured by the visible range camera in low light conditions, we apply a mathematical restoration model to each elemental image to enhance the signal. We show that the low noise visible range camera may outperform the LWIR camera in detection of polarimetric objects under low illumination. Our experiments indicate that, for 3D polarimetric measurements under photon-starved conditions, visible range sensing may produce a signal-to-noise ratio (SNR) that is not lower than that of LWIR range sensing. We derive the probability density function (PDF) of the 2D and 3D degree of polarization images and show that the theoretical model agrees with the experimentally obtained results. To the best of our knowledge, this is the first report comparing polarimetric imaging performance between visible range and infrared (IR) range sensors under photon-starved conditions, together with the relevant statistical models of 3D polarimetric integral imaging.
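The Stokes-parameter computation reduces to simple arithmetic on intensity images taken behind a linear polarizer at several orientations; a sketch for the linear components only, assuming a 0/45/90-degree measurement scheme (the actual polarizer angles used in the experiments are not specified here):

```python
import numpy as np

def stokes_dolp(i0, i45, i90, eps=1e-9):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensity images captured behind a linear polarizer at
    0, 45, and 90 degrees."""
    s0 = i0 + i90              # total intensity
    s1 = i0 - i90              # horizontal vs vertical preference
    s2 = 2.0 * i45 - s0        # +45 vs -45 preference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    return s0, s1, s2, dolp
```

The DoLP map is what separates polarimetric objects (e.g. a polarizer film) from an unpolarized background in the reconstructed 3D scene.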
11.
Cai Z, Pedrini G, Osten W, Liu X, Peng X. Single-shot structured-light-field three-dimensional imaging. Optics Letters 2020;45:3256-3259. PMID: 32538956. DOI: 10.1364/ol.393911.
Abstract
This Letter reports an approach to single-shot three-dimensional (3D) imaging that combines structured illumination with light-field imaging. The sinusoidal distribution of the radiance in the structured light field can be processed and transformed to compute the angular variance of the local radiance difference. The angular variance across the depth range exhibits a single-peak distribution that can be used to obtain the unambiguous depth. The phase computation that generally requires the acquisition of multi-frame phase-shifting images is no longer mandatory, thus enabling single-shot structured-light-field 3D imaging. The proposed approach was experimentally demonstrated on a dynamic scene.
12.
Kwan E, Qin Y, Hua H. High resolution, programmable aperture light field laparoscope for quantitative depth mapping. OSA Continuum 2020;3:194-203. PMID: 34553128. PMCID: PMC8455120. DOI: 10.1364/osac.382558.
Abstract
Recent applications have shown that light field imaging can be useful for developing uniaxial three-dimensional (3D) endoscopes. The immediate challenges in implementation are a tradeoff in lateral resolution and acquiring enough depth information in the physically limited environment of minimally invasive surgery. Here we propose using programmable aperture light field imaging in laparoscopy to capture 3D information without sacrificing the camera sensor's native, high spatial resolution. This hybrid design utilizes a programmable aperture to preserve the conventional laparoscope's functionality and, upon demand, to compute a depth map for surgical guidance. A working prototype is demonstrated.
13.
Cui Q, Park J, Smith RT, Gao L. Snapshot hyperspectral light field imaging using image mapping spectrometry. Optics Letters 2020;45:772-775. PMID: 32004308. PMCID: PMC7472785. DOI: 10.1364/ol.382088.
Abstract
In this Letter, we present a snapshot hyperspectral light field imaging system using a single camera. By integrating an unfocused light field camera with a snapshot hyperspectral imager, the image mapping spectrometer, we captured a five-dimensional (5D) (x, y, u, v, λ) datacube (x, y: spatial coordinates; u, v: emittance angles; λ: wavelength) in a single camera exposure. The corresponding volumetric image (x, y, z) at each wavelength is then computed through a scale-depth space transform. We demonstrated the snapshot advantage of our system by imaging spectral-volumetric scenes in real time.
Affiliation(s)
- Qi Cui: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N Mathews Avenue, Urbana, Illinois 61801, USA; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St., Urbana, Illinois 61801, USA
- Jongchan Park: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N Mathews Avenue, Urbana, Illinois 61801, USA; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St., Urbana, Illinois 61801, USA
- R. Theodore Smith: Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, New York 10003, USA
- Liang Gao (corresponding author): Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N Mathews Avenue, Urbana, Illinois 61801, USA; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St., Urbana, Illinois 61801, USA
14.
Yao M, Cheng J, Huang Z, Zhang Z, Li S, Peng J, Zhong J. Reflection light-field microscope with a digitally tunable aperture by single-pixel imaging. Optics Express 2019;27:33040-33050. PMID: 31878378. DOI: 10.1364/oe.27.033040.
Abstract
The reflected light microscope is a tool for imaging opaque specimens. However, most existing reflected light microscopes can only obtain a two-dimensional image of the specimen. Here we demonstrate that, with the help of single-pixel imaging, a reflection light-field microscope for volumetric imaging can be developed. Importantly, using single-pixel imaging, we can digitally adjust the size of the aperture diaphragm of the proposed reflection light-field microscope to change the depth of field and to achieve three-dimensional differential phase-contrast imaging in an arbitrary direction, without any hardware change. Our approach may benefit the imaging of reflective specimens with large depth ranges in the semiconductor industry and materials science.
15.
Lin RJ, Su VC, Wang S, Chen MK, Chung TL, Chen YH, Kuo HY, Chen JW, Chen J, Huang YT, Wang JH, Chu CH, Wu PC, Li T, Wang Z, Zhu S, Tsai DP. Achromatic metalens array for full-colour light-field imaging. Nature Nanotechnology 2019;14:227-231. PMID: 30664753. DOI: 10.1038/s41565-018-0347-0.
Abstract
A light-field camera captures both the intensity and the direction of incoming light [1-5]. This enables a user to refocus pictures and afterwards reconstruct information on the depth of field. Research on light-field imaging can be divided into two components: acquisition and rendering. Microlens arrays have been used for acquisition, but obtaining broadband achromatic images with no spherical aberration remains challenging. Here, we describe a metalens array made of gallium nitride (GaN) nanoantennas [6] that can be used to capture light-field information, and demonstrate a full-colour light-field camera devoid of chromatic aberration. The metalens array contains 60 × 60 metalenses with diameters of 21.65 μm. The camera has a diffraction-limited resolution of 1.95 μm under white light illumination. The depth of every object in the scene can be reconstructed slice by slice from a series of rendered images with different depths of focus. Full-colour, achromatic light-field cameras could find applications in a variety of fields such as robotic vision, self-driving vehicles and virtual and augmented reality.
Affiliation(s)
- Ren Jie Lin: Department of Physics, National Taiwan University, Taipei, Taiwan
- Vin-Cent Su: Department of Electrical Engineering, National United University, Miaoli, Taiwan
- Shuming Wang: National Laboratory of Solid State Microstructures, School of Physics, College of Engineering and Applied Sciences, Nanjing University, Nanjing, China; Key Laboratory of Intelligent Optical Sensing and Manipulation, Ministry of Education, Nanjing, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing, China
- Mu Ku Chen: Department of Physics, National Taiwan University, Taipei, Taiwan
- Tsung Lin Chung: Department of Physics, National Taiwan University, Taipei, Taiwan
- Yu Han Chen: Department of Physics, National Taiwan University, Taipei, Taiwan
- Hsin Yu Kuo: Department of Physics, National Taiwan University, Taipei, Taiwan
- Jia-Wern Chen: Department of Physics, National Taiwan University, Taipei, Taiwan
- Ji Chen: National Laboratory of Solid State Microstructures, School of Physics, College of Engineering and Applied Sciences, Nanjing University, Nanjing, China; Key Laboratory of Intelligent Optical Sensing and Manipulation, Ministry of Education, Nanjing, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing, China
- Yi-Teng Huang: Department of Physics, National Taiwan University, Taipei, Taiwan
- Jung-Hsi Wang: Department of Electrical Engineering and Graduate Institute of Electronics Engineering, National Taiwan University, Taipei, Taiwan
- Cheng Hung Chu: Research Center for Applied Sciences, Academia Sinica, Taipei, Taiwan
- Pin Chieh Wu: Research Center for Applied Sciences, Academia Sinica, Taipei, Taiwan
- Tao Li: National Laboratory of Solid State Microstructures, School of Physics, College of Engineering and Applied Sciences, Nanjing University, Nanjing, China; Key Laboratory of Intelligent Optical Sensing and Manipulation, Ministry of Education, Nanjing, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing, China
- Zhenlin Wang: National Laboratory of Solid State Microstructures, School of Physics, College of Engineering and Applied Sciences, Nanjing University, Nanjing, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing, China
- Shining Zhu: National Laboratory of Solid State Microstructures, School of Physics, College of Engineering and Applied Sciences, Nanjing University, Nanjing, China; Key Laboratory of Intelligent Optical Sensing and Manipulation, Ministry of Education, Nanjing, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing, China
- Din Ping Tsai: Department of Physics, National Taiwan University, Taipei, Taiwan; Research Center for Applied Sciences, Academia Sinica, Taipei, Taiwan; College of Engineering, Chang Gung University, Taoyuan, Taiwan
16.
Cai Z, Liu X, Chen Z, Tang Q, Gao BZ, Pedrini G, Osten W, Peng X. Light-field-based absolute phase unwrapping. Optics Letters 2018;43:5717-5720. PMID: 30499976. DOI: 10.1364/ol.43.005717.
Abstract
Ambiguity caused by a wrapped phase is an intrinsic problem in fringe projection-based 3D shape measurement. Among traditional methods for avoiding phase ambiguity, spatial phase unwrapping is sensitive to sensor noise and depth discontinuities, and temporal phase unwrapping requires additional encoding information that increases image sequence acquisition time or reduces fringe contrast. Here, to the best of our knowledge, we report a novel method of absolute phase unwrapping based on light field imaging. In a light field recorded under structured illumination, i.e., a structured light field, a wrapped phase-encoded field can be retrieved and resampled in diverse image planes associated with several possible fringe orders in a measurement volume. Then, by leveraging a phase-consistency constraint in the resampled wrapped phase-encoded field, the correct fringe orders can be determined to unwrap the wrapped phase without any additional encoding information. Experimental results demonstrate that the proposed method is suitable for accurate and robust absolute phase unwrapping.
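The fringe-order selection at the heart of any such method rests on the relation Φ = φ + 2πk; given some rough absolute-phase reference (here a generic coarse estimate standing in for the paper's cross-plane consistency test), the integer order follows by rounding:

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_coarse):
    """Pick the fringe order k in Phi = phi + 2*pi*k that brings the
    wrapped phase closest to a coarse absolute-phase reference.
    Robust as long as the reference error stays below pi."""
    k = np.round((phi_coarse - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k
```

In the light-field setting, the consistency check across resampled image planes plays the role of the coarse reference in this sketch.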
17
Cai Z, Liu X, Tang Q, Peng X, Gao BZ. Light field 3D measurement using unfocused plenoptic cameras. OPTICS LETTERS 2018; 43:3746-3749. [PMID: 30067670 DOI: 10.1364/ol.43.003746] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 07/06/2018] [Indexed: 06/08/2023]
Abstract
This Letter reports a novel method for establishing the metric relationship of depth values between object space and image space for unfocused plenoptic cameras. A three-dimensional (3D) measurement system was introduced to precisely construct benchmarks and matching features, which were used to compute metric depths in object space and the corresponding depth values in image space for metric calibration. After metric calibration, precise measurement of the depth dimension was possible. Furthermore, with the aid of metric spatio-angular parameters determined via light field ray calibration, transverse dimensions were computed from the measured depth, realizing light field 3D measurement for unfocused plenoptic cameras. Finally, we experimentally analyzed the accuracy of the proposed method, obtaining a depth measurement precision of 0.5 mm over a depth range of 300 mm, which illustrates potential applications of unfocused plenoptic cameras in the field of 3D measurement.
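The calibration step described above amounts to fitting a mapping from image-space depth values to object-space metric depths using benchmark pairs measured with a reference 3D system. The sketch below is illustrative only and is not the authors' code: the function name, the low-order polynomial model, and the synthetic linear benchmarks are all assumptions standing in for the paper's actual calibration model.

```python
import numpy as np

def fit_depth_calibration(image_depths, metric_depths, degree=2):
    """Fit a polynomial mapping image-space depth values to metric
    depths from benchmark pairs; returns a callable calibration."""
    coeffs = np.polyfit(image_depths, metric_depths, degree)
    return np.poly1d(coeffs)

# synthetic benchmarks: pretend the true relation is z = 100 + 50*v
v = np.linspace(0.0, 4.0, 9)     # depth values in image space
z = 100.0 + 50.0 * v             # metric depths from a reference system
calib = fit_depth_calibration(v, z, degree=1)
print(float(calib(2.0)))         # -> 200.0
```

Once such a calibration is in hand, transverse dimensions can be derived from the calibrated depth together with the camera's spatio-angular parameters, as the abstract outlines.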
18
Palmer DW, Coppin T, Rana K, Dansereau DG, Suheimat M, Maynard M, Atchison DA, Roberts J, Crawford R, Jaiprakash A. Glare-free retinal imaging using a portable light field fundus camera. BIOMEDICAL OPTICS EXPRESS 2018; 9:3178-3192. [PMID: 29984092 PMCID: PMC6033554 DOI: 10.1364/boe.9.003178] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 05/16/2018] [Accepted: 05/17/2018] [Indexed: 05/28/2023]
Abstract
We present the retinal plenoptoscope, a novel light field retinal imaging device designed to overcome many of the problems that limit the use of portable non-mydriatic fundus cameras, including image quality and lack of stereopsis. The design and prototype construction of the device are detailed, and the ideal relationship between the eye pupil, the system aperture stop, and micro-image separation is investigated. A comparison of the theoretical entrance pupil size, multi-view baseline, and depth resolution indicates that a higher degree of stereopsis is possible than with stereo fundus cameras. We also show that the effects of corneal backscatter on image quality can be removed through a novel method of glare identification and selective image rendering. This method is then extended to produce glare-free depth maps from densely estimated depth fields, creating representations of retinal topography from a single exposure. These methods are demonstrated on physical models and live human eyes using a prototype device based on a Lytro Illum consumer light field camera. The retinal plenoptoscope offers a viable, robust modality for non-mydriatic color and 3-D retinal imaging.
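The selective-rendering idea can be illustrated with a minimal sketch, under the simplifying assumption that glare pixels are identified by near-saturation intensity (the paper's actual identification method is more involved, and `render_glare_free` and the threshold are hypothetical names and values, not the authors' code). Because a light field records the same retinal point in several sub-aperture views, glared samples can simply be excluded before averaging.

```python
import numpy as np

def render_glare_free(samples, saturation=0.95):
    """Average per-pixel samples across views, ignoring samples
    flagged as glare (intensity at or above the saturation level)."""
    samples = np.asarray(samples, dtype=float)
    mask = samples < saturation                  # True where a sample is usable
    counts = mask.sum(axis=0)
    # fall back to a plain mean where every view is glared
    safe = np.where(counts > 0, counts, samples.shape[0])
    summed = np.where(counts > 0, (samples * mask).sum(axis=0),
                      samples.sum(axis=0))
    return summed / safe

views = np.array([[0.20, 0.99],    # second pixel glared in this view
                  [0.24, 0.30],
                  [0.22, 0.34]])
print(render_glare_free(views))    # approximately [0.22, 0.32]
```

The same masking extends naturally to depth estimation: depth samples drawn from glared views are discarded before the dense depth field is aggregated.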
Affiliation(s)
- Douglas W. Palmer
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Thomas Coppin
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Krishan Rana
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Donald G. Dansereau
- Marwan Suheimat
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Institute of Health and Biomedical Innovation, Brisbane, QLD 4059, Australia
- Michelle Maynard
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Institute of Health and Biomedical Innovation, Brisbane, QLD 4059, Australia
- David A. Atchison
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Institute of Health and Biomedical Innovation, Brisbane, QLD 4059, Australia
- Jonathan Roberts
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Ross Crawford
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
- Anjali Jaiprakash
- Queensland University of Technology, Brisbane, QLD 4000, Australia
- Medical and Healthcare Robotics, Australian Centre for Robotic Vision, Brisbane, QLD 4000, Australia
19
Kim J, Moon S, Jeong Y, Jang C, Kim Y, Lee B. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display. JOURNAL OF BIOMEDICAL OPTICS 2018; 23:1-11. [PMID: 29931838 DOI: 10.1117/1.jbo.23.6.066502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Accepted: 05/30/2018] [Indexed: 06/08/2023]
Abstract
Here, we present dual-dimensional microscopy, which captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane, up to the diffraction limit, and compensates for image degradation at that plane. The whole process, from capture to display, runs in real time with a parallel computation algorithm, which enables observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans.
Affiliation(s)
- Jonghyun Kim
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Seokil Moon
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Youngmo Jeong
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Changwon Jang
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Youngmin Kim
- Korea Electronics Technology Institute, VR/AR Research Center, Seoul, Republic of Korea
- Byoungho Lee
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
20
Scrofani G, Sola-Pikabea J, Llavador A, Sanchez-Ortiga E, Barreiro JC, Saavedra G, Garcia-Sucerquia J, Martínez-Corral M. FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples. BIOMEDICAL OPTICS EXPRESS 2018; 9:335-346. [PMID: 29359107 PMCID: PMC5772586 DOI: 10.1364/boe.9.000335] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 12/09/2017] [Accepted: 12/10/2017] [Indexed: 05/12/2023]
Abstract
In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the host microscope's objective, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D, micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhancement in depth of field and [Formula: see text]-time better spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and by experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens.
Affiliation(s)
- G. Scrofani
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. Sola-Pikabea
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- A. Llavador
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- E. Sanchez-Ortiga
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. C. Barreiro
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- G. Saavedra
- Department of Optics, University of Valencia, E-46100 Burjassot, Spain
- J. Garcia-Sucerquia
- Universidad Nacional de Colombia, Sede Medellin, School of Physics, A.A. 3840 Medellín 050034, Colombia
- M. Martínez-Corral