1. Kim K. Single-Shot Light-Field Microscopy: An Emerging Tool for 3D Biomedical Imaging. BioChip Journal 2022. DOI: 10.1007/s13206-022-00077-w
Abstract
3D microscopy is a useful tool for visualizing the detailed structures and mechanisms of biomedical specimens. In particular, biophysical phenomena such as neural activity require fast 3D volumetric imaging because fluorescence signals degrade quickly. The light-field microscope (LFM) has recently attracted attention as a high-speed volumetric imaging technique that records 3D information in a single snapshot. This review highlights recent progress in LFM techniques for 3D biomedical applications. In detail, image reconstruction algorithms for various LFM configurations are explained, and several biomedical applications, such as neural activity localization, live-cell imaging, locomotion analysis, and single-molecule visualization, are introduced. We also discuss deep learning-based LFMs that enhance image resolution and reduce reconstruction artifacts.
2. 3D Copyright Protection Based on Binarized Computational Ghost Imaging Encryption and Cellular Automata Transform. Symmetry (Basel) 2022. DOI: 10.3390/sym14030595
Abstract
In this paper, a watermark embedding scheme based on ghost imaging encryption and cellular automata transformation is proposed. In this scheme, the watermark is formed into a speckle key through different light intensities, and the cellular automata transformation algorithm embeds it into the 3D image. Compared with traditional watermark encryption methods, this scheme combines ghost imaging with the cellular automata transformation algorithm, which doubly safeguards the watermark and increases its confidentiality. The binarized computational ghost imaging discussed in this paper saves storage space for the ciphertext and makes its transmission more convenient and faster. Experiments also verify that the watermarked image has high imperceptibility and strong robustness against attacks, and that the extracted watermark retains good integrity.
3. Super-Resolution Enhancement Method Based on Generative Adversarial Network for Integral Imaging Microscopy. Sensors 2021; 21:2164. PMID: 33808866. PMCID: PMC8003741. DOI: 10.3390/s21062164
Abstract
The integral imaging microscopy system provides three-dimensional visualization of a microscopic object. However, it suffers from low resolution due to the fundamental F-number (aperture stop) limitation imposed by the microlens array (MLA) and a poor illumination environment. In this paper, a generative adversarial network (GAN)-based super-resolution algorithm is proposed to enhance the resolution, where the directional view image is fed directly as input. In the GAN, the generator regresses the high-resolution output from the low-resolution input image, whereas the discriminator distinguishes between the original and generated images. In the generator, consecutive residual blocks with a content loss are used to retrieve the photo-realistic original image. The model can restore edges and enhance the resolution by ×2, ×4, and even ×8 without seriously degrading image quality. It is tested with a variety of low-resolution microscopic sample images and successfully generates high-resolution directional view images with better illumination. Quantitative analysis shows that the proposed model outperforms existing algorithms on microscopic images.
4. Chen M, He W, Wei D, Hu C, Shi J, Zhang X, Wang H, Xie C. Depth-of-Field-Extended Plenoptic Camera Based on Tunable Multi-Focus Liquid-Crystal Microlens Array. Sensors 2020; 20:4142. PMID: 32722494. PMCID: PMC7435381. DOI: 10.3390/s20154142
Abstract
Plenoptic cameras have received wide research interest because they can record the 4D plenoptic function, or radiance, including radiation power and ray direction. One important application is digital refocusing, which yields 2D images focused at different depths. Refocusing over a wide range requires a large depth of field (DOF), but there are fundamental optical limits to this. In this paper, we propose a plenoptic camera with an extended DOF that integrates a main lens, a tunable multi-focus liquid-crystal microlens array (TMF-LCMLA), and a complementary metal oxide semiconductor (CMOS) sensor. The TMF-LCMLA was fabricated by traditional photolithography and standard microelectronic techniques, and its optical characteristics, including interference patterns, focal lengths, and point spread functions (PSFs), were experimentally analyzed. Experiments demonstrate that the proposed camera has a wider digital refocusing range than a plenoptic camera based on a conventional liquid-crystal microlens array (LCMLA), which has only one focal length at a given voltage; this is equivalent to an extension of the DOF. In addition, it offers a 2D/3D switchable function not available in conventional plenoptic cameras.
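Digital refocusing of the kind described in this entry is commonly implemented as shift-and-sum over sub-aperture views of the recorded 4D light field. A minimal sketch follows; this is not the authors' code, and the array layout, the integer-pixel shifts, and the single refocus parameter `alpha` are simplifying assumptions:

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-sum synthetic refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) -- angular indices (U, V)
    by spatial pixels (H, W), i.e. a grid of sub-aperture views.
    alpha: refocus parameter; each view is shifted in proportion to its
    angular offset from the central view before averaging, which brings
    a chosen depth plane into focus.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # integer-pixel approximation of the view-dependent shift
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` produces the stack of 2D images focused at different depths that the abstract refers to; a wider usable range of `alpha` corresponds to a larger effective DOF.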
Affiliation(s)
- Mingce Chen
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Wenda He
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Dong Wei
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Chai Hu
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Innovation Institute, Huazhong University of Science & Technology, Wuhan 430074, China
- Jiashuo Shi
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Xinyu Zhang
- National Key Laboratory of Science & Technology on Multispectral Information Processing, Huazhong University of Science & Technology, Wuhan 430074, China; (M.C.); (W.H.); (D.W.); (C.H.); (J.S.)
- School of Artificial Intelligence and Automation, Huazhong University of Science & Technology, Wuhan 430074, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China; (H.W.); (C.X.)
- Correspondence:
- Haiwei Wang
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China; (H.W.); (C.X.)
- Changsheng Xie
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, China; (H.W.); (C.X.)
5. Li X, Wang Y, Li Q, Wang QH, Li J, Kim ST, Zhou X. Optical 3D object security and reconstruction using pixel-evaluated integral imaging algorithm. Optics Express 2019; 27:20720-20733. PMID: 31510161. DOI: 10.1364/oe.27.020720
Abstract
Within the framework of computational integral imaging, an optical 3D object security and high-quality reconstruction method based on a pixel-evaluated mapping (PEM) algorithm is proposed. In this method, the pixel crosstalk caused by non-effective pixel overlap is effectively reduced by a pixel-evaluated mask, which improves the image quality of the reconstructed 3D objects. Compared with other computational integral imaging reconstruction methods, the proposed PEM algorithm obtains more accurate pixel mapping weight parameters, so the reconstructed 3D objects are of higher quality. In addition, a nonlinear feedback shift register cellular automata algorithm is proposed to increase the security of the method. We have experimentally verified the proposed 3D object encryption and reconstruction algorithm, and the results show that it is superior to other computational reconstruction methods.
6. Madrid-Wolff J, Forero-Shelton M. Protocol for the Design and Assembly of a Light Sheet Light Field Microscope. Methods and Protocols 2019; 2:56. PMID: 31277384. PMCID: PMC6789549. DOI: 10.3390/mps2030056
Abstract
Light field microscopy is a recent development that makes it possible to obtain images of volumes with a single camera exposure, enabling studies of fast processes such as neural activity in zebrafish brains at high temporal resolution, at the expense of spatial resolution. Light sheet microscopy is also a recent method that reduces illumination intensity while increasing the signal-to-noise ratio with respect to confocal microscopes. While faster and gentler to samples than confocals for a similar resolution, light sheet microscopy is still slower than light field microscopy since it must collect volume slices sequentially. Nonetheless, the combination of the two methods, i.e., light field microscopes that have light sheet illumination, can help to improve the signal-to-noise ratio of light field microscopes and potentially improve their resolution. Building these microscopes requires much expertise, and the resources for doing so are limited. Here, we present a protocol to build a light field microscope with light sheet illumination. This protocol is also useful to build a light sheet microscope.
Affiliation(s)
- Jorge Madrid-Wolff
- Biomedical Computer Vision Group, Universidad de los Andes, Bogota 111711, Colombia
7. Li X, Zhao M, Zhou X, Wang QH. Ownership protection of holograms using quick-response encoded plenoptic watermark. Optics Express 2018; 26:30492-30508. PMID: 30469948. DOI: 10.1364/oe.26.030492
Abstract
In practical applications of three-dimensional (3D) holographic display, holograms need to be stored and transmitted effectively over networks, so there is an urgent demand for protecting their ownership against piracy and malicious manipulation. This paper realizes ownership protection for holograms by embedding a watermark into optimized cellular automata (OCA) domains. The approach simultaneously improves imperceptibility, by selecting the "best" rule and OCA domains for watermark embedding, and increases robustness, via the multiple-memory property of the plenoptic image. We present experimental results on the visual quality of watermarked holograms and the robustness of the 3D watermark, which confirm the imperceptibility and robustness of the proposed method.
8. Insect-Mimetic Imaging System Based on a Microlens Array Fabricated by a Patterned-Layer Integrating Soft Lithography Process. Sensors 2018; 18:2011. PMID: 29932163. PMCID: PMC6068472. DOI: 10.3390/s18072011
Abstract
In nature, arthropods have evolved multiaperture vision systems with micro-optical structures that offer advantages over single-aperture vision, such as compact size and a wide-angle view. In this paper, we present a multiaperture imaging system using a microlens array fabricated by a patterned-layer integrating soft lithography (PLISL) process, a molding technique that can transfer three-dimensional structures and a gold screening layer simultaneously. The imaging system consists of a microlens array, a lens-adjusting jig, and a conventional charge-coupled device (CCD) image sensor. The microlens array has a light-screening layer patterned among all the microlenses by the PLISL process to prevent light interference, and a three-dimensionally printed jig aligns the array on the CCD sensor for focused imaging. The manufactured system has thin optics and a large field of view of 100 degrees, and it captures multiple images at once. To show possible applications, multiple depth-plane images were reconstructed from the sub-images captured in a single shot.
9. Kim J, Moon S, Jeong Y, Jang C, Kim Y, Lee B. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display. Journal of Biomedical Optics 2018; 23:1-11. PMID: 29931838. DOI: 10.1117/1.jbo.23.6.066502
Abstract
Here, we present dual-dimensional microscopy, which captures both two-dimensional (2D) and light-field images of an in vivo sample simultaneously, synthesizes an upsampled light-field image, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2D image greatly enhances the lateral resolution at the native object plane, up to the diffraction limit, and compensates for image degradation there. The whole process from capture to display runs in real time with a parallel computation algorithm, which enables observation of the sample's three-dimensional (3D) movement and direct interaction with the in vivo sample. We demonstrate a real-time 3D interactive experiment with Caenorhabditis elegans.
Affiliation(s)
- Jonghyun Kim
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Seokil Moon
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Youngmo Jeong
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Changwon Jang
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
- Youngmin Kim
- Korea Electronics Technology Institute, VR/AR Research Center, Seoul, Republic of Korea
- Byoungho Lee
- Seoul National University, School of Electrical and Computer Engineering, Seoul, Republic of Korea
10. One Shot 360-Degree Light Field Capture and Reconstruction with Depth Extraction Based on Optical Flow for Light Field Camera. Applied Sciences (Basel) 2018. DOI: 10.3390/app8060890
11. Li X, Zhao M, Xing Y, Zhang HL, Li L, Kim ST, Zhou X, Wang QH. Designing optical 3D images encryption and reconstruction using monospectral synthetic aperture integral imaging. Optics Express 2018; 26:11084-11099. PMID: 29716046. DOI: 10.1364/oe.26.011084
Abstract
This paper realizes optical 3D image encryption and reconstruction by applying a geometric calibration algorithm to a monospectral synthetic aperture integral imaging system. The method simultaneously improves the quality of the 3D images, by eliminating crosstalk from unaligned cameras, and increases the security of multispectral 3D image encryption, by importing randomly generated maximum-length cellular automata into the Fresnel transform encoding algorithm. Furthermore, compared with previous methods that encrypt full 3D multispectral information, the proposed method encrypts only monospectral data, which greatly reduces complexity. We present experimental results of 3D image encryption and volume-pixel computational reconstruction to verify the performance of the proposed method; they validate its feasibility and robustness, even under severe degradation.
12. Li X, Zhao M, Xing Y, Li L, Kim ST, Zhou X, Wang QH. Optical encryption via monospectral integral imaging. Optics Express 2017; 25:31516-31527. PMID: 29245826. DOI: 10.1364/oe.25.031516
Abstract
Optical integral imaging (II) uses a lenslet array and a CCD sensor as the 3D acquisition device, with multispectral information acquired through a color filter array (CFA). However, color crosstalk in the CFA diminishes the color gamut and reduces resolution. In this paper, we present a monospectral II encryption approach using a monospectral camera array (MCA). The monospectral II system captures images with the MCA, which eliminates color crosstalk among adjacent spectral channels. Notably, the elemental images (EIs) captured from the colored scene are grayscale, so color-image encryption is converted into grayscale encryption. Consequently, this approach reduces the computational load of image encoding and decoding by nearly two-thirds compared with similar works. Afterwards, an optimized super-resolution reconstruction algorithm is introduced to improve the viewing resolution.
13. Choi HJ, Kang EK, Ju GW, Song YM, Lee YT. Shape-controllable, bottom-up fabrication of microlens using oblique angle deposition. Optics Letters 2016; 41:3328-3330. PMID: 27420527. DOI: 10.1364/ol.41.003328
Abstract
This Letter reports a novel method for the simple fabrication of microlens arrays with a controlled shape and diameter on glass substrates. Multilayer stacks of silicon dioxide deposited by oblique angle deposition with hole mask patterns enable microlens formation. Precise control of mask height and distance, as well as oblique angle steps between deposited layers, supports the controllability of microlens geometry. The fabricated microlens arrays with designed geometry exhibit uniform optical properties.
14. McLeod E, Ozcan A. Unconventional methods of imaging: computational microscopy and compact implementations. Reports on Progress in Physics 2016; 79:076001. PMID: 27214407. DOI: 10.1088/0034-4885/79/7/076001
Abstract
In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done to improve the resolution and sensitivity of microscopes while at the same time introducing new imaging modalities and making existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases these aspects go hand-in-hand, because computational techniques can simplify the demands placed on optical hardware in achieving a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section, we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with comments on opportunities, the demand for better results, and where we believe the field is heading.
Affiliation(s)
- Euan McLeod
- College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
15. Kim J, Jeong Y, Kim H, Lee CK, Lee B, Hong J, Kim Y, Hong Y, Lee SD, Lee B. F-number matching method in light field microscopy using an elastic micro lens array. Optics Letters 2016; 41:2751-2754. PMID: 27304280. DOI: 10.1364/ol.41.002751
Abstract
In light field microscopy (LFM), the F-number of the micro lens array (MLA) should match the image-side F-number of the objective lens to utilize the full resolution of the image sensor. We propose a new F-number matching method that can be applied to multiple objective lenses by using an elastic MLA. We fabricate an elastic MLA in polydimethylsiloxane (PDMS) using a micro-contact printing method and apply strain to vary the F-number. The strain response is analyzed, and an LFM system with the elastic MLA is demonstrated. The proposed system can increase the F-number by up to 27.3% and can be applied to multiple objective lenses.
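The matching condition this entry refers to is often computed from a standard approximation for the objective's image-side F-number, N ≈ M / (2·NA); the sketch below uses that textbook formula, not anything taken from this paper:

```python
def objective_image_side_f_number(magnification: float, na: float) -> float:
    """Approximate image-side F-number of a microscope objective.

    Standard approximation used when matching an MLA to an objective in
    light field microscopy: N ~= M / (2 * NA). The MLA F-number is chosen
    close to this value so that the microlens sub-images tile the sensor
    without overlapping or leaving unused gaps.
    """
    if na <= 0:
        raise ValueError("numerical aperture must be positive")
    return magnification / (2.0 * na)


# e.g., a 20x / 0.5 NA objective gives an image-side F-number of about 20,
# so the MLA F-number should be tuned toward 20 for that objective.
```

Switching objectives changes M and NA, and hence the target N, which is why a tunable (here, elastically strained) MLA is useful.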
16. Jeong Y, Kim J, Yeom J, Lee CK, Lee B. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera. Applied Optics 2015; 54:10333-10341. PMID: 26836855. DOI: 10.1364/ao.54.010333
Abstract
In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. Depending on the image plane of the light field camera, objects in virtual and real space are recorded simultaneously. The captured light field information is converted to elemental images in real time without pseudoscopic problems. In addition, we derive the characteristics and limitations of the light field camera as a 3D broadcasting capture device using precise geometric optics. The implemented system provides more accurate light fields than existing devices, without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method points toward a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods.
17. Jung H, Jeong KH. Monolithic polymer microlens arrays with high numerical aperture and high packing density. ACS Applied Materials & Interfaces 2015; 7:2160-2165. PMID: 25612820. DOI: 10.1021/am5077809
Abstract
This work reports a novel method for the monolithic, wafer-level fabrication of high numerical aperture polymer microlens arrays (high-NA MLAs) with high packing density (PD). The close-packed high-NA MLAs were fabricated by conformal deposition of an ultrathin fluorocarbon nanofilm followed by melting of cylindrical polymer islands. The NA and PD of hemispherical MLAs in a hexagonal arrangement reach 0.6 and 89%, respectively. The increased NA enhances lens transmission while narrowing the beam width down to 1.1 μm. The close-packed high-NA MLAs enable high photon collection efficiency, with a signal-to-noise ratio greater than 50:1.
Affiliation(s)
- Hyukjin Jung
- Department of Bio and Brain Engineering and KAIST Institute for Optical Science and Technology, Korea Advanced Institute of Science and Technology (KAIST) , 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea
18. Jung JH, Aloni D, Yitzhaky Y, Peli E. Active confocal imaging for visual prostheses. Vision Research 2014; 111:182-196. PMID: 25448710. DOI: 10.1016/j.visres.2014.10.023
Abstract
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants and other "sensory substitution devices" that use tactile or electrical stimulation. However, they all have low resolution, a limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress background clutter. We propose an active confocal imaging system based on light-field technology that enables a blind user of any visual prosthesis to efficiently scan, focus on, and "see" only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only the in-focus plane and the objects in it. After capturing a confocal image, a de-cluttering process removes clutter based on blur differences. In preliminary experiments we verified the positive impact of confocal background-clutter removal on object recognition in low-resolution, limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system based on light-field imaging, we confirmed that the concept of a confocal de-cluttered image can be realized effectively.
Affiliation(s)
- Jae-Hyun Jung
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Doron Aloni
- Department of Electro-Optics Engineering, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Yitzhak Yitzhaky
- Department of Electro-Optics Engineering, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA.
19. Park JH, Lee SK, Jo NY, Kim HJ, Kim YS, Lim HG. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays. Optics Express 2014; 22:25444-25454. PMID: 25401577. DOI: 10.1364/oe.22.025444
Abstract
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with an ordinary camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into a four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
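The back-projection step described here can be illustrated in a reduced toy form with one spatial and one angular dimension. This is an illustrative sketch only, not the authors' algorithm; the linear per-slice disparity model, integer shifts, and array shapes are all assumptions:

```python
import numpy as np

def backproject_focal_stack(stack: np.ndarray, disparities: np.ndarray,
                            n_angles: int) -> np.ndarray:
    """Toy back-projection of a focal sweep into a 2D (angle, position) ray field.

    stack: (D, W) array of 1D images, each focused at one of D focal distances.
    disparities: (D,) pixel disparity per unit angular index for each slice.
    Each focal-stack pixel is smeared back along the rays that would have
    converged on it at that focal distance (integer shifts for simplicity).
    """
    D, W = stack.shape
    ray_field = np.zeros((n_angles, W))
    for d in range(D):
        for u in range(n_angles):
            # ray through angular index u meets slice d at this lateral offset
            shift = int(round(disparities[d] * (u - n_angles // 2)))
            ray_field[u] += np.roll(stack[d], shift)
    return ray_field / D
```

In the full 4D case the same smearing runs over two angular indices, and the resulting spatio-angular field can then be resampled into elemental images, display layers, or a hologram, as the abstract notes.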