1
Bai B, Li Y, Luo Y, Li X, Çetintaş E, Jarrahi M, Ozcan A. All-optical image classification through unknown random diffusers using a single-pixel diffractive network. Light: Science & Applications 2023; 12:69. PMID: 36894546; PMCID: PMC9998891; DOI: 10.1038/s41377-023-01116-3. Received 08/20/2022; revised 02/22/2023; accepted 02/22/2023.
Abstract
Classification of an object behind a random and unknown scattering medium poses a challenging task for the computational imaging and machine vision fields. Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor. These methods demand relatively large-scale computing using deep neural networks running on digital computers. Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel. A set of transmissive diffractive layers, optimized using deep learning, forms a physical network that all-optically maps the spatial information of an input object behind a random diffuser into the power spectrum of the output light detected through a single pixel at the output plane of the diffractive network. We numerically demonstrated the accuracy of this framework using broadband radiation to classify unknown handwritten digits through random new diffusers, never used during the training phase, and achieved a blind testing accuracy of 87.74 ± 1.12%. We also experimentally validated our single-pixel broadband diffractive network by classifying handwritten digits "0" and "1" through a random diffuser using terahertz waves and a 3D-printed diffractive network. This single-pixel all-optical object classification system through random diffusers is based on passive diffractive layers that process broadband input light and can operate at any part of the electromagnetic spectrum by simply scaling the diffractive features in proportion to the wavelength range of interest. These results have various potential applications in, e.g., biomedical imaging, security, robotics, and autonomous driving.
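The readout scheme described above maps each object class to the power detected in a pre-assigned wavelength bin of the single-pixel measurement, so classification reduces to an argmax over the detected power spectrum. A toy numpy illustration of that readout (the spectral responses here are synthetic stand-ins, not the authors' trained diffractive model):

```python
import numpy as np

rng = np.random.default_rng(0)
num_bins = 10  # one spectral bin assigned per digit class

def classify_from_spectrum(power_spectrum):
    """Assign the class whose wavelength bin carries the most detected power."""
    return int(np.argmax(power_spectrum))

# Synthetic stand-in for the diffractive network's output: an ideal network
# concentrates the output power into the bin assigned to the true class.
true_class = 7
spectrum = 0.05 * rng.random(num_bins)  # weak background power in all bins
spectrum[true_class] += 1.0             # dominant power in the assigned bin

print(classify_from_spectrum(spectrum))  # prints 7
```

In the actual system this argmax happens physically: the diffractive layers route the input power so that the detector's spectral peak encodes the class, with no digital network in the loop.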
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- Bioengineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- Bioengineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- Bioengineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Xurong Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Ege Çetintaş
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- Bioengineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, 90095, USA
- Bioengineering Department, University of California, Los Angeles, California, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California, 90095, USA
2
Zhang Y, Wu Z, Lin P, Pan Y, Wu Y, Zhang L, Huangfu J. Hand gestures recognition in videos taken with a lensless camera. Optics Express 2022; 30:39520-39533. PMID: 36298902; DOI: 10.1364/oe.470324. Received 07/20/2022; accepted 09/29/2022.
Abstract
A lensless camera is an imaging system that uses a mask in place of a lens, making it thinner, lighter, and less expensive than a lensed camera. However, additional complex computation and time are required for image reconstruction. This work proposes a deep learning model named Raw3dNet that recognizes hand gestures directly on raw videos captured by a lensless camera without the need for image restoration. In addition to conserving computational resources, the reconstruction-free method provides privacy protection. Raw3dNet is a novel end-to-end deep neural network model for the recognition of hand gestures in lensless imaging systems. It is created specifically for raw video captured by a lensless camera and has the ability to properly extract and combine temporal and spatial features. The network is composed of two stages: 1. spatial feature extractor (SFE), which enhances the spatial features of each frame prior to temporal convolution; 2. 3D-ResNet, which implements spatial and temporal convolution of video streams. The proposed model achieves 98.59% accuracy on the Cambridge Hand Gesture dataset in the lensless optical experiment, which is comparable to the lensed-camera result. Additionally, the feasibility of physical object recognition is assessed. Further, we show that the recognition can be achieved with respectable accuracy using only a tiny portion of the original raw data, indicating the potential for reducing data traffic in cloud computing scenarios.
3
Douglass PM, O'Connor T, Javidi B. Automated sickle cell disease identification in human red blood cells using a lensless single random phase encoding biosensor and convolutional neural networks. Optics Express 2022; 30:35965-35977. PMID: 36258535; DOI: 10.1364/oe.469199. Received 06/29/2022; accepted 09/04/2022.
Abstract
We present a compact, field portable, lensless, single random phase encoding biosensor for automated classification between healthy and sickle cell disease human red blood cells. Microscope slides containing 3 µl wet mounts of whole blood samples from healthy and sickle cell disease afflicted human donors are input into a lensless single random phase encoding (SRPE) system for disease identification. A partially coherent laser source (laser diode) illuminates the cells under inspection, wherein the object complex amplitude propagates to and is pseudorandomly encoded by a diffuser; the intensity of the diffracted complex waveform is then captured by a CMOS image sensor. The recorded opto-biological signatures are transformed using local binary pattern map generation during preprocessing, then input into a pretrained convolutional neural network for classification between healthy and disease states. We further provide analysis that compares the performance of several neural network architectures to optimize our classification strategy. Additionally, we assess the performance and computational savings of classifying on subsets of the opto-biological signatures with substantially reduced dimensionality, including one-dimensional cropping of the recorded signatures. To the best of our knowledge, this is the first report of a lensless SRPE biosensor for human disease identification. As such, the presented approach and results can be significant for low-cost disease identification both in the field and for healthcare systems in developing countries which suffer from constrained resources.
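The local binary pattern (LBP) map generation used above as a preprocessing step compares each pixel with its eight neighbors and packs the comparisons into one byte per pixel. A minimal dense-grid sketch of the standard 8-neighbor LBP (illustrative only, not the authors' exact implementation):

```python
import numpy as np

def lbp_map(img):
    """8-neighbor local binary pattern: each interior pixel becomes a byte
    whose bits record which neighbors are >= the center pixel."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centers (interior pixels only)
    # Neighbor offsets in clockwise order; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        out |= (nb >= c).astype(np.uint8) << bit
    return out

speckle = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])
print(lbp_map(speckle))  # prints [[120]]
```

Because each output value depends only on local intensity ordering, the LBP map is insensitive to smooth illumination changes, which is why it helps stabilize speckle-like opto-biological signatures before classification.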
4
Gao Y, Xu W, Chen Y, Xie W, Cheng Q. Deep Learning-Based Photoacoustic Imaging of Vascular Network Through Thick Porous Media. IEEE Transactions on Medical Imaging 2022; 41:2191-2204. PMID: 35294347; DOI: 10.1109/tmi.2022.3158474.
Abstract
Photoacoustic imaging is a promising approach used to realize in vivo transcranial cerebral vascular imaging. However, the strong attenuation and distortion of the photoacoustic wave caused by the thick porous skull greatly affect the imaging quality. In this study, we developed a convolutional neural network based on U-Net to extract the effective photoacoustic information hidden in the speckle patterns obtained from vascular network image datasets under porous media. Our simulation and experimental results show that the proposed neural network can learn the mapping relationship between the speckle pattern and the target, and extract the photoacoustic signals of the vessels submerged in noise to reconstruct high-quality images of the vessels with a sharp outline and a clean background. Compared with the traditional photoacoustic reconstruction methods, the proposed deep learning-based reconstruction algorithm has a better performance with a lower mean absolute error, higher structural similarity, and higher peak signal-to-noise ratio of reconstructed images. In conclusion, the proposed neural network can effectively extract valid information from highly blurred speckle patterns for the rapid reconstruction of target images, which offers promising applications in transcranial photoacoustic imaging.
5
Cheng Q, Guo E, Gu J, Bai L, Han J, Zheng D. De-noising imaging through diffusers with autocorrelation. Applied Optics 2021; 60:7686-7695. PMID: 34613238; DOI: 10.1364/ao.425099. Received 03/19/2021; accepted 08/02/2021.
Abstract
Recovering targets through diffusers is an important topic as well as a general problem in optical imaging. The difficulty of recovering is increased by the noise interference caused by an imperfect imaging environment. Existing approaches generally require a high-signal-to-noise-ratio (SNR) speckle pattern to recover the target, but still have limitations in de-noising or generalizability. Here, using the high-SNR autocorrelation as a physical constraint, we propose a data-driven, two-stage (de-noising and reconstructing) method to improve robustness. Specifically, a two-stage convolutional neural network (CNN) called autocorrelation reconstruction (ACR) CNN is designed to de-noise and reconstruct targets from low-SNR speckle patterns. We experimentally demonstrate the robustness through various diffusers with different levels of noise, from simulated Gaussian noise to the detector and photon noise captured by the actual optical system. The de-noising stage improves the peak SNR from 20 to 38 dB in the system data, and the reconstructing stage, compared with the unconstrained method, successfully recovers targets hidden behind unknown diffusers in the presence of detector and photon noise. With the help of the physical constraint to optimize the learning process, our two-stage method improves generalizability and has potential in various fields such as imaging under low illumination.
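The autocorrelation used above as a physical constraint is conventionally computed via the Wiener–Khinchin theorem: the autocorrelation of a pattern equals the inverse Fourier transform of its power spectrum. A minimal numpy sketch of that computation (generic speckle processing, not the authors' ACR-CNN pipeline):

```python
import numpy as np

def autocorrelation(pattern):
    """Normalized circular autocorrelation of a 2D speckle pattern,
    computed via the Wiener-Khinchin theorem (IFFT of the power spectrum)."""
    x = pattern - pattern.mean()                  # remove the DC pedestal
    power_spectrum = np.abs(np.fft.fft2(x)) ** 2
    ac = np.fft.ifft2(power_spectrum).real        # circular autocorrelation
    ac = np.fft.fftshift(ac)                      # put zero lag at the center
    return ac / ac.max()                          # peak normalized to 1

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
ac = autocorrelation(speckle)
print(ac[32, 32])  # zero-lag peak: equals 1.0 after normalization
```

For imaging through thin diffusers within the memory-effect range, this speckle autocorrelation approximates the object's autocorrelation, which is what makes it a useful high-SNR constraint during training.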
6
Pan X, Nakamura T, Chen X, Yamaguchi M. Lensless inference camera: incoherent object recognition through a thin mask with LBP map generation. Optics Express 2021; 29:9758-9771. PMID: 33820129; DOI: 10.1364/oe.416613. Received 12/03/2020; accepted 03/08/2021.
Abstract
We propose a preliminary lensless inference camera (LLI camera) specialized for object recognition. The LLI camera performs computationally efficient data preprocessing on the pattern optically encoded by the mask, rather than performing computationally expensive image reconstruction before inference. Therefore, the LLI camera avoids expensive computation and achieves real-time inference. This work proposes a new data preprocessing approach, named local binary patterns map generation, dedicated to the pattern optically encoded by the mask. This preprocessing approach greatly improves the encoded pattern's robustness to local disturbances in the scene, making the LLI camera's practical application possible. The performance of the LLI camera is analyzed through optical experiments on handwritten digit recognition and gender estimation under conditions with changing illumination and a moving target.
7
Lee YTC, Fang YC, Tien CH. Deep neural network for coded mask cryptographical imaging. Applied Optics 2021; 60:1686-1693. PMID: 33690506; DOI: 10.1364/ao.415120. Received 11/17/2020; accepted 01/21/2021.
Abstract
We propose a cryptographic imaging scheme that combines optical encryption with computational decryption. To prevent personal privacy from being compromised during image formation, we applied a coded mask to optically encrypt the scene and utilized a deep neural network for computational decryption. For encryption, the sensor recorded a new representation of the original signal, deliberately not distinguishable by humans. For decryption, we reconstructed the image with a mean squared error of 0.028 and achieved 100% classification accuracy on the Japanese Female Facial Expression dataset. By means of feature visualization, we found that the coded mask served as a linear operator that obscured the spatial fidelity of the original scene while keeping the features needed for the post-recognition process. We believe the proposed framework can inspire more possibilities for unconventional imaging systems.
8
Kang I, Pang S, Zhang Q, Fang N, Barbastathis G. Recurrent neural network reveals transparent objects through scattering media. Optics Express 2021; 29:5316-5326. PMID: 33726070; DOI: 10.1364/oe.412890. Received 10/22/2020; accepted 01/29/2021.
Abstract
Scattering generally worsens the condition of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018); DOI: 10.1364/OPTICA.5.000803] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018); DOI: 10.1364/OPTICA.5.001181] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for the case of imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during the training and to reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics and utilize multiple speckle measurements from different angles to form a sequence of images for training. Recurrent neural networks (RNN) embedded with the dynamics extract the useful information and discard the redundancies, thus retrieving quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media out of speckle correlation among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve any other spatiotemporal dynamics.
9
Wetzstein G, Ozcan A, Gigan S, Fan S, Englund D, Soljačić M, Denz C, Miller DAB, Psaltis D. Inference in artificial intelligence with deep optics and photonics. Nature 2020; 588:39-47. PMID: 33268862; DOI: 10.1038/s41586-020-2973-6. Received 11/28/2019; accepted 08/20/2020.
Abstract
Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.
Affiliation(s)
- Aydogan Ozcan
- University of California, Los Angeles, Los Angeles, CA, USA
- Sylvain Gigan
- Laboratoire Kastler Brossel, Sorbonne Université, École Normale Supérieure, Collège de France, CNRS UMR 8552, Paris, France
- Dirk Englund
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Marin Soljačić
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Demetri Psaltis
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
10
Yamazaki K, Horisaki R, Tanida J. Imaging through scattering media based on semi-supervised learning. Applied Optics 2020; 59:9850-9854. PMID: 33175824; DOI: 10.1364/ao.402428. Received 07/09/2020; accepted 10/06/2020.
Abstract
We present a method for less-invasive imaging through scattering media. We use an image-to-image translation, which is called a cycle generative adversarial network (CycleGAN), based on semi-supervised learning with an unlabeled dataset. Our method was experimentally demonstrated by reconstructing object images displayed on a spatial light modulator between diffusers. In the demonstration, CycleGAN was trained with captured images and object candidate images that were not used for image capturing through the diffusers and were not paired with the captured images.
11
Sun L, Shi J, Wu X, Sun Y, Zeng G. Photon-limited imaging through scattering medium based on deep learning. Optics Express 2019; 27:33120-33134. PMID: 31878386; DOI: 10.1364/oe.27.033120. Received 10/10/2019; accepted 10/22/2019.
Abstract
Imaging under ultra-weak light conditions is heavily affected by Poisson noise. The problem becomes worse if a scattering medium is present in the optical path. Speckle patterns detected under ultra-weak light conditions carry very little information, which makes it difficult to reconstruct the image. Off-the-shelf methods are no longer applicable under these conditions. In this paper, we experimentally demonstrate the use of a deep learning network to reconstruct images through scattering media under ultra-weak light illumination. The weak light limitation of this method is analyzed. The random Poisson detection under weak light conditions obtains partial information of the object. Based on this property, we demonstrated better performance of our method by enlarging the training dataset with multiple detections of the speckle patterns. Our results demonstrate that our approach can reconstruct images through scattering media from close to 1 detected signal photon per pixel (PPP) per image.
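The photon-limited detection described above, roughly one detected signal photon per pixel, can be simulated by Poisson-sampling a normalized intensity pattern. A small numpy sketch (the speckle intensity here is a synthetic stand-in, not the authors' experimental data):

```python
import numpy as np

rng = np.random.default_rng(2)

def photon_limited_detection(intensity, photons_per_pixel=1.0):
    """Simulate detection at a given mean photon count per pixel:
    scale the intensity to the photon budget, then Poisson-sample it."""
    mean_counts = intensity / intensity.mean() * photons_per_pixel
    return rng.poisson(mean_counts)

speckle = rng.random((128, 128))          # stand-in speckle intensity
counts = photon_limited_detection(speckle, photons_per_pixel=1.0)
print(counts.mean())  # close to 1 detected photon per pixel on average
```

Because each detection is an independent Poisson draw, repeated detections of the same speckle pattern yield different count maps, which is the property the paper exploits to enlarge the training dataset.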
12
Sun Y, Shi J, Sun L, Fan J, Zeng G. Image reconstruction through dynamic scattering media based on deep learning. Optics Express 2019; 27:16032-16046. PMID: 31163790; DOI: 10.1364/oe.27.016032. Received 03/13/2019; accepted 05/10/2019.
Abstract
Under complex scattering conditions, it is very difficult to capture clear object images hidden behind the media by modelling the inverse problem. With dynamic scattering media, the challenge increases. For solving the inverse problem, we propose a new class-specific image reconstruction algorithm. The method, based on deep learning, classifies blurred scattering images according to scattering conditions and then recovers clear images hidden behind the media. The deep learning network is used to learn the mapping relationship between the object and the scattering image rather than characterizing the scattering media explicitly or parametrically. 25000 scattering images are obtained under five sets of dynamic scattering conditions to verify the feasibility of the proposed method. In addition, the generalizability of the method has been verified successfully. Compared with a common CNN method, our algorithm shows better performance, reconstructing higher-quality images. Furthermore, for a given scattering image with an unknown scattering condition, the closest scattering condition can be identified by the classification network, and the corresponding clear image is then restored by the reconstruction network.
13
Nishizaki Y, Valdivia M, Horisaki R, Kitaguchi K, Saito M, Tanida J, Vera E. Deep learning wavefront sensing. Optics Express 2019; 27:240-251. PMID: 30645371; DOI: 10.1364/oe.27.000240. Received 09/27/2018; accepted 12/19/2018.
Abstract
We present a new class of wavefront sensors by extending their design space based on machine learning. This approach simplifies both the optical hardware and image processing in wavefront sensing. We experimentally demonstrated a variety of image-based wavefront sensing architectures that can directly estimate Zernike coefficients of aberrated wavefronts from a single intensity image by using a convolutional neural network. We also demonstrated that the proposed deep learning wavefront sensor can be trained to estimate wavefront aberrations stimulated by a point source and even extended sources.
14
Turpin A, Vishniakou I, Seelig JD. Light scattering control in transmission and reflection with neural networks. Optics Express 2018; 26:30911-30929. PMID: 30469982; DOI: 10.1364/oe.26.030911.
Abstract
Scattering often limits the controlled delivery of light in applications such as biomedical imaging, optogenetics, optical trapping, and fiber-optic communication or imaging. Such scattering can be controlled by appropriately shaping the light wavefront entering the material. Here, we develop a machine-learning approach for light control. Using pairs of binary intensity patterns and intensity measurements we train neural networks (NNs) to provide the wavefront corrections necessary to shape the beam after the scatterer. Additionally, we demonstrate that NNs can be used to find a functional relationship between transmitted and reflected speckle patterns. Establishing the validity of this relationship, we focus and scan in transmission through opaque media using reflected light. Our approach shows the versatility of NNs for light shaping, for efficiently and flexibly correcting for scattering, and in particular the feasibility of transmission control based on reflected light.
15
Wang P, Di J. Deep learning-based object classification through multimode fiber via a CNN-architecture SpeckleNet. Applied Optics 2018; 57:8258-8263. PMID: 30461775; DOI: 10.1364/ao.57.008258. Received 06/26/2018; accepted 08/30/2018.
Abstract
With the fast development of deep learning, its performance in image classification and object recognition has improved dramatically. These promising results could also be applied to better understand speckle patterns in scattering media imaging. In this paper, a multimode fiber is used as the scattering medium, and 4000 original face and nonface images are transmitted through it, generating speckle patterns. SpeckleNet, a convolutional neural network, is proposed and trained with 3600 of these speckle patterns, and its output-layer activations are fed to a support vector machine (SVM) classifier. The binary classification accuracy of the proposed CNN-architecture SpeckleNet for face and nonface speckle patterns, tested on another 400 speckle patterns, is about 96%, an improvement over the accuracy of the pure SVM method. The promising results confirm that the combination with deep learning could lead to lower optical and computational costs in optical sensing and contribute to practical applications in optics.
16
Niu Z, Shi J, Sun L, Zhu Y, Fan J, Zeng G. Photon-limited face image super-resolution based on deep learning. Optics Express 2018; 26:22773-22782. PMID: 30184932; DOI: 10.1364/oe.26.022773. Received 06/08/2018; accepted 07/20/2018.
Abstract
With a single-photon camera (SPC), imaging under ultra-weak lighting conditions has wide-ranging applications, from remote sensing to night vision, but it can seriously suffer from the under-sampling inherent in SPC detection. Some approaches have been proposed to solve the under-sampling problem by detecting the objects many times to generate high-resolution images and performing noise reduction to suppress the Poisson noise inherent in low-flux operation. To address the under-sampling problem more effectively, a new approach is developed in this paper to reconstruct high-resolution images with lower noise by seamlessly integrating low-light-level imaging with deep learning. In our new approach, all the objects are detected only once by the SPC, and a deep network is learned to reduce noise and reconstruct high-resolution images from the detected noisy under-sampled images. To demonstrate feasibility, we first verify the approach experimentally on a specific category: human faces. The deep network recovers high-resolution, lower-noise face images from new noisy under-sampled face images, achieving a 4× up-scaling factor. Our experimental results have demonstrated that our proposed method can generate high-quality images from only ~0.2 detected signal photons per pixel.
17
Horisaki R, Takagi R, Tanida J. Deep-learning-generated holography. Applied Optics 2018; 57:3859-3863. PMID: 29791353; DOI: 10.1364/ao.57.003859.
Abstract
We present a method for computer-generated holography based on deep learning. The inverse process of light propagation is regressed with a number of computationally generated speckle data sets. This method enables noniterative calculation of computer-generated holograms (CGHs). The proposed method was experimentally verified with a phase-only CGH.
18
Horisaki R, Takagi R, Tanida J. Learning-based single-shot superresolution in diffractive imaging. Applied Optics 2017; 56:8896-8901. PMID: 29131168; DOI: 10.1364/ao.56.008896. Received 08/04/2017; accepted 10/05/2017.
Abstract
We present a method of retrieving a superresolved object field from a single captured intensity image in diffraction-limited diffractive imaging based on machine learning. In this method, the inverse process of diffractive imaging is regressed by using a number of pairs, each consisting of object and captured images. The proposed method is experimentally demonstrated by using a lensless imaging setup with or without scattering media.
19
Satat G, Tancik M, Gupta O, Heshmat B, Raskar R. Object classification through scattering media with deep learning on time resolved measurement. Optics Express 2017; 25:17466-17479. PMID: 28789238; DOI: 10.1364/oe.25.017466. Received 05/12/2017; accepted 06/23/2017.
Abstract
We demonstrate an imaging technique that allows identification and classification of objects hidden behind scattering media and is invariant to changes in calibration parameters within a training range. Traditional techniques to image through scattering solve an inverse problem and are limited by the need to tune a forward model with multiple calibration parameters (e.g., camera field of view, illumination position). Instead of tuning a forward model and directly inverting the optical scattering, we use a data-driven approach and leverage convolutional neural networks (CNN) to learn a model that is invariant to calibration parameter variations within the training range and nearly invariant beyond that. This effectively allows robust imaging through scattering conditions that is not sensitive to calibration. The CNN is trained with a large synthetic dataset generated with a Monte Carlo (MC) model that contains random realizations of major calibration parameters. The method is evaluated with a time-resolved camera and multiple experimental results are provided, including pose estimation of a mannequin hidden behind a paper sheet with 23 correct classifications out of 30 tests in three poses (76.6% accuracy on real-world measurements). This approach paves the way towards real-time practical non-line-of-sight (NLOS) imaging applications.
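The Monte Carlo generation of synthetic training data above simulates photon transport through the scatterer under randomized parameters. As a caricature of the idea, here is a toy 1D photon random walk through a slab (simplified isotropic scattering with hypothetical parameters, not the authors' MC model):

```python
import numpy as np

rng = np.random.default_rng(3)

def transmit_fraction(thickness, mean_free_path, n_photons=2000, max_steps=200):
    """Toy Monte Carlo: photons random-walk through a slab of the given
    thickness; returns the fraction that exits through the far side."""
    transmitted = 0
    for _ in range(n_photons):
        z, dz = 0.0, 1.0  # start at the front face, moving forward
        for _ in range(max_steps):
            z += dz * rng.exponential(mean_free_path)  # free flight to next event
            if z >= thickness:
                transmitted += 1
                break
            if z < 0:
                break  # escaped back out of the front face
            dz = rng.choice([-1.0, 1.0])  # isotropic (1D) rescattering
        # photons still inside after max_steps are treated as absorbed
    return transmitted / n_photons

# Thicker slabs (in mean free paths) transmit fewer photons.
print(transmit_fraction(1.0, 1.0), transmit_fraction(5.0, 1.0))
```

In the paper's setting, each random draw of calibration parameters (geometry, illumination position, etc.) produces a different simulated measurement, so the CNN sees the full calibration range during training instead of a single tuned forward model.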
20
Horisaki R, Takagi R, Tanida J. Learning-based focusing through scattering media. Applied Optics 2017; 56:4358-4362. PMID: 29047862; DOI: 10.1364/ao.56.004358.
Abstract
We present a machine-learning-based method for light focusing through scattering media. In this method, the optical process in a scattering medium is computationally inverted based on a nonlinear regression algorithm with a number of training input-output pairs through the medium, and an input optimized for a target output is calculated. We experimentally demonstrate focusing via a process involving randomness due to a scattering medium and nonlinearity due to double modulation with a spatial light modulator. Our approach realizes model-free control of optical fields, where optical processes or models are unknown.
21
Horisaki R, Takagi R, Tanida J. Learning-based imaging through scattering media. Optics Express 2016; 24:13738-13743. PMID: 27410537; DOI: 10.1364/oe.24.013738.
Abstract
We present a machine-learning-based method for single-shot imaging through scattering media. The inverse scattering process was calculated based on a nonlinear regression algorithm by learning a number of training object-speckle pairs. In the experimental demonstration, multilayer phase objects between scattering plates were reconstructed from intensity measurements. Our approach enables model-free sensing, where it is not necessary to know the sensing processes/models.