51
Chen H, Gao Y, Liu X, Zhou Z. Imaging through scattering media using speckle pattern classification based support vector regression. Opt Express 2018;26:26663-26678. PMID: 30469748. DOI: 10.1364/oe.26.026663.
Abstract
Imaging through scattering media is a common requirement in many biomedical imaging applications. The object image deteriorates into an unrecognizable speckle pattern when a scattering medium is present, and many methods have been investigated to reconstruct the object image when only the speckle pattern is available. In this paper, we demonstrate a method of single-shot imaging through scattering media based on classification and support vector regression of the measured speckle pattern. We prove the feasibility of speckle pattern classification and present the related formulas. We show that imaging capability without speckle pattern classification is specific and limited, and that the proposed speckle-pattern-classification-based support vector regression method makes up for this deficiency. Experimental results show that, with our approach, speckle patterns can be classified even when object images are unavailable, and object images can be reconstructed with high fidelity. The proposed approach to imaging through scattering media is expected to be applicable to a variety of sensing schemes.
52
Wang H, Lyu M, Situ G. eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction. Opt Express 2018;26:22603-22614. PMID: 30184918. DOI: 10.1364/oe.26.022603.
Abstract
It is well known that in-line digital holography (DH) makes use of the full pixel count in forming the holographic image, but it usually requires phase-shifting or phase-retrieval techniques to remove the zero-order and twin-image terms, resulting in a two-step reconstruction process: phase recovery and focusing. Here, we propose a one-step, end-to-end learning-based method for in-line holographic reconstruction, named eHoloNet, which reconstructs the object wavefront directly from a single-shot in-line digital hologram. In addition, the proposed learning-based DH technique is strongly robust to changes in the optical path difference between the reference and object beams and does not require the reference beam to be a plane or spherical wave.
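The zero-order and twin-image terms that eHoloNet sidesteps follow directly from expanding the in-line hologram intensity |R + O|². The NumPy sketch below (illustrative field size and statistics, not the paper's setup or network) makes the decomposition explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex object wavefront O and a unit-amplitude in-line reference R.
# Field size and statistics are illustrative only.
N = 64
O = 0.3 * rng.standard_normal((N, N)) * np.exp(
    1j * rng.uniform(0, 2 * np.pi, (N, N))
)
R = np.ones((N, N), dtype=complex)

# In-line hologram intensity: |R + O|^2 expands into four terms.
H = np.abs(R + O) ** 2
zero_order = np.abs(R) ** 2 + np.abs(O) ** 2  # DC / zero-order terms
twin = R.conj() * O + R * O.conj()            # real-image + twin-image terms

# The expansion is exact; the twin terms are real-valued by construction.
assert np.allclose(H, zero_order + twin.real)
```

Phase shifting or phase retrieval is normally needed precisely to separate the twin terms from the zero order; the end-to-end network instead learns the inverse map from H back to O directly.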
53
Niu Z, Shi J, Sun L, Zhu Y, Fan J, Zeng G. Photon-limited face image super-resolution based on deep learning. Opt Express 2018;26:22773-22782. PMID: 30184932. DOI: 10.1364/oe.26.022773.
Abstract
With a single-photon camera (SPC), imaging under ultra-weak lighting conditions has wide-ranging applications, from remote sensing to night vision, but it suffers severely from the under-sampling inherent in SPC detection. Previous approaches address this problem by detecting the objects many times to build up high-resolution images and by performing noise reduction to suppress the Poisson noise inherent in low-flux operation. To address the under-sampling problem more effectively, this paper develops a new approach that reconstructs high-resolution, lower-noise images by seamlessly integrating low-light-level imaging with deep learning. In our approach, each object is detected only once by the SPC, and a deep network is trained to reduce noise and reconstruct high-resolution images from the detected noisy, under-sampled images. To demonstrate feasibility, we first verify the approach experimentally on a specific category of objects: human faces. The trained network recovers high-resolution, lower-noise face images from new noisy, under-sampled face images at a 4× up-scaling factor. Our experimental results demonstrate that the proposed method can generate high-quality images from only ~0.2 detected signal photons per pixel.
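The degraded input such a network faces can be simulated directly: average-pool a high-resolution image by the 4× factor, then draw Poisson counts at the quoted flux. This NumPy sketch (a random stand-in image, not the paper's data pipeline) illustrates the degradation model only, not the reconstruction network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in high-resolution intensity image in [0, 1] (the paper uses faces).
hr = rng.uniform(0.0, 1.0, size=(64, 64))

# 4x under-sampling: average-pool down to the SPC's coarse grid.
lr = hr.reshape(16, 4, 16, 4).mean(axis=(1, 3))

# Photon-limited detection: rescale so the mean flux is ~0.2 signal
# photons per pixel, then draw Poisson-distributed counts.
mean_flux = 0.2
counts = rng.poisson(lr * (mean_flux / lr.mean()))

print(counts.shape)  # (16, 16)
```

Pairs of `hr` and `counts` are exactly the ground-truth/input pairs a denoising super-resolution network would be trained on.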
54
Horisaki R, Takagi R, Tanida J. Deep-learning-generated holography. Appl Opt 2018;57:3859-3863. PMID: 29791353. DOI: 10.1364/ao.57.003859.
Abstract
We present a method for computer-generated holography based on deep learning. The inverse process of light propagation is regressed with a number of computationally generated speckle data sets. This method enables noniterative calculation of computer-generated holograms (CGHs). The proposed method was experimentally verified with a phase-only CGH.
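A noniterative CGH generator still needs a forward propagation model to produce its speckle training pairs. The angular-spectrum sketch below (arbitrary wavelength, pixel pitch, and distance; not the authors' code) shows how a phase-only hologram maps to the intensity pattern a network would be trained to invert:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

rng = np.random.default_rng(2)

# Phase-only hologram (the quantity a learned generator would output).
cgh = np.exp(1j * rng.uniform(0, 2 * np.pi, (128, 128)))

# Forward-propagate to the target plane; (speckle, hologram phase) pairs
# form one training example for regressing the inverse of propagation.
speckle = np.abs(angular_spectrum(cgh, 633e-9, 8e-6, 0.05)) ** 2

# |exp(i*kz*z)| = 1 for real kz, so free-space propagation conserves energy.
assert np.isclose(speckle.sum(), (np.abs(cgh) ** 2).sum())
```

The learned regression replaces the iterative phase-retrieval loop (e.g., Gerchberg-Saxton) that would otherwise be run through this same forward model.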
55
Yuan X, Pu Y. Parallel lensless compressive imaging via deep convolutional neural networks. Opt Express 2018;26:1962-1977. PMID: 29401917. DOI: 10.1364/oe.26.001962.
Abstract
We report a parallel lensless compressive imaging system that achieves real-time reconstruction using deep convolutional neural networks. A prototype composed of a low-cost LCD, 16 photodiodes, and isolation chambers has been built. Each of the 16 channels captures a fraction of the scene at 16×16 pixels, and the channels operate in parallel. An efficient inversion algorithm based on deep convolutional neural networks is developed to reconstruct the image. We have demonstrated encouraging results using only 2% measurements per sensor (relative to the pixel count, e.g., 5 measurements for a 16×16-pixel block) for digits and around 10% measurements per sensor for facial images.
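One channel of such a system reduces to a tiny linear model: a handful of coded measurements of a 16×16 block. The sketch below (random binary patterns and a pseudo-inverse baseline are assumptions; the paper's reconstruction is a trained CNN) shows why ~2% measurements make the inverse problem severely under-determined:

```python
import numpy as np

rng = np.random.default_rng(3)

# One channel: a 16x16-pixel fraction of the scene, observed through
# ~2% random binary measurements (5 rows for 256 unknowns).
block = rng.uniform(0.0, 1.0, size=(16, 16))
x = block.ravel()                                     # 256 unknowns
A = rng.integers(0, 2, size=(5, 256)).astype(float)   # LCD on/off codes (assumed)
y = A @ x                                             # 5 photodiode readings

# Minimum-norm least-squares solution: the naive baseline that a learned
# CNN inversion improves on by exploiting image priors.
x_hat = np.linalg.pinv(A) @ y

# The naive estimate reproduces the measurements exactly, but with
# 5 equations for 256 unknowns it cannot pin down the block itself.
assert np.allclose(A @ x_hat, y)
```

The 16 channels are independent, so in practice this inversion runs once per block in parallel, which is what enables real-time operation.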
56
Lyu M, Wang W, Wang H, Wang H, Li G, Chen N, Situ G. Deep-learning-based ghost imaging. Sci Rep 2017;7:17865. PMID: 29259269. PMCID: PMC5736587. DOI: 10.1038/s41598-017-18171-7.
Abstract
In this manuscript, we propose a novel framework for computational ghost imaging: ghost imaging using deep learning (GIDL). With a set of images reconstructed using traditional GI and the corresponding ground-truth counterparts, a deep neural network was trained so that it learns the sensing model and improves the quality of image reconstruction. Moreover, detailed comparisons between images reconstructed using deep learning and using compressive sensing show that the proposed GIDL performs much better at extremely low sampling rates. Numerical simulations and optical experiments demonstrate the proposed GIDL.
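The "traditional GI" reconstructions that serve as the network's inputs come from a simple ensemble correlation between the bucket signal B and the illumination patterns I, G = ⟨BI⟩ − ⟨B⟩⟨I⟩. A self-contained NumPy sketch (toy 8×8 object and random patterns, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth transmissive object (8x8, binary for clarity).
obj = (rng.uniform(size=(8, 8)) > 0.7).astype(float)

# M random illumination patterns I_m and bucket (single-pixel) signals
# B_m = sum(I_m * obj) -- all the detector records per pattern.
M = 20000
I = rng.uniform(size=(M, 8, 8))
B = (I * obj).sum(axis=(1, 2))

# Traditional GI reconstruction: ensemble correlation G = <B I> - <B><I>.
G = (B[:, None, None] * I).mean(axis=0) - B.mean() * I.mean(axis=0)

# The correlation image is brighter on object pixels than on background.
assert G[obj == 1].mean() > G[obj == 0].mean()
```

At low sampling rates (small M) this correlation estimate becomes very noisy, which is exactly the regime where the trained network provides the largest improvement over both plain GI and compressive sensing.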
Affiliation(s)
- Meng Lyu, Wei Wang, Hao Wang, Haichao Wang, Guowei Li, Ni Chen, Guohai Situ
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
57
Horisaki R, Takagi R, Tanida J. Learning-based single-shot superresolution in diffractive imaging. Appl Opt 2017;56:8896-8901. PMID: 29131168. DOI: 10.1364/ao.56.008896.
Abstract
We present a method of retrieving a superresolved object field from a single captured intensity image in diffraction-limited diffractive imaging based on machine learning. In this method, the inverse process of diffractive imaging is regressed by using a number of pairs, each consisting of object and captured images. The proposed method is experimentally demonstrated by using a lensless imaging setup with or without scattering media.
58
Satat G, Tancik M, Gupta O, Heshmat B, Raskar R. Object classification through scattering media with deep learning on time resolved measurement. Opt Express 2017;25:17466-17479. PMID: 28789238. DOI: 10.1364/oe.25.017466.
Abstract
We demonstrate an imaging technique that identifies and classifies objects hidden behind scattering media and is invariant to changes in calibration parameters within a training range. Traditional techniques for imaging through scattering solve an inverse problem and are limited by the need to tune a forward model with multiple calibration parameters (e.g., camera field of view, illumination position). Instead of tuning a forward model and directly inverting the optical scattering, we take a data-driven approach and leverage convolutional neural networks (CNNs) to learn a model that is invariant to calibration-parameter variations within the training range and nearly invariant beyond it. This enables robust imaging through scattering that is not sensitive to calibration. The CNN is trained on a large synthetic dataset generated with a Monte Carlo (MC) model containing random realizations of the major calibration parameters. The method is evaluated with a time-resolved camera, and multiple experimental results are provided, including pose estimation of a mannequin hidden behind a paper sheet with 23 correct classifications out of 30 tests across three poses (76.6% accuracy on real-world measurements). This approach paves the way toward real-time practical non-line-of-sight (NLOS) imaging applications.
59
Horisaki R, Takagi R, Tanida J. Learning-based focusing through scattering media. Appl Opt 2017;56:4358-4362. PMID: 29047862. DOI: 10.1364/ao.56.004358.
Abstract
We present a machine-learning-based method for light focusing through scattering media. In this method, the optical process in a scattering medium is computationally inverted based on a nonlinear regression algorithm with a number of training input-output pairs through the medium, and an input optimized for a target output is calculated. We experimentally demonstrate focusing via a process involving randomness due to a scattering medium and nonlinearity due to double modulation with a spatial light modulator. Our approach realizes model-free control of optical fields, where optical processes or models are unknown.