1. Bian Y, Wang F, Liu H, Yuan H, Li S, Huang W, Situ G. Passive imaging through inhomogeneous scattering media. Sci Rep 2024; 14:15857. [PMID: 38982213] [PMCID: PMC11233588] [DOI: 10.1038/s41598-024-66449-4]
Abstract
According to the atmospheric scattering model (ASM), the object signal is attenuated exponentially as the imaging distance increases. This imposes limitations on ASM-based methods in situations where the scattering medium one wishes to look through is inhomogeneous. Here, we extend the ASM by taking into account the spatial variation of the medium density, and propose a two-step method for imaging through inhomogeneous scattering media. In the first step, the proposed method eliminates the direct-current component of the scattered pattern by subtracting the estimated global distribution (background). In the second step, it eliminates the randomized components of the scattered light by threshold truncation, followed by histogram equalization to further enhance the contrast. Outdoor experiments were carried out to demonstrate the proposed method.
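The two-step pipeline described above (background subtraction, then threshold truncation and histogram equalization) can be sketched in a few lines. The low-pass background estimate, the percentile threshold, and all parameter names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def two_step_enhance(speckle, keep_frac=0.05, trunc_pct=50):
    """Illustrative two-step enhancement of a scattered pattern."""
    img = speckle.astype(float)
    # Step 1: estimate the slowly varying background (the direct-current
    # component) as the low-frequency part of the image, then subtract it.
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f[np.hypot(fy, fx) > keep_frac] = 0.0   # keep only low frequencies
    residual = img - np.fft.ifft2(f).real
    # Step 2: threshold truncation suppresses the randomized scatter floor,
    # then histogram equalization (via the empirical CDF) stretches contrast.
    truncated = np.clip(residual - np.percentile(residual, trunc_pct), 0, None)
    ranks = truncated.ravel().argsort().argsort().reshape(truncated.shape)
    return ranks / (truncated.size - 1)

enhanced = two_step_enhance(np.random.default_rng(0).random((64, 64)))
```

The output is rank-equalized into [0, 1]; a real implementation would tune the background filter to the spatial scale of the medium's inhomogeneity.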
Affiliation(s)
- Yaoming Bian: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Fei Wang: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Haishan Liu: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Haiming Yuan: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Siteng Li: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Wenxin Huang: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Guohai Situ: Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
2. Matsuda N, Tanida J, Naruse M, Horisaki R. Noninvasive holographic imaging through dynamically scattering media. Optics Letters 2024; 49:2389-2392. [PMID: 38691726] [DOI: 10.1364/ol.516083]
Abstract
We present a noninvasive method for quantitative phase imaging through dynamically scattering media. A complex amplitude object, illuminated with coherent light, is captured through a dynamically scattering medium and a variable coded aperture, without the need for interferometric measurements or imaging optics. The complex amplitude of the object is computationally retrieved from intensity images captured with multiple coded aperture patterns, using a stochastic gradient descent algorithm. We demonstrate the proposed method both numerically and experimentally.
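As a toy illustration of this retrieval strategy, the intensity measurements through several coded apertures can be posed as a least-squares problem and minimized by gradient descent (full-batch here, for simplicity). The single-FFT forward model, the sizes, and the step size are assumptions of this sketch, not the authors' optical system:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 32, 12                          # object pixels, coded-aperture patterns
x_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # toy complex object
masks = rng.integers(0, 2, size=(k, n)).astype(float)  # variable coded apertures

def forward(x):
    # Propagation to the sensor modeled, for illustration, as a Fourier transform.
    return np.fft.fft(masks * x, axis=1)

y = np.abs(forward(x_true)) ** 2       # intensity-only measurements

def loss(x):
    return np.sum((np.abs(forward(x)) ** 2 - y) ** 2)

x = np.ones(n, dtype=complex)          # initial guess
history = [loss(x)]
for _ in range(200):
    z = forward(x)
    # Wirtinger-style gradient of the intensity-matching loss w.r.t. conj(x),
    # up to a constant factor absorbed into the step size.
    grad = np.sum(masks * np.fft.ifft((np.abs(z) ** 2 - y) * z, axis=1) * n, axis=0)
    x = x - 1e-7 * grad
    history.append(loss(x))
```

With enough aperture patterns the minimizer can match the true field up to a global phase; a practical system would use the calibrated propagation operator instead of a plain FFT.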
3. Osorio Quero C, Leykam D, Rondon Ojeda I. Res-U2Net: untrained deep learning for phase retrieval and image reconstruction. Journal of the Optical Society of America A 2024; 41:766-773. [PMID: 38856563] [DOI: 10.1364/josaa.511074]
Abstract
Conventional deep learning-based image reconstruction methods require a large amount of training data, which can be hard to obtain in practice. Untrained deep learning methods overcome this limitation by training a network to invert a physical model of the image formation process. Here we present a novel, to our knowledge, untrained Res-U2Net model for phase retrieval. We use the extracted phase information to determine changes in an object's surface and generate a mesh representation of its 3D structure. We compare the performance of Res-U2Net phase retrieval against UNet and U2Net using images from the GDXRAY dataset.
4. Mohammadzadeh M, Tabakhi S, Sayeh MR. Adaptive noise-resilient deep learning for image reconstruction in multimode fiber scattering. Applied Optics 2024; 63:3003-3014. [PMID: 38856444] [DOI: 10.1364/ao.519285]
Abstract
This research offers a comprehensive exploration of three pivotal aspects within the realm of fiber optics and piezoelectric materials. The study delves into the influence of voltage variation on piezoelectric displacement, examines the effects of bending multimode fiber (MMF) on data transmission, and scrutinizes the performance of an autoencoder in MMF image reconstruction with and without additional noise. To assess the impact of voltage variation on piezoelectric displacement, experiments were conducted by applying varying voltages to a piezoelectric material, meticulously measuring its radial displacement. The results revealed a notable increase in displacement with higher voltage, presenting implications for fiber stability and overall performance. Additionally, the investigation into the effects of bending MMF on data transmission highlighted that the bending process causes the fiber to become leaky and radiate power radially, potentially affecting data transmission. This crucial insight emphasizes the necessity for further research to optimize data transmission in practical fiber systems. Furthermore, the performance of an autoencoder model was evaluated using a dataset of MMF images, in diverse scenarios. The autoencoder exhibited impressive accuracy in reconstructing MMF images with high fidelity. The results underscore the significance of ongoing research in these domains, propelling advancements in fiber optic technology.
5. Mashiko R, Tanida J, Naruse M, Horisaki R. Extrapolated speckle-correlation imaging with an untrained deep neural network. Applied Optics 2023; 62:8327-8333. [PMID: 38037936] [DOI: 10.1364/ao.496924]
Abstract
We present a method for speckle-correlation imaging with an extended field of view to observe spatially non-sparse objects. In speckle-correlation imaging, an object is recovered from a non-invasively captured image through a scattering medium by assuming shift-invariance of the optical process, called the memory effect. The field of view of speckle-correlation imaging is limited by the size of the memory effect, and it can be extended by extrapolating the speckle correlation in the reconstruction process. However, spatially sparse objects have been assumed in the inversion process because of its severe ill-posedness. To address this issue, we introduce into speckle-correlation imaging a deep image prior, which regularizes the image statistics using the structure of an untrained convolutional neural network. We experimentally demonstrated the proposed method and showed the possibility of extending it to imaging through scattering media.
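The central quantity in speckle-correlation imaging is the autocorrelation of the captured speckle, which under the memory effect approximates that of the hidden object. It is conveniently computed via the Wiener-Khinchin theorem; this minimal sketch is a generic illustration, not the paper's extrapolation or deep-image-prior code:

```python
import numpy as np

def autocorrelation(img):
    """Circular autocorrelation via the Wiener-Khinchin theorem:
    the inverse FFT of the power spectrum, peak shifted to the center."""
    f = np.fft.fft2(img - img.mean())        # remove the mean (DC term) first
    ac = np.fft.ifft2(np.abs(f) ** 2).real   # power spectrum -> autocorrelation
    return np.fft.fftshift(ac)

speckle = np.random.default_rng(1).random((32, 32))
ac = autocorrelation(speckle)
# The zero-lag peak sits at the center and, by Parseval's theorem, equals
# the sum of squared deviations of the image from its mean.
```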
6. Lee J, Moon G, Ka S, Toh KA, Kim D. Deep Learning Approach for the Localization and Analysis of Surface Plasmon Scattering. Sensors (Basel) 2023; 23:8100. [PMID: 37836930] [PMCID: PMC10575049] [DOI: 10.3390/s23198100]
Abstract
Surface plasmon resonance microscopy (SPRM) combines the principles of traditional microscopy with the versatility of surface plasmons to develop label-free imaging methods. This paper describes a proof-of-principle approach based on deep learning that uses the Y-Net convolutional neural network model to improve the detection and analysis methodology of SPRM. A machine-learning-based image analysis technique provides one-shot analysis of SPRM images to estimate scattering parameters such as the scatterer location. The method was assessed by applying it to SPRM images and reconstructing an image from the network output for comparison with the original image. The results showed that deep learning can localize scatterers and predict other variables of scattering objects with high accuracy in a noisy environment. They also confirmed that, with a larger field of view, deep learning can improve traditional SPRM by localizing and producing scatterer characteristics in one shot, considerably increasing the detection capabilities of SPRM.
Affiliation(s)
- Donghyun Kim: School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea (shared by J.L., G.M., S.K., and K.-A.T.)
7. Tsukada T, Watanabe W. Central wavelength estimation in spectral imaging behind a diffuser via deep learning. Applied Optics 2023; 62:4143-4149. [PMID: 37706897] [DOI: 10.1364/ao.486600]
Abstract
Multispectral imaging through scattering media is an important practical issue in the field of sensing. The light from a scattering medium is expected to carry information about the spectral properties of the medium, as well as geometrical information. Because spatial and spectral information of the object is encoded in speckle images, the information about the structure and spectrum of the object behind the scattering medium can be estimated from those images. Here we propose a deep learning-based strategy that can estimate the central wavelength from speckle images captured with a monochrome camera. When objects behind scattering media are illuminated with narrowband light having different spectra with different spectral peaks, deep learning of speckle images acquired at different central wavelengths can extend the spectral region to reconstruct images and estimate the central wavelengths of the illumination light. The proposed method achieves central wavelength estimation in 1 nm steps for objects whose central wavelength varies in a range of 100 nm. Because our method can achieve image reconstruction and central wavelength estimation in a single shot using a monochrome camera, this technique will pave the way for multispectral imaging through scattering media.
8. Lan B, Wang H, Wang Y. One-to-all lightweight Fourier channel attention convolutional neural network for speckle reconstructions. Journal of the Optical Society of America A 2022; 39:2238-2245. [PMID: 36520741] [DOI: 10.1364/josaa.470991]
Abstract
Speckle reconstruction is a classical inverse problem in computational imaging. Inspired by the memory effect of the scattering medium, deep learning methods show excellent performance in extracting the correlation of speckle patterns. Advanced models nowadays generally include more than 10M parameters and mostly attend to spatial feature information; however, the frequency domain of images also contains precise hierarchical representations. Here we propose a one-to-all lightweight Fourier channel attention convolutional neural network (FCACNN) with Fourier channel attention and a res-connected bottleneck structure. Compared with the state-of-the-art model, i.e., the self-attention armed convolutional neural network (SACNN), our architecture has better feature extraction and reconstruction ability. The Pearson correlation coefficient and Jaccard index scores of FCACNN increased by at least 5.2% and 13.6%, respectively, compared with task-related models, while the lightweight FCACNN has only 1.15M parameters. Furthermore, the validation results show that the one-to-all FCACNN generalizes well to unseen speckle patterns such as handwritten letters and Quickdraw sketches.
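The two figures of merit quoted above are easy to state precisely. The threshold used to binarize images for the Jaccard index is an assumption of this sketch, not a value from the paper:

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two images (flattened)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard(a, b, thresh=0.5):
    """Jaccard index (intersection over union) of binarized images."""
    a, b = a >= thresh, b >= thresh
    return float((a & b).sum() / (a | b).sum())

truth = np.array([[1.0, 0.0], [1.0, 0.0]])
recon = np.array([[0.9, 0.1], [0.6, 0.4]])
```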
9. Wu MH, Chang Lee YT, Tien CH. Lensless facial recognition with encrypted optics and a neural network computation. Applied Optics 2022; 61:7595-7601. [PMID: 36256358] [DOI: 10.1364/ao.463017]
Abstract
Face recognition plays an essential role in biometric authentication. Conventional lens-based imaging preserves spatial fidelity with respect to the object, leading to privacy concerns. Based on point spread function engineering, we employed a coded mask as the encryption scheme, which yields a readily noninterpretable representation on the sensor. A deep neural network was used to extract features and conduct the identification. The advantage of this data-driven approach is that it neither requires correcting the lens aberration nor reveals any facial information along the image formation chain. To validate the proposed framework, we generated a dataset with practical photographing and data augmentation over a set of experimental parameters. The system accommodates a wide depth of field (DoF) (60-cm hyperfocal distance) and pose variation (0 to 45 deg). A 100% recognition accuracy on real-time measurements was achieved without the need for any physics priors, such as the encryption scheme.
10. Zhao Q, Li H, Yu Z, Woo CM, Zhong T, Cheng S, Zheng Y, Liu H, Tian J, Lai P. Speckle-Based Optical Cryptosystem and its Application for Human Face Recognition via Deep Learning. Advanced Science 2022; 9:e2202407. [PMID: 35748190] [PMCID: PMC9443436] [DOI: 10.1002/advs.202202407]
Abstract
Face recognition has become ubiquitous for authentication or security purposes. Meanwhile, there are increasing concerns about the privacy of face images, which are sensitive biometric data and should be protected. Software-based cryptosystems are widely adopted to encrypt face images, but the security level is limited by insufficient digital secret key length or computing power. Hardware-based optical cryptosystems can generate enormously longer secret keys and enable encryption at light speed, but most reported optical methods, such as double random phase encryption, are less compatible with other systems due to system complexity. In this study, a plain yet highly efficient speckle-based optical cryptosystem is proposed and implemented. A scattering ground glass is exploited to generate physical secret keys of 17.2 gigabit length and encrypt face images via seemingly random optical speckles at light speed. Face images can then be decrypted from random speckles by a well-trained decryption neural network, such that face recognition can be realized with up to 98% accuracy. Furthermore, attack analyses are carried out to show the cryptosystem's security. Due to its high security, fast speed, and low cost, the speckle-based optical cryptosystem is suitable for practical applications and can inspire other high-security cryptosystems.
Affiliation(s)
- Qi Zhao: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Huanhao Li: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Zhipeng Yu: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Chi Man Woo: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Tianting Zhong: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Shengfu Cheng: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China
- Yuanjin Zheng: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Honglin Liu: Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China; Key Laboratory for Quantum Optics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Jie Tian: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medical Science and Engineering, Beihang University, Beijing 100191, China; Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Puxiang Lai: Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR; Shenzhen Research Institute, Hong Kong Polytechnic University, Shenzhen 518057, China; Photonics Research Institute, Hong Kong Polytechnic University, Hong Kong SAR
11. Xu S, Yang X, Liu W, Jönsson J, Qian R, Konda PC, Zhou KC, Kreiß L, Wang H, Dai Q, Berrocal E, Horstmeyer R. Imaging Dynamics Beneath Turbid Media via Parallelized Single-Photon Detection. Advanced Science 2022; 9:e2201885. [PMID: 35748188] [PMCID: PMC9404405] [DOI: 10.1002/advs.202201885]
Abstract
Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but still remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well-established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, a single-photon avalanche diode array camera is utilized to simultaneously monitor the temporal dynamics of speckle fluctuations at the single-photon level from 12 different phantom tissue surface locations delivered via a customized fiber bundle array. Then a deep neural network is applied to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating tissue phantoms. The ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring up to 8 mm beneath a decorrelating tissue phantom with millimeter-scale resolution is demonstrated, and it is highlighted how the model can flexibly extend to monitor flow speed within buried phantom vessels.
Affiliation(s)
- Shiqi Xu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Xi Yang: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Wenhui Liu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Automation, Tsinghua University, Beijing 100084, China
- Joakim Jönsson: Division of Combustion Physics, Department of Physics, Lund University, Lund 22100, Sweden
- Ruobing Qian: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Kevin C. Zhou: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Lucas Kreiß: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Institute of Medical Biotechnology, Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Erlangen 91054, Germany
- Haoqian Wang: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing 100084, China
- Edouard Berrocal: Division of Combustion Physics, Department of Physics, Lund University, Lund 22100, Sweden
- Roarke Horstmeyer: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA; Department of Physics, Duke University, Durham, NC 27708, USA
12. Imaging Complex Targets through a Scattering Medium Based on Adaptive Encoding. Photonics 2022. [DOI: 10.3390/photonics9070467]
Abstract
The scattering of light passing through a complex medium poses challenges in many fields. Because of the randomness of scattering, any point in the collected speckle contains information from the entire target plane. The detailed information of complex targets is thus submerged in the aliased signal caused by random scattering, which degrades the quality of the recovered target. In this paper, a new neural network named Adaptive Encoding Scattering Imaging ConvNet (AESINet) is constructed by analyzing the physical prior of speckle-image redundancy to recover complex targets hidden behind an opaque medium. AESINet reduces the redundancy of the speckle through adaptive encoding, which effectively improves the separability of the data; the encoded speckle makes it easier for the network to extract features and helps restore the detailed information of the target. The necessity of adaptive encoding is analyzed, and the ability of this method to reconstruct complex targets is tested. The peak signal-to-noise ratio (PSNR) of the reconstructed target after adaptive encoding improves by 1.8 dB. This paper provides an effective reference for combining neural networks with other physical priors in scattering processes.
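The 1.8 dB figure refers to peak signal-to-noise ratio. For reference, PSNR over a known data range is computed as follows (a generic definition, independent of the AESINet pipeline):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 on a unit-range image gives MSE = 0.01 -> 20 dB.
clean = np.zeros((8, 8))
noisy = clean + 0.1
```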
13. Tsukada T, Watanabe W. Investigation of image plane for image reconstruction of objects through diffusers via deep learning. Journal of Biomedical Optics 2022; 27:056001. [PMID: 35509071] [PMCID: PMC9067610] [DOI: 10.1117/1.jbo.27.5.056001]
Abstract
SIGNIFICANCE: The imaging of objects hidden in light-scattering media is a vital practical task in a wide range of applications, including biological imaging. Deep-learning-based methods have been used to reconstruct images behind scattering media under complex scattering conditions, but improvements in the quality of the reconstructed images are required.
AIM: To investigate the effect of the image plane on the accuracy of reconstructed images.
APPROACH: Light reflected from an object passing through glass diffusers is captured while changing the image plane of the optical imaging system. Images are reconstructed by deep learning and evaluated in terms of the structural similarity index measure, classification accuracy of digital images, and training and testing error curves.
RESULTS: Reconstruction accuracy was higher when the diffuser, rather than the object, was imaged. The training and testing error curves show that the loss converged to lower values in fewer epochs when the diffuser was imaged.
CONCLUSIONS: The proposed approach improves the accuracy of reconstructing objects hidden behind glass diffusers by imaging the diffuser surface, and can be applied to objects at unknown locations in a scattering medium.
Affiliation(s)
- Takumi Tsukada: Ritsumeikan University, College of Science and Engineering, Department of Electrical and Electronic Engineering, Kusatsu, Shiga, Japan
- Wataru Watanabe: Ritsumeikan University, College of Science and Engineering, Department of Electrical and Electronic Engineering, Kusatsu, Shiga, Japan
14. Wertheimer ZA, Bar C, Levin A. Towards machine learning for heterogeneous inverse scattering in 3D microscopy. Optics Express 2022; 30:9854-9868. [PMID: 35299399] [DOI: 10.1364/oe.447075]
Abstract
Light propagating through a nonuniform medium scatters as it interacts with particles with different refractive properties, such as cells in tissue. In this work we aim to utilize this scattering process to learn a volumetric reconstruction of scattering parameters, in particular particle densities. We target microscopy applications where coherent speckle effects are an integral part of the imaging process. We argue that the key to successful learning is modeling realistic speckles in the training process. To this end, we build on recently developed, physically accurate speckle simulators. We also explore how to incorporate speckle statistics, such as the memory effect, into the learning framework. Overall, this paper contributes an analysis of multiple aspects of the network design, including the learning architecture, the training data, and the desired input features. We hope this study will pave the way for the future design of learning-based imaging systems in this challenging domain.
15. Song B, Jin C, Wu J, Lin W, Liu B, Huang W, Chen S. Deep learning image transmission through a multimode fiber based on a small training dataset. Optics Express 2022; 30:5657-5672. [PMID: 35209523] [DOI: 10.1364/oe.450999]
Abstract
An improved deep neural network incorporating an attention mechanism and a DSSIM loss function (AM_U_Net) is used to recover input images from speckles transmitted through a multimode fiber (MMF). The network is trained on a relatively small dataset and demonstrates strong reconstruction and generalization ability. Furthermore, a bimodal fusion method is developed based on S-polarization and P-polarization speckles, greatly improving the recognition accuracy. These findings show that AM_U_Net has remarkable capabilities for information recovery and transfer learning, and good tolerance and robustness under different MMF transmission conditions, indicating its significant potential for applications in medical imaging and secure communication.
16. Imaging through diffuse media using multi-mode vortex beams and deep learning. Sci Rep 2022; 12:1561. [PMID: 35091633] [PMCID: PMC8799672] [DOI: 10.1038/s41598-022-05358-w]
Abstract
Optical imaging through diffuse media is a challenging problem with applications in many fields such as biomedical imaging, non-destructive testing, and computer-assisted surgery. However, light interaction with diffuse media leads to multiple scattering of the photons in the angular and spatial domains, severely degrading the image reconstruction process. In this article, a novel method to image through diffuse media using multiple modes of vortex beams and a new deep learning network named “LGDiffNet” is introduced. A proof-of-concept numerical simulation is conducted using this method, and the results are experimentally verified. In this technique, multiple modes of Gaussian and Laguerre-Gaussian beams illuminate the displayed digits of the dataset, and the beams are then propagated through the diffuser before being captured on the beam profiler. Furthermore, we investigated whether imaging through diffuse media using multiple modes of vortex beams instead of Gaussian beams improves the imaging system's capability and enhances the network's reconstruction ability. Our results show that illuminating the diffuser with vortex beams and employing the “LGDiffNet” network provides enhanced image reconstruction compared to existing modalities. When employing vortex beams for image reconstruction, the best NPCC is −0.9850, whereas with Gaussian beams the best NPCC is −0.9837. An enhancement of 0.62 dB in PSNR is achieved with this method when a highly scattering diffuser of grit 220 and width 2 mm (7.11 times the mean free path) is used. No additional optimizations or reference beams were used in the imaging system, revealing the robustness of the “LGDiffNet” network and the adaptability of the imaging system for practical applications in medical imaging.
17. Zheng S, Liao M, Wang F, He W, Peng X, Situ G. Non-line-of-sight imaging under white-light illumination: a two-step deep learning approach. Optics Express 2021; 29:40091-40105. [PMID: 34809358] [DOI: 10.1364/oe.443127]
Abstract
Non-line-of-sight (NLOS) imaging has received considerable attention for its ability to recover occluded objects from an indirect view. Various NLOS imaging techniques have been demonstrated recently. Here, we propose a white-light NLOS imaging method that requires only an ordinary camera and does not need the active coherent illumination used in other existing NLOS systems. The central idea is to incorporate a speckle correlation-based model into a deep neural network (DNN), forming a two-step DNN strategy that learns the optimization of the scattered-pattern autocorrelation and the object image reconstruction, respectively. Optical experiments are carried out to demonstrate the proposed method.
Collapse
|
18
|
Yoneda N, Kakei S, Komuro K, Onishi A, Saita Y, Nomura T. Single-shot higher-order transport-of-intensity quantitative phase imaging using deep learning. APPLIED OPTICS 2021; 60:8802-8808. [PMID: 34613106 DOI: 10.1364/ao.435538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 09/07/2021] [Indexed: 06/13/2023]
Abstract
Single-shot higher-order transport-of-intensity quantitative phase imaging (SHOT-QPI) is proposed to realize simple, in-line, scanless, and single-shot QPI. However, the light-use efficiency of SHOT-QPI is low because of the use of an amplitude-type computer-generated hologram (CGH). Although a phase-type CGH overcomes this problem, the accuracy of the measured phase is degraded owing to distortion of the defocused intensity distributions, which is caused by the quantization error of the CGH. An alternative SHOT-QPI aided by deep learning, termed Deep-SHOT, is proposed to solve the nonlinear problem between the distorted intensities and the phase. In Deep-SHOT, a neural network learns the relationship between a series of distorted intensity distributions and the ground-truth phase distribution. Because the distortion of the intensity distributions is intrinsic to an optical system, the neural network is optimized for that system, and the proposed method improves the accuracy of the measured phase. The results of a proof-of-principle experiment indicate that the use of multiple defocused intensities also improves accuracy, even for this nonlinear problem.
Collapse
|
19
|
Cheng Q, Guo E, Gu J, Bai L, Han J, Zheng D. De-noising imaging through diffusers with autocorrelation. APPLIED OPTICS 2021; 60:7686-7695. [PMID: 34613238 DOI: 10.1364/ao.425099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Accepted: 08/02/2021] [Indexed: 06/13/2023]
Abstract
Recovering targets through diffusers is an important topic as well as a general problem in optical imaging. The difficulty of recovery is increased by the noise interference caused by an imperfect imaging environment. Existing approaches generally require a high-signal-to-noise-ratio (SNR) speckle pattern to recover the target, but still have limitations in de-noising or generalizability. Here, using high-SNR autocorrelation information as a physical constraint, we propose a data-driven two-stage (de-noising and reconstructing) method to improve robustness. Specifically, a two-stage convolutional neural network (CNN), called the autocorrelation reconstruction (ACR) CNN, is designed to de-noise and reconstruct targets from low-SNR speckle patterns. We experimentally demonstrate its robustness through various diffusers with different levels of noise, from simulated Gaussian noise to the detector and photon noise captured by the actual optical system. The de-noising stage improves the peak SNR from 20 to 38 dB in the system data, and the reconstructing stage, compared with the unconstrained method, successfully recovers targets hidden in unknown diffusers despite detector and photon noise. With the physical constraint guiding the learning process, our two-stage method improves generalizability and has potential in various fields such as imaging in low illumination.
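The autocorrelation used above as a physical constraint is conventionally computed from the speckle pattern via the Wiener-Khinchin theorem; within the memory-effect range it approximates the autocorrelation of the hidden object. A minimal NumPy sketch of that computation (names are illustrative, not the paper's code):

```python
import numpy as np

def speckle_autocorrelation(pattern):
    """Autocorrelation of a speckle pattern via the Wiener-Khinchin
    theorem: the autocorrelation is the inverse Fourier transform of
    the power spectrum of the (background-subtracted) pattern.
    """
    p = pattern.astype(float)
    p -= p.mean()                      # remove the DC background
    spectrum = np.abs(np.fft.fft2(p)) ** 2
    ac = np.fft.ifft2(spectrum).real   # Wiener-Khinchin
    return np.fft.fftshift(ac)         # center the zero-lag peak

speckle = np.random.rand(64, 64)
ac = speckle_autocorrelation(speckle)
# The zero-lag peak sits at the center after fftshift:
peak = tuple(int(i) for i in np.unravel_index(np.argmax(ac), ac.shape))
print(peak)  # -> (32, 32)
```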
Collapse
|
20
|
Ehira K, Horisaki R, Nishizaki Y, Naruse M, Tanida J. Spectral speckle-correlation imaging. APPLIED OPTICS 2021; 60:2388-2392. [PMID: 33690339 DOI: 10.1364/ao.418361] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 02/16/2021] [Indexed: 06/12/2023]
Abstract
We present a method for single-shot spectrally resolved imaging through scattering media by using the spectral memory effect of speckles. In our method, a single speckle pattern from a multi-colored object is captured through scattering media with a monochrome image sensor. The color object is recovered by correlation of the captured speckle and a three-dimensional phase retrieval process. The proposed method was experimentally demonstrated by using point sources with different emission spectra located between diffusers. This study paves the way for non-invasive and low-cost spectral imaging through scattering media.
Collapse
|
21
|
A Fringe Phase Extraction Method Based on Neural Network. SENSORS 2021; 21:s21051664. [PMID: 33670957 PMCID: PMC7957713 DOI: 10.3390/s21051664] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 02/18/2021] [Accepted: 02/23/2021] [Indexed: 12/04/2022]
Abstract
In optical metrology, the output is usually in the form of a fringe pattern, from which a phase map can be generated and phase information can be converted into the desired parameters. This paper proposes an end-to-end method of fringe phase extraction based on a neural network. This method uses the U-net neural network to directly learn the correspondence between the gray level of a fringe pattern and the wrapped phase map, which is simpler than existing deep learning methods. Results on simulated and experimental fringe patterns verify the accuracy and robustness of this method. While it yields the same accuracy, the proposed method features easier operation and a simpler principle than the traditional phase-shifting method, and is faster than the wavelet transform method.
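The wrapped phase map that such a network is trained to output folds the true phase into (−π, π]; a short NumPy illustration of the wrapping operation itself (illustrative only, not the paper's network):

```python
import numpy as np

# Wrapping folds any phase value into (-pi, pi], which is the form a
# fringe-analysis network predicts before a separate unwrapping step.
def wrap(phi):
    return np.angle(np.exp(1j * phi))

phi_true = np.linspace(0, 4 * np.pi, 5)   # ramp spanning two periods
print(wrap(phi_true))                      # values folded into (-pi, pi]
```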
Collapse
|
22
|
Lee YTC, Fang YC, Tien CH. Deep neural network for coded mask cryptographical imaging. APPLIED OPTICS 2021; 60:1686-1693. [PMID: 33690506 DOI: 10.1364/ao.415120] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 01/21/2021] [Indexed: 06/12/2023]
Abstract
We propose a novel cryptographic imaging scheme that combines optical encryption with computational decryption. To prevent personal privacy from being spied upon during image formation, we applied a coded mask to optically encrypt the scene and utilized a deep neural network for computational decryption. For encryption, the sensor recorded a new representation of the original signal, deliberately made indistinguishable to humans. For decryption, we successfully reconstructed the image with a mean squared error of 0.028 and achieved 100% classification accuracy on the Japanese Female Facial Expression dataset. By means of feature visualization, we found that the coded mask served as a linear operator to synthesize the spatial fidelity of the original scene while keeping the features for the post-recognition process. We believe the proposed framework can inspire more possibilities for unconventional imaging systems.
Collapse
|
23
|
Kang I, Pang S, Zhang Q, Fang N, Barbastathis G. Recurrent neural network reveals transparent objects through scattering media. OPTICS EXPRESS 2021; 29:5316-5326. [PMID: 33726070 DOI: 10.1364/oe.412890] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 01/29/2021] [Indexed: 06/12/2023]
Abstract
Scattering generally worsens the conditioning of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018)] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018)] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during training and to reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics and use multiple speckle measurements from different angles to form a sequence of images for training. A recurrent neural network (RNN) embedded with these dynamics filters out useful information and discards the redundancies, thus retrieving quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media from the speckle correlation among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve spatiotemporal dynamics.
Collapse
|
24
|
Horisaki R, Nishizaki Y, Kitaguchi K, Saito M, Tanida J. Three-dimensional deeply generated holography [Invited]. APPLIED OPTICS 2021; 60:A323-A328. [PMID: 33690416 DOI: 10.1364/ao.404151] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 10/27/2020] [Indexed: 05/28/2023]
Abstract
In this paper, we present a noniterative method for 3D computer-generated holography based on deep learning. A convolutional neural network is adapted for directly generating a hologram to reproduce a 3D intensity pattern in a given class. We experimentally demonstrated the proposed method with optical reproductions of multiple layers based on phase-only Fourier holography. Our method is noniterative, but it achieves a reproduction quality comparable with that of iterative methods for a given class.
Collapse
|
25
|
Marima D, Hadad B, Froim S, Eyal A, Bahabad A. Visual data detection through side-scattering in a multimode optical fiber. OPTICS LETTERS 2020; 45:6724-6727. [PMID: 33325881 DOI: 10.1364/ol.408552] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 11/14/2020] [Indexed: 06/12/2023]
Abstract
Light propagation in optical fibers is accompanied by random omnidirectional scattering. The small fraction of coherent guided light that escapes outside the cladding of the fiber forms a speckle pattern. Here, visual information imaged into the input facet of a multimode fiber with a transparent buffer is retrieved, using a convolutional neural network, from the side-scattered light at several locations along the fiber. This demonstration can promote the development of distributed optical imaging systems and optical links interfaced via the sides of the fiber.
Collapse
|
26
|
Wetzstein G, Ozcan A, Gigan S, Fan S, Englund D, Soljačić M, Denz C, Miller DAB, Psaltis D. Inference in artificial intelligence with deep optics and photonics. Nature 2020; 588:39-47. [PMID: 33268862 DOI: 10.1038/s41586-020-2973-6] [Citation(s) in RCA: 138] [Impact Index Per Article: 34.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Accepted: 08/20/2020] [Indexed: 12/30/2022]
Abstract
Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.
Collapse
Affiliation(s)
| | - Aydogan Ozcan
- University of California, Los Angeles, Los Angeles, CA, USA
| | - Sylvain Gigan
- Laboratoire Kastler Brossel, Sorbonne Université, École Normale Supérieure, Collège de France, CNRS UMR 8552, Paris, France
| | | | - Dirk Englund
- Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Marin Soljačić
- Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | | | - Demetri Psaltis
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
27
|
A physical unclonable neutron sensor for nuclear arms control inspections. Sci Rep 2020; 10:20605. [PMID: 33244133 PMCID: PMC7692483 DOI: 10.1038/s41598-020-77459-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 11/10/2020] [Indexed: 11/24/2022] Open
Abstract
Classical sensor security relies on cryptographic algorithms executed on trusted hardware. This approach has significant shortcomings, however. Hardware can be manipulated, including below transistor level, and cryptographic keys are at risk of extraction attacks. A further weakness is that sensor media themselves are assumed to be trusted, and any authentication and encryption is done ex situ and a posteriori. Here we propose and demonstrate a different approach to sensor security that does not rely on classical cryptography and trusted electronics. We designed passive sensor media that inherently produce secure and trustworthy data, and whose honest and non-malicious nature can be easily established. As a proof-of-concept, we manufactured and characterized the properties of non-electronic, physical unclonable, optically complex media sensitive to neutrons for use in a high-security scenario: the inspection of a military facility to confirm the absence or presence of nuclear weapons and fissile materials.
Collapse
|
28
|
Chang C, Bang K, Wetzstein G, Lee B, Gao L. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. OPTICA 2020; 7:1563-1578. [PMID: 34141829 PMCID: PMC8208705 DOI: 10.1364/optica.406004] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Accepted: 09/23/2020] [Indexed: 05/19/2023]
Abstract
Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human-computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.
Collapse
Affiliation(s)
- Chenliang Chang
- Department of Bioengineering, University of California, Los Angeles, 410 Westwood Plaza, Los Angeles, California 90095, USA
| | - Kiseung Bang
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Republic of Korea
| | - Gordon Wetzstein
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, California 94305, USA
| | - Byoungho Lee
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Republic of Korea
| | - Liang Gao
- Department of Bioengineering, University of California, Los Angeles, 410 Westwood Plaza, Los Angeles, California 90095, USA
- Corresponding author:
| |
Collapse
|
29
|
Yamazaki K, Horisaki R, Tanida J. Imaging through scattering media based on semi-supervised learning. APPLIED OPTICS 2020; 59:9850-9854. [PMID: 33175824 DOI: 10.1364/ao.402428] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 10/06/2020] [Indexed: 06/11/2023]
Abstract
We present a method for less-invasive imaging through scattering media. We use an image-to-image translation, which is called a cycle generative adversarial network (CycleGAN), based on semi-supervised learning with an unlabeled dataset. Our method was experimentally demonstrated by reconstructing object images displayed on a spatial light modulator between diffusers. In the demonstration, CycleGAN was trained with captured images and object candidate images that were not used for image capturing through the diffusers and were not paired with the captured images.
Collapse
|
30
|
Bian T, Dai Y, Hu J, Zheng Z, Gao L. Ghost imaging based on asymmetric learning. APPLIED OPTICS 2020; 59:9548-9552. [PMID: 33104675 DOI: 10.1364/ao.405120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 10/03/2020] [Indexed: 06/11/2023]
Abstract
Ghost imaging (GI) is an unconventional optical imaging method making use of the correlation measurement between a test beam and a reference beam. GI using deep learning (GIDL) has attracted increasing attention, as it can reconstruct high-quality images more effectively than traditional GI methods. It has been demonstrated that GIDL can be trained entirely with simulation data, which makes it even more practical. However, most GIDLs proposed so far appear to have limited performance for randomly distributed noise patterns. This is because traditional GIDLs are sensitive to under-estimation errors but robust to over-estimation errors. An asymmetric learning framework is proposed here to tackle this unbalanced sensitivity to estimation errors. The experimental results show that it achieves much better reconstructed images than a GIDL with a symmetric loss function, and the structural similarity index of GI is quadrupled for randomly selected objects.
Collapse
|
31
|
Chen H, He Z, Zhang Z, Geng Y, Yu W. Binary amplitude-only image reconstruction through a MMF based on an AE-SNN combined deep learning model. OPTICS EXPRESS 2020; 28:30048-30062. [PMID: 33114890 DOI: 10.1364/oe.403316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 09/15/2020] [Indexed: 06/11/2023]
Abstract
Imaging through multimode fibers (MMFs) is obstructed by inherent mode dispersion and mode coupling, which scatter the output of the MMF and bring about image distortions. As a result, only noise-like speckle patterns can be formed at the distal end of the MMF. We propose a deep learning model for computational imaging through an MMF, which contains an autoencoder (AE) for feature extraction and image reconstruction, with self-normalizing neural networks (SNNs) sandwiched in between for high-order feature representation. We demonstrate, both in simulations and in experiments, that the proposed AE-SNN combined deep learning model can reconstruct image information from various binary amplitude-only targets passing through a 5-meter-long MMF. Simulations indicate that our model works effectively even in the presence of system noise, and the experimental results prove that the method is valid for image reconstruction through the MMF. Enabled by its spatial variability and self-normalizing properties, our model can be generalized to solve a variety of other computational imaging problems.
Collapse
|
32
|
Deng M, Li S, Zhang Z, Kang I, Fang NX, Barbastathis G. On the interplay between physical and content priors in deep learning for computational imaging. OPTICS EXPRESS 2020; 28:24152-24170. [PMID: 32752400 DOI: 10.1364/oe.395204] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 07/21/2020] [Indexed: 06/11/2023]
Abstract
Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: first, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learned the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect imposed by a training set to the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also find that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable for weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter.
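The Shannon entropy that the paper links to the strength of the imposed regularization can be estimated from an image's gray-level histogram. A minimal sketch of that estimate (the bin count and [0, 1] normalization are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram.

    Higher-entropy images (e.g. natural photos) impose a weaker
    regularization effect on the trained network than low-entropy
    images (e.g. sparse digits), per the paper's argument.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

flat = np.full((64, 64), 0.5)                    # constant image
noisy = np.random.default_rng(0).random((64, 64))  # near-uniform histogram
print(image_entropy(flat), image_entropy(noisy))   # low vs. high entropy
```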
Collapse
|
33
|
Bian T, Yi Y, Hu J, Zhang Y, Wang Y, Gao L. A residual-based deep learning approach for ghost imaging. Sci Rep 2020; 10:12149. [PMID: 32699297 PMCID: PMC7376173 DOI: 10.1038/s41598-020-69187-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 07/06/2020] [Indexed: 11/08/2022] Open
Abstract
Ghost imaging using deep learning (GIDL) is a computational quantum imaging method devised to improve imaging efficiency. However, in most GIDL proposals so far, the same set of random patterns was used in both the training and test sets, reducing the generalization ability of the networks. Thus, the GIDL technique could only reconstruct the profile of the object image, losing most of the details. Here we optimize the simulation algorithm of ghost imaging (GI) by introducing the concept of a "batch" into the pre-processing stage. This significantly reduces the data acquisition time and creates reliable simulation data, appreciably enhancing the generalization ability of GIDL. Furthermore, we develop a residual-based framework for the GI system, namely the double residual U-Net (DRU-Net). The imaging quality of GI is tripled in terms of the structural similarity index by our proposed DRU-Net.
Collapse
Affiliation(s)
- Tong Bian
- School of Science, China University of Geosciences, Beijing, 100083, China
- School of Information Engineering, China University of Geosciences, Beijing, 100083, China
| | - Yuxuan Yi
- School of Information Engineering, China University of Geosciences, Beijing, 100083, China
| | - Jiale Hu
- School of Information Engineering, China University of Geosciences, Beijing, 100083, China
| | - Yin Zhang
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, 430072, China
| | - Yide Wang
- School of Information Engineering, China University of Geosciences, Beijing, 100083, China
| | - Lu Gao
- School of Science, China University of Geosciences, Beijing, 100083, China.
| |
Collapse
|
34
|
Kang I, Zhang F, Barbastathis G. Phase extraction neural network (PhENN) with coherent modulation imaging (CMI) for phase retrieval at low photon counts. OPTICS EXPRESS 2020; 28:21578-21600. [PMID: 32752433 DOI: 10.1364/oe.397430] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 06/19/2020] [Indexed: 06/11/2023]
Abstract
Imaging with low-dose light is important in various fields, especially when minimizing radiation-induced damage to samples is desirable. The raw image captured at the detector plane is then predominantly a Poisson random process with Gaussian noise added due to the quantum nature of photo-electric conversion. Under such noisy conditions, highly ill-posed problems such as phase retrieval from raw intensity measurements become prone to strong artifacts in the reconstructions, a situation that deep neural networks (DNNs) have already been shown to improve. Here, we demonstrate that random phase modulation of the optical field, also known as coherent modulation imaging (CMI), in conjunction with the phase extraction neural network (PhENN) and a Gerchberg-Saxton-Fienup (GSF) approximant, further improves the resilience to noise of the phase-from-intensity imaging problem. We offer design guidelines for implementing the CMI hardware with the proposed computational reconstruction scheme and quantify the reconstruction improvement as a function of photon count.
Collapse
|
35
|
Zhu R, Yu H, Tan Z, Lu R, Han S, Huang Z, Wang J. Ghost imaging based on Y-net: a dynamic coding and decoding approach. OPTICS EXPRESS 2020; 28:17556-17569. [PMID: 32679962 DOI: 10.1364/oe.395000] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 05/12/2020] [Indexed: 06/11/2023]
Abstract
Ghost imaging incorporating deep learning technology has recently attracted much attention in the optical imaging field. However, deterministic illumination and multiple exposures are still essential in most scenarios. Here we propose a ghost imaging scheme based on a novel dynamic decoding deep learning framework (Y-net), which works well under both deterministic and indeterministic illumination. Benefiting from the end-to-end character of our network, the image of a sample can be obtained directly from the data collected by the detector. The sample is illuminated only once in the experiment, and the spatial distribution of the speckle encoding the sample can be completely different from that of the simulated speckle used in training, as long as the statistical characteristics of the speckle remain unchanged. This approach is particularly important for high-resolution x-ray ghost imaging applications due to its potential for improving image quality and reducing radiation damage.
Collapse
|
36
|
Baek Y, Lee K, Oh J, Park Y. Speckle-Correlation Scattering Matrix Approaches for Imaging and Sensing through Turbidity. SENSORS 2020; 20:s20113147. [PMID: 32498322 PMCID: PMC7309038 DOI: 10.3390/s20113147] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 05/22/2020] [Accepted: 05/26/2020] [Indexed: 11/16/2022]
Abstract
The development of optical and computational techniques has enabled imaging without the need for traditional optical imaging systems. Modern lensless imaging techniques overcome several restrictions imposed by lenses, while preserving or even surpassing the capability of lens-based imaging. However, existing lensless methods often rely on a priori information about objects or imaging conditions. Thus, they are not ideal for general imaging purposes. The recent development of the speckle-correlation scattering matrix (SSM) techniques facilitates new opportunities for lensless imaging and sensing. In this review, we present the fundamentals of SSM methods and highlight recent implementations for holographic imaging, microscopy, optical mode demultiplexing, and quantification of the degree of the coherence of light. We conclude with a discussion of the potential of SSM and future research directions.
Collapse
Affiliation(s)
- YoonSeok Baek
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.B.); (K.L.); (J.O.)
| | - KyeoReh Lee
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.B.); (K.L.); (J.O.)
| | - Jeonghun Oh
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.B.); (K.L.); (J.O.)
| | - YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.B.); (K.L.); (J.O.)
- Tomocube Inc., Daejeon 34109, Korea
- Correspondence: ; Tel.: +82-42-350-2514
| |
Collapse
|
37
|
Horisaki R, Okamoto Y, Tanida J. Deeply coded aperture for lensless imaging. OPTICS LETTERS 2020; 45:3131-3134. [PMID: 32479477 DOI: 10.1364/ol.390810] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Accepted: 05/03/2020] [Indexed: 06/11/2023]
Abstract
In this Letter, we present a method for jointly designing a coded aperture and a convolutional neural network for reconstructing an object from a single-shot lensless measurement. The coded aperture and the reconstruction network are connected with a deep learning framework in which the coded aperture is placed as a first convolutional layer. Our co-optimization method was experimentally demonstrated with a fully convolutional network, and its performance was compared to a coded aperture with a modified uniformly redundant array.
Collapse
|
38
|
Sun L, Shi J, Wu X, Sun Y, Zeng G. Photon-limited imaging through scattering medium based on deep learning. OPTICS EXPRESS 2019; 27:33120-33134. [PMID: 31878386 DOI: 10.1364/oe.27.033120] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Accepted: 10/22/2019] [Indexed: 06/10/2023]
Abstract
Imaging under ultra-weak light conditions is heavily affected by Poisson noise. The problem becomes worse if a scattering medium is present in the optical path. Speckle patterns detected under ultra-weak light carry very little information, which makes it difficult to reconstruct the image; off-the-shelf methods are no longer applicable in this regime. In this paper, we experimentally demonstrate the use of a deep learning network to reconstruct images through scattering media under ultra-weak illumination, and analyze the weak-light limit of the method. Random Poisson detection under weak light captures partial information about the object; based on this property, we demonstrate better performance by enlarging the training dataset with multiple detections of the speckle patterns. Our results show that our approach can reconstruct images through scattering media from close to 1 detected signal photon per pixel (PPP) per image.
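Photon-limited acquisition of this kind is commonly modeled by Poisson sampling of the normalized speckle intensity at a chosen mean photon count per pixel (PPP). A minimal simulation sketch under that standard assumption (function and parameter names are illustrative, not the paper's code):

```python
import numpy as np

def photon_limited(speckle, ppp, seed=None):
    """Simulate photon-limited detection of a speckle intensity.

    The intensity is scaled so its mean equals `ppp` detected photons
    per pixel, then each pixel is drawn from a Poisson distribution,
    mimicking shot-noise-dominated acquisition.
    """
    rng = np.random.default_rng(seed)
    rate = speckle * (ppp / speckle.mean())  # mean photons per pixel
    return rng.poisson(rate)

speckle = np.random.default_rng(1).random((256, 256))
counts = photon_limited(speckle, ppp=1.0, seed=0)
print(counts.mean())  # close to 1 photon per pixel on average
```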
| 39 |
Wang F, Wang H, Wang H, Li G, Situ G. Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging. OPTICS EXPRESS 2019; 27:25560-25572. [PMID: 31510427 DOI: 10.1364/oe.27.025560] [Citation(s) in RCA: 77] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Accepted: 08/13/2019] [Indexed: 05/20/2023]
Abstract
Artificial intelligence (AI) techniques such as deep learning (DL) for computational imaging usually require a large set of experimentally collected labeled data to train a neural network. Here we demonstrate that a practically usable neural network for computational imaging can be trained on simulation data, taking computational ghost imaging (CGI) as an example. We develop a one-step end-to-end neural network, trained with simulated data, that reconstructs two-dimensional images directly from experimentally acquired one-dimensional bucket signals, without needing the sequence of illumination patterns. This is particularly useful for image transmission through quasi-static scattering media, since little care needs to be taken in simulating the scattering process when generating the training data. We believe that the concept of training on simulation data can be used in various DL-based solvers for general computational imaging.
| 40 |
Horisaki R, Okamoto Y, Tanida J. Single-shot noninvasive three-dimensional imaging through scattering media. OPTICS LETTERS 2019; 44:4032-4035. [PMID: 31415540 DOI: 10.1364/ol.44.004032] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Accepted: 07/17/2019] [Indexed: 06/10/2023]
Abstract
We present a method for single-shot three-dimensional imaging through scattering media with a three-dimensional memory effect. In the proposed computational process, a captured speckle image is two-dimensionally correlated at different scales, and the object is three-dimensionally recovered with three-dimensional phase retrieval. Our method was experimentally demonstrated with a lensless setup and was compared with a multishot approach used in our previous work [Opt. Lett. 44, 2526 (2019)].
| 41 |
Moon G, Son T, Lee H, Kim D. Deep Learning Approach for Enhanced Detection of Surface Plasmon Scattering. Anal Chem 2019; 91:9538-9545. [PMID: 31287294 DOI: 10.1021/acs.analchem.9b00683] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
A deep learning approach has been taken to improve the detection characteristics of surface plasmon microscopy (SPM) of light scattering. Deep learning based on a convolutional neural network was used to estimate the effect of scattering parameters, mainly the number of scatterers. The improvement was assessed quantitatively by applying the approach to SPM images formed by coherent interference of scatterers. Deep learning was found to improve significantly on conventional detection: the accuracy was almost 6 times higher, and the approach remained useful for scattering by polydisperse mixtures. This suggests that deep learning can be used to find scattering objects effectively in noisy environments. Furthermore, deep learning can be extended directly to label-free molecular detection assays and can provide considerably improved detection in imaging and microscopy techniques.
Affiliation(s)
- Gwiyeong Moon, School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
- Taehwang Son, School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
- Hongki Lee, School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
- Donghyun Kim, School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
| 42 |
Kürüm U, Wiecha PR, French R, Muskens OL. Deep learning enabled real time speckle recognition and hyperspectral imaging using a multimode fiber array. OPTICS EXPRESS 2019; 27:20965-20979. [PMID: 31510183 DOI: 10.1364/oe.27.020965] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Accepted: 06/10/2019] [Indexed: 06/10/2023]
Abstract
We demonstrate the use of deep learning for fast spectral deconstruction of speckle patterns. The artificial neural network can be effectively trained using numerically constructed multispectral datasets taken from a measured spectral transmission matrix. Optimized neural networks trained on these datasets achieve reliable reconstruction of both discrete and continuous spectra from a monochromatic camera image. Deep learning is compared to analytical inversion methods as well as to a compressive sensing algorithm and shows favourable characteristics both in the oversampling and in the sparse undersampling (compressive) regimes. The deep learning approach offers significant advantages in robustness to drift or noise and in reconstruction speed. In a proof-of-principle demonstrator we achieve real time recovery of hyperspectral information using a multi-core, multi-mode fiber array as a random scattering medium.
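The analytical-inversion baseline that the deep learning approach is compared against can be sketched with a pseudo-inverse of the spectral transmission matrix; the matrix here is randomly generated as a stand-in for a measured one, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n_pix, n_wl = 200, 12            # camera pixels, wavelength channels (assumed)
T = rng.random((n_pix, n_wl))    # stand-in for a measured spectral transmission matrix
x_true = np.zeros(n_wl)
x_true[3], x_true[8] = 1.0, 0.5  # sparse test spectrum with two lines
s = T @ x_true                   # flattened monochrome speckle image

x_hat = np.linalg.pinv(T) @ s    # least-squares (pseudo-inverse) spectral recovery
```

In the noiseless, overdetermined case the pseudo-inverse recovers the spectrum exactly; the paper's point is that a trained network is more robust than this inversion under drift and noise, and faster at reconstruction time.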
| 43 |
Nakamura I, Kanemura A, Nakaso T, Yamamoto R, Fukuhara T. Non-standard trajectories found by machine learning for evaporative cooling of 87Rb atoms. OPTICS EXPRESS 2019; 27:20435-20443. [PMID: 31510137 DOI: 10.1364/oe.27.020435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 06/12/2019] [Indexed: 06/10/2023]
Abstract
We present a machine-learning experiment involving evaporative cooling of gaseous 87Rb atoms. The evaporation trajectory was optimized to maximize the number of atoms cooled down to a Bose-Einstein condensate using Bayesian optimization. After 300 trials within 3 hours, Bayesian optimization discovered trajectories that achieved atom numbers comparable with those of manual tuning by a human expert. Analysis of the machine-learned trajectories revealed minimum requirements for successful evaporative cooling. We found that the manually obtained curve and the machine-learned trajectories were quite similar in terms of evaporation efficiency, although the manual and machine-learned evaporation ramps were significantly different.
| 44 |
Okamoto Y, Horisaki R, Tanida J. Noninvasive three-dimensional imaging through scattering media by three-dimensional speckle correlation. OPTICS LETTERS 2019; 44:2526-2529. [PMID: 31090723 DOI: 10.1364/ol.44.002526] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2019] [Accepted: 04/18/2019] [Indexed: 06/09/2023]
Abstract
We present a method for noninvasive three-dimensional imaging through scattering media by using a three-dimensional memory effect in scattering phenomena. In the proposed method, an object in a scattering medium is reconstructed from a three-dimensional autocorrelation of speckle images captured by axially scanning an image sensor, based on a three-dimensional phase retrieval algorithm. We experimentally demonstrated our method with a lensless setup by using a three-dimensionally printed object between diffusers.
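The speckle-correlation step underlying such methods can be sketched in two dimensions via the Wiener-Khinchin theorem (the three-dimensional extension and the phase retrieval step are not reproduced here); the array size is illustrative:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def autocorr2d(img):
    """Circular 2-D autocorrelation via the Wiener-Khinchin theorem:
    the inverse FFT of the power spectrum. Within the memory effect,
    a speckle pattern's autocorrelation approximates the hidden
    object's autocorrelation."""
    centered = img - img.mean()   # remove the DC pedestal first
    return np.real(ifft2(np.abs(fft2(centered)) ** 2))

rng = np.random.default_rng(3)
speckle = rng.random((8, 8))      # stand-in for a captured speckle frame
ac = autocorr2d(speckle)
```

The FFT route computes the same circular autocorrelation as a direct double sum over all shifts, but in O(N^2 log N) rather than O(N^4).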
| 45 |
Wang K, Li Y, Kemao Q, Di J, Zhao J. One-step robust deep learning phase unwrapping. OPTICS EXPRESS 2019; 27:15100-15115. [PMID: 31163947 DOI: 10.1364/oe.27.015100] [Citation(s) in RCA: 79] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Phase unwrapping is an important but challenging problem in phase measurement. Despite a few decades of research effort, it remains not well solved, especially when heavy noise and aliasing (undersampling) are present. We propose a database-generation method for phase-type objects and a one-step deep learning phase unwrapping method. With a trained deep neural network, previously unseen phase fields of living mouse osteoblasts and a dynamic candle flame are successfully unwrapped, demonstrating that the complicated nonlinear phase unwrapping task can be fulfilled directly, in one step, by a single deep neural network. Excellent anti-noise and anti-aliasing performance, outperforming classical methods, is highlighted in this paper.
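For contrast with the learned approach, here is a minimal sketch of the classical one-dimensional unwrapping that breaks down under the heavy noise and aliasing discussed above; the smooth phase ramp is an illustrative assumption:

```python
import numpy as np

# Ground-truth smooth phase spanning several multiples of 2*pi
true_phase = np.linspace(0.0, 20.0, 100)

# A phase measurement yields only the wrapped principal value in (-pi, pi]
wrapped = np.angle(np.exp(1j * true_phase))

# Classical 1-D unwrapping: add 2*pi wherever successive samples jump by
# more than pi (the Itoh condition). Heavy noise or aliasing violates this
# assumption, which is exactly the failure mode a learned unwrapper targets.
unwrapped = np.unwrap(wrapped)
```

On this clean, well-sampled ramp the classical method recovers the true phase exactly; once neighboring samples differ by more than pi, it cannot, which motivates the one-step network.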
| 46 |
Nishizaki Y, Valdivia M, Horisaki R, Kitaguchi K, Saito M, Tanida J, Vera E. Deep learning wavefront sensing. OPTICS EXPRESS 2019; 27:240-251. [PMID: 30645371 DOI: 10.1364/oe.27.000240] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Accepted: 12/19/2018] [Indexed: 05/20/2023]
Abstract
We present a new class of wavefront sensors by extending their design space based on machine learning. This approach simplifies both the optical hardware and the image processing in wavefront sensing. We experimentally demonstrated a variety of image-based wavefront sensing architectures that can directly estimate the Zernike coefficients of aberrated wavefronts from a single intensity image by using a convolutional neural network. We also demonstrated that the proposed deep learning wavefront sensor can be trained to estimate wavefront aberrations produced by a point source and even by extended sources.
| 47 |
Turpin A, Vishniakou I, Seelig JD. Light scattering control in transmission and reflection with neural networks. OPTICS EXPRESS 2018; 26:30911-30929. [PMID: 30469982 DOI: 10.1364/oe.26.030911] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Scattering often limits the controlled delivery of light in applications such as biomedical imaging, optogenetics, optical trapping, and fiber-optic communication or imaging. Such scattering can be controlled by appropriately shaping the light wavefront entering the material. Here, we develop a machine-learning approach for light control. Using pairs of binary intensity patterns and intensity measurements we train neural networks (NNs) to provide the wavefront corrections necessary to shape the beam after the scatterer. Additionally, we demonstrate that NNs can be used to find a functional relationship between transmitted and reflected speckle patterns. Establishing the validity of this relationship, we focus and scan in transmission through opaque media using reflected light. Our approach shows the versatility of NNs for light shaping, for efficiently and flexibly correcting for scattering, and in particular the feasibility of transmission control based on reflected light.
| 48 |
Li S, Barbastathis G. Spectral pre-modulation of training examples enhances the spatial resolution of the phase extraction neural network (PhENN). OPTICS EXPRESS 2018; 26:29340-29352. [PMID: 30470099 DOI: 10.1364/oe.26.029340] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2018] [Accepted: 09/21/2018] [Indexed: 05/27/2023]
Abstract
The phase extraction neural network (PhENN) [Optica 4, 1117 (2017)] is a computational architecture, based on deep machine learning, for lens-less quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained through examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a test raw intensity pattern, PhENN is capable of reconstructing the original phase object robustly, in many cases even for objects outside the database where the training examples were drawn from. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN's spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN's ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose "flattening" the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that the spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
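A minimal sketch of spectral pre-modulation, assuming simple per-frequency amplitude normalization as the "flattening" of the power spectral density (the exact modulation used in the paper may differ); the image and `eps` are illustrative:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def flatten_psd(img, eps=1e-6):
    """Whiten the image's power spectral density by normalizing every
    Fourier component to (nearly) unit amplitude, so high spatial
    frequencies are no longer under-represented in the training set."""
    F = fft2(img)
    return np.real(ifft2(F / (np.abs(F) + eps)))

rng = np.random.default_rng(4)
img = rng.random((32, 32))   # stand-in training example
flat = flatten_psd(img)
```

After flattening, the amplitude spectrum is nearly constant across all spatial frequencies, which is the property the pre-modulation relies on to boost PhENN's response to fine features.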
| 49 |
Haskel M, Stern A. Modeling optical memory effects with phase screens. OPTICS EXPRESS 2018; 26:29231-29243. [PMID: 30470089 DOI: 10.1364/oe.26.029231] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/05/2018] [Accepted: 08/26/2018] [Indexed: 06/09/2023]
Abstract
During the last decade, optical memory effects have been explored extensively for various applications. In this Letter we propose phase screen models to facilitate the analysis and simulation of wave propagation through optical media that exhibit memory effects. We show that the classical optical memory effect, which implies tilt correlations between the input and scattered fields, can be readily modeled by a single random phase screen. For the recently discovered generalized optical memory effect, which implies the existence of shift correlations in addition to the tilt correlation, we propose an appropriate generalized random phase screen model.
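The single-screen model of the classical (tilt) memory effect can be sketched as follows: tilting the illumination of a thin random phase screen merely shifts the far-field speckle. The grid size and tilt angle are illustrative assumptions:

```python
import numpy as np
from numpy.fft import fft2

rng = np.random.default_rng(5)
N, k0 = 64, 3   # grid size, input tilt in DFT units

# Thin scatterer modeled as a single random phase screen
screen = np.exp(1j * 2 * np.pi * rng.random((N, N)))

# Tilted incident plane wave: a linear phase ramp along x
x = np.arange(N)
tilt = np.exp(2j * np.pi * k0 * x[None, :] / N)

I_flat = np.abs(fft2(screen)) ** 2          # far-field speckle, untilted input
I_tilt = np.abs(fft2(screen * tilt)) ** 2   # same screen, tilted input
```

The tilted pattern is, pixel for pixel, a circularly shifted copy of the untilted one, which is precisely the tilt correlation the single-screen model predicts; capturing the generalized (shift) memory effect requires the extended screen model proposed in the paper.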
| 50 |
Chen H, Gao Y, Liu X, Zhou Z. Imaging through scattering media using speckle pattern classification based support vector regression. OPTICS EXPRESS 2018; 26:26663-26678. [PMID: 30469748 DOI: 10.1364/oe.26.026663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Accepted: 09/03/2018] [Indexed: 06/09/2023]
Abstract
Imaging through scattering media is common in many biomedical imaging applications. The object image deteriorates into an unrecognizable speckle pattern when a scattering medium is present, and many methods have been investigated to reconstruct the object image when only the speckle pattern is available. In this paper, we demonstrate a method for single-shot imaging through scattering media based on classification and support vector regression of the measured speckle pattern. We establish the feasibility of speckle pattern classification and present the related formulas, and we show that imaging without speckle pattern classification has only a limited, case-specific capability, a deficiency that our classification-based support vector regression method makes up for. Experimental results show that, with our approach, speckle patterns can be classified even when object images are unavailable, and object images can be reconstructed with high fidelity. The proposed approach for imaging through scattering media is expected to be applicable to various sensing schemes.