1. Liu J, Luo H, Tu D. Underwater motion scene image restoration based on an improved U-Net network. Applied Optics 2024; 63:228-238. PMID: 38175025. DOI: 10.1364/ao.505198.
Abstract
Active underwater polarization imaging is a common underwater imaging method that exploits the polarization difference between light reflected from the scene and light scattered by the water to suppress the scattered component and thereby improve imaging quality. However, it typically requires the acquisition of multiple polarization images, which makes it unsuitable for restoring images of underwater motion scenes. To address this problem, a deep learning model, U-AD-Net, based on a single polarized image is proposed: the polarization information of a single polarized image is taken as the feature input, and Dense-Net and a spatial attention module are introduced into the classic U-Net architecture. These additions strengthen the model's ability to learn and generalize deep features and to extract the polarization information most useful for restoration, so that the scene image can be recovered more completely. IE, AG, UCIQE, and SSIM are selected as metrics to evaluate the quality of the restored images. Experimental results show that images restored with the proposed method contain richer detail and have a clear advantage over existing network models. Since only a single polarized image is needed for restoration, the method adapts naturally to underwater moving scenes.
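Two of the no-reference metrics named in this abstract, IE (information entropy) and AG (average gradient), have compact standard definitions; the NumPy sketch below implements those textbook formulas and is not code from the paper:

```python
import numpy as np

def information_entropy(img):
    """IE: Shannon entropy of the 8-bit gray-level histogram, in bits/pixel."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """AG: mean magnitude of local intensity differences; higher = sharper detail."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal finite differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical finite differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128.0)                        # uniform image: no detail
noisy = rng.integers(0, 256, (64, 64)).astype(float)   # maximal "detail"
print(information_entropy(flat) == 0.0, average_gradient(flat) == 0.0)  # True True
print(information_entropy(noisy) > 7.0)                                 # True
```

Both metrics are no-reference scores: a restored image with richer detail should push IE and AG up relative to the hazy input.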
2. Lin B, Fan X, Guo Z. Self-attention module in a multi-scale improved U-net (SAM-MIU-net) motivating high-performance polarization scattering imaging. Optics Express 2023; 31:3046-3058. PMID: 36785304. DOI: 10.1364/oe.479636.
Abstract
Polarization imaging has outstanding advantages for imaging through scattering media, but it still faces great challenges in heavily scattering systems even with the help of deep learning. In this paper, we propose a self-attention module (SAM) in a multi-scale improved U-net (SAM-MIU-net) for polarization scattering imaging, which can effectively extract a new combination of multidimensional information from targets. The proposed SAM-MIU-net focuses on the stable features carried by the polarization characteristics of the target, enhancing the expression of the available features and making it easier to extract the polarization features that help recover target detail. The SAM's effectiveness has been verified in a series of experiments. Based on the proposed SAM-MIU-net, we investigated generalization across target structures and materials, and across the imaging distance between the targets and the ground glass. Experimental results demonstrate that the proposed SAM-MIU-net achieves high-precision reconstruction of target information under incoherent illumination.
3. Fan W, Sun J, Qiu Y, Wu Y, Chen S. 2D shape reconstruction of irregular particles with deep learning based on interferometric particle imaging. Applied Optics 2022; 61:9595-9602. PMID: 36606899. DOI: 10.1364/ao.462450.
Abstract
Interferometric particle imaging (IPI) is widely used to measure various kinds of particles. Obtaining particle shape information directly from IPI is challenging because of the complex relationship between the speckle distribution of the defocused interference patterns and the shape of the corresponding irregular particles. To address this challenge, we implement a deep learning method based on a convolutional neural network (CNN) to reconstruct defocused images of sand particles with sparse features, introducing the negative Pearson correlation coefficient as the loss function. To verify the feasibility of the method, we applied it to defocused images obtained from IPI experiments. Finally, comparison with another common CNN-based structure confirms that our network performs well in reconstructing the shapes of irregular particles.
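The negative Pearson correlation coefficient loss mentioned above has a standard closed form; a minimal NumPy sketch of that formula (not the authors' training code):

```python
import numpy as np

def npcc(pred, target):
    """Negative Pearson correlation coefficient in [-1, 1];
    -1 means the reconstruction is perfectly (linearly) correlated with the truth,
    which is why it works as a minimization objective for image reconstruction."""
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    return float(-(p @ t) / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

x = np.random.default_rng(1).random((32, 32))
print(npcc(x, x))           # ≈ -1.0: perfect reconstruction
print(npcc(x, 2 * x + 3))   # ≈ -1.0: invariant to affine intensity rescaling
```

The affine invariance is the practical draw: unlike MSE, NPCC does not penalize a reconstruction for a global brightness or contrast offset, which suits sparse defocused speckle images.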
4. He F, Tian X, Liu R, Ma J. MoG-DS: model-guided deep convolutional network for joint denoising and super-resolution of a single-photon counting image. Optics Express 2022; 30:33068-33082. PMID: 36242355. DOI: 10.1364/oe.462935.
Abstract
Single-photon counting (SPC) imaging has attracted considerable research attention in recent years due to its capability to detect targets under extremely low-light conditions. However, the spatial quality of SPC images is always unsatisfactory because they typically suffer from considerable effects of noise and their spatial resolution is low. Most traditional methods are dedicated to solving the noise problem while ignoring the improvement of spatial resolution. To address these challenging issues, we propose a novel model-guided deep convolutional network for joint denoising and super-resolution (SR) of SPC images. First, we introduce a model-based iterative optimization algorithm with deep regularizer to unify denoising and SR into one problem. Second, we construct a model-guided deep convolutional network by unfolding the aforementioned model-based iterative algorithm to achieve an optimal solution. All modules in the proposed network are interpretable due to the special model-guided design, and they enable good generalization in real situations. In addition, the deep regularizer and other parameters in the proposed network are jointly optimized in an end-to-end manner, which efficiently reduces the difficulty of parameter design. Extensive simulation and real experimental results are reported to demonstrate the superiority of the proposed method in terms of visual comparison and quantitative analysis, respectively.
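The unfolding idea described above can be illustrated with a toy proximal-gradient loop: each network stage mirrors one iteration of a model-based algorithm. In this sketch the learned deep regularizer is replaced by plain soft-thresholding and the degradation operator is a 2x average-pool downsampling; both are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def down2(x):   # forward model H: 2x2 average-pool downsampling
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up2(y):     # adjoint-like H^T: nearest-neighbour upsampling
    return np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)

def soft(x, tau):
    """Stand-in denoiser (sparsity prior); a trained CNN would replace this."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_sr(y, stages=8, step=1.0, tau=0.01):
    x = up2(y)                                 # initialize with naive upsampling
    for _ in range(stages):                    # each stage = one unfolded iteration
        x = x - step * up2(down2(x) - y)       # data-fidelity gradient step
        x = soft(x, tau)                       # prior / regularizer step
    return x

rng = np.random.default_rng(2)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0      # sparse bright target
y = down2(truth) + 0.05 * rng.standard_normal((8, 8))    # noisy low-res photon map
x_hat = unrolled_sr(y)
print("reconstruction error:", np.abs(x_hat - truth).mean())
```

In the paper's design the two steps per stage are learned end-to-end, which is what makes every module interpretable while avoiding hand-tuned parameters like `step` and `tau` here.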
5. Laurenzis M, Christnacher F. Time domain analysis of photon scattering and Huygens-Fresnel back projection. Optics Express 2022; 30:30441-30454. PMID: 36242148. DOI: 10.1364/oe.468668.
Abstract
Stand-off detection and characterization of scattering media such as fog and aerosols is an important task in environmental monitoring and related applications. We present, for the first time, a stand-off characterization of sprayed water fog in the time domain. Using time-correlated single photon counting, we measure transient signatures of photons reflected off a target within the fog volume and can distinguish ballistic from scattered photons. By applying a forward propagation model, we reconstruct the scattered photon paths and determine the fog's mean scattering length μscat in the range of 1.55 m to 1.86 m. In a second analysis, we back-project the recorded transients to reconstruct the scene using virtual Huygens-Fresnel wavefronts. While in medium-density fog some ballistic contribution remains in the signatures, we demonstrate that in high-density fog all recorded photons are scattered at least once. This work may pave the way to novel characterization tools for, and enhanced imaging in, scattering media.
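The Beer-Lambert relation underlying a mean-scattering-length estimate is simple to sketch; the photon counts and path length below are made-up illustrative numbers, not data from the paper:

```python
import numpy as np

def mean_scattering_length(d_m, n_ballistic, n_total):
    """Beer-Lambert: the ballistic fraction T = Nb/N0 = exp(-d / l_scat),
    so the mean scattering length is l_scat = -d / ln(T)."""
    T = n_ballistic / n_total
    return -d_m / np.log(T)

# Hypothetical example: over a 6 m path through fog, 4% of detected photons
# remain ballistic (unscattered).
l_scat = mean_scattering_length(6.0, 4_000, 100_000)
print(f"{l_scat:.2f} m")
```

Separating ballistic from scattered photons in the time domain is what makes the ballistic fraction, and hence this estimate, measurable at stand-off distances.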
6. Tsukada T, Watanabe W. Investigation of image plane for image reconstruction of objects through diffusers via deep learning. Journal of Biomedical Optics 2022; 27:056001. PMID: 35509071. PMCID: PMC9067610. DOI: 10.1117/1.jbo.27.5.056001.
Abstract
SIGNIFICANCE: The imaging of objects hidden in light-scattering media is a vital practical task in a wide range of applications, including biological imaging. Deep-learning-based methods have been used to reconstruct images behind scattering media under complex scattering conditions, but improvements in the quality of the reconstructed images are required.
AIM: To investigate the effect of the image plane on the accuracy of reconstructed images.
APPROACH: Light reflected from an object and passing through glass diffusers is captured while changing the image plane of the optical imaging system. Images are reconstructed by deep learning and evaluated in terms of the structural similarity index measure, the classification accuracy of digital images, and the training and testing error curves.
RESULTS: Reconstruction accuracy was higher when the diffuser was imaged than when the object was imaged. The training and testing error curves show that the loss converged to lower values in fewer epochs when the diffuser was imaged.
CONCLUSIONS: The proposed approach improves the accuracy of reconstructing objects hidden behind glass diffusers by imaging the diffuser surface, and can be applied to objects at unknown locations in a scattering medium.
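The structural similarity index measure used for evaluation has a compact single-window form (the full metric averages this over sliding local windows); a NumPy sketch using the standard Wang et al. constants:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM over the whole image; C1, C2 are the usual
    stabilizing constants for dynamic range L."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)
print(abs(ssim_global(img, img) - 1.0) < 1e-9)   # True: identical images score 1
print(ssim_global(img, img + 20) < 1.0)          # True: luminance shift lowers SSIM
```

Production code would use a windowed implementation (e.g. scikit-image's `structural_similarity`) rather than this global variant.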
Affiliation(s)
- Takumi Tsukada, Ritsumeikan University, College of Science and Engineering, Department of Electrical and Electronic Engineering, Kusatsu, Shiga, Japan
- Wataru Watanabe, Ritsumeikan University, College of Science and Engineering, Department of Electrical and Electronic Engineering, Kusatsu, Shiga, Japan
7. Imaging through diffuse media using multi-mode vortex beams and deep learning. Scientific Reports 2022; 12:1561. PMID: 35091633. PMCID: PMC8799672. DOI: 10.1038/s41598-022-05358-w.
Abstract
Optical imaging through diffuse media is a challenging problem with applications in many fields, such as biomedical imaging, non-destructive testing, and computer-assisted surgery. Light interacting with diffuse media undergoes multiple scattering in the angular and spatial domains, severely degrading image reconstruction. In this article, a novel method for imaging through diffuse media using multiple modes of vortex beams and a new deep learning network named "LGDiffNet" is derived. A proof-of-concept numerical simulation is conducted, and the results are verified experimentally. In this technique, multiple modes of Gaussian and Laguerre-Gaussian beams illuminate displayed digits from a digits dataset, and the beams propagate through the diffuser before being captured on a beam profiler. We further investigated whether illuminating the diffuser with multiple modes of vortex beams rather than Gaussian beams improves the imaging capability of the system and enhances the network's reconstruction ability. Our results show that illuminating the diffuser with vortex beams and employing the "LGDiffNet" network yields better image reconstruction than existing modalities: the best NPCC is -0.9850 with vortex beams, versus -0.9837 with Gaussian beams. An enhancement of 0.62 dB in PSNR is achieved when a highly scattering diffuser of grit 220 and width 2 mm (7.11 times the mean free path) is used. No additional optimizations or reference beams were used in the imaging system, underscoring the robustness of the "LGDiffNet" network and the adaptability of the imaging system for practical applications in medical imaging.
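The PSNR figure quoted above (a 0.62 dB gain) follows the standard definition; a NumPy sketch with made-up reconstructions standing in for the vortex-beam and Gaussian-beam results:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
truth = rng.random((28, 28))
rec_a = truth + 0.010 * rng.standard_normal(truth.shape)  # lower-noise reconstruction
rec_b = truth + 0.012 * rng.standard_normal(truth.shape)  # higher-noise reconstruction
gain_db = psnr(truth, rec_a) - psnr(truth, rec_b)
print(f"PSNR gain: {gain_db:.2f} dB")
```

Because PSNR is logarithmic, even a fraction-of-a-dB gain like the reported 0.62 dB corresponds to a measurable reduction in mean-squared reconstruction error.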
8. Li Z, Tang F, Shang S, Wu J, Shao J, Liao W, Kong B, Zeng T, Ye X, Jiang X, Yang L. Compact metalens-based integrated imaging devices for near-infrared microscopy. Optics Express 2021; 29:27041-27047. PMID: 34615126. DOI: 10.1364/oe.431901.
Abstract
With the current trend toward progressively miniaturized optical systems, alternative methods to control light at extremely small dimensions are now essential. Metalenses are composed of subwavelength nanostructures and have an excellent ability to manipulate the polarization, phase, and amplitude of incident light. Although great progress has been made on metalenses themselves, compact metalens-integrated devices have not been researched adequately. In this study, we present compact metalens-based imaging devices for near-infrared microscopy. Resolution, magnification, and image quality are assessed by imaging several specimens of intestinal cells to verify the overall performance of the imaging system. Even more compact devices, in which the metalens is integrated directly on the CMOS imaging sensor, are also investigated for biomedical detection. This study provides an approach to constructing compact metalens-based imaging devices for near-infrared microscopy, micro-telescopy, and similar uses, which can promote the continued miniaturization of optical systems.
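A metalens focuses by imposing a hyperbolic phase profile across its aperture; a sketch of that standard profile follows (the design wavelength and focal length are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Hyperbolic phase profile that focuses a normally incident plane wave at f:
#   phi(r) = -(2*pi/lam) * (sqrt(r^2 + f^2) - f)
# Each subwavelength nanopillar imparts this phase modulo 2*pi.
lam = 940e-9                        # assumed near-infrared design wavelength
f = 500e-6                          # assumed focal length
r = np.linspace(0, 250e-6, 5)       # sample radial positions across the lens
phi = -(2 * np.pi / lam) * (np.sqrt(r ** 2 + f ** 2) - f)
phi_wrapped = np.mod(phi, 2 * np.pi)   # the phase actually encoded per pillar
print(phi_wrapped)
```

The designer's task is then to pick a nanopillar geometry at each position whose transmission phase matches `phi_wrapped(r)` at the design wavelength.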
9. Huang X, Nan S, Tan W, Bai Y, Fu X. Ghost imaging influenced by a supersonic wind-induced random environment. Optics Letters 2021; 46:1009-1012. PMID: 33649641. DOI: 10.1364/ol.417763.
Abstract
Near-field airflow induced by wind is an important factor degrading imaging quality when the imaging system is mounted on a high-speed moving platform, as in airborne imaging. In this Letter, ghost imaging through an airflow environment is investigated experimentally and numerically. The experiment is performed in a wind tunnel, and imaging quality decreases with increasing wind velocity. A simulation model of ghost imaging through this kind of environment is proposed, and the simulation results match the experiments well. With this model, the imaging results are extended into the supersonic wind regime, the effects of airflow factors are discussed in detail, and a comparison between airflow and atmospheric turbulence is presented. The results have potential applications in optical imaging and may provide a powerful tool for estimating the effect of airflow on imaging system performance.
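Classical ghost imaging reconstructs the object from the second-order correlation between the speckle illumination patterns and a single-pixel "bucket" signal; a minimal NumPy simulation of that standard estimator (an idealized textbook setup, not the authors' wind-tunnel experiment):

```python
import numpy as np

rng = np.random.default_rng(5)
obj = np.zeros((16, 16)); obj[5:11, 7:9] = 1.0       # unknown transmissive object

n_patterns = 20000
patterns = rng.random((n_patterns, 16, 16))          # random speckle illumination
bucket = (patterns * obj).sum(axis=(1, 2))           # single-pixel bucket signal

# Second-order correlation: G(x) = <I(x) B> - <I(x)> <B>
G = (patterns * bucket[:, None, None]).mean(axis=0) \
    - patterns.mean(axis=0) * bucket.mean()

print(np.unravel_index(G.argmax(), G.shape))         # brightest pixel is on the object
```

Airflow-induced refractive-index fluctuations perturb `patterns` between the reference arm and the object, which is exactly the degradation channel the Letter models.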
10. Kang I, Pang S, Zhang Q, Fang N, Barbastathis G. Recurrent neural network reveals transparent objects through scattering media. Optics Express 2021; 29:5316-5326. PMID: 33726070. DOI: 10.1364/oe.412890.
Abstract
Scattering generally worsens the conditioning of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018), doi:10.1364/OPTICA.5.000803] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018), doi:10.1364/OPTICA.5.001181] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during training and thereby reveal phase objects through scattering media. We rotate a diffuser on-axis to impart dynamics and use multiple speckle measurements from different angles to form a sequence of images for training. A recurrent neural network (RNN) embedded with these dynamics filters out the useful information and discards the redundancies, recovering quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering medium and learns more about the static pattern. The dynamical approach reveals transparent images behind scattering media from the speckle correlations among adjacent measurements in a sequence. The method is also applicable to other imaging applications that involve spatiotemporal dynamics.
11. Zhu R, Yu H, Tan Z, Lu R, Han S, Huang Z, Wang J. Ghost imaging based on Y-net: a dynamic coding and decoding approach. Optics Express 2020; 28:17556-17569. PMID: 32679962. DOI: 10.1364/oe.395000.
Abstract
Ghost imaging incorporating deep learning has recently attracted much attention in the optical imaging field. However, deterministic illumination and multiple exposures are still essential in most scenarios. Here we propose a ghost imaging scheme based on a novel dynamic decoding deep learning framework (Y-net), which works well under both deterministic and indeterministic illumination. Benefiting from the end-to-end character of our network, the image of a sample can be obtained directly from the data collected by the detector. The sample is illuminated only once in the experiment, and the spatial distribution of the speckle encoding the sample can be completely different from that of the simulated speckle used in training, as long as the statistical characteristics of the speckle remain unchanged. This approach is particularly important for high-resolution x-ray ghost imaging, owing to its potential for improving image quality and reducing radiation damage.