201
Shajkofci A, Liebling M. Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy. IEEE Transactions on Image Processing 2020; 29:5848-5861. [PMID: 32305918] [DOI: 10.1109/tip.2020.2986880]
Abstract
Optical microscopy is an essential tool in biology and medicine. Imaging thin, yet non-flat objects in a single shot (without relying on more sophisticated sectioning setups) remains challenging, as the shallow depth of field that comes with high-resolution microscopes leads to unsharp image regions and makes depth localization and quantitative image interpretation difficult. Here, we present a method that improves the resolution of light microscopy images of such objects by locally estimating image distortion while jointly estimating object distance to the focal plane. Specifically, we estimate the parameters of a spatially-variant Point Spread Function (PSF) model using a Convolutional Neural Network (CNN), which does not require instrument- or object-specific calibration. Our method recovers PSF parameters from the image itself with a squared Pearson correlation coefficient of up to 0.99 in ideal conditions, while remaining robust to object rotation, illumination variations, and photon noise. When the recovered PSFs are used with a spatially-variant and regularized Richardson-Lucy (RL) deconvolution algorithm, we observed up to 2.1 dB better Signal-to-Noise Ratio (SNR) compared to other Blind Deconvolution (BD) techniques. Following microscope-specific calibration, we further demonstrate that the recovered PSF model parameters permit estimating surface depth with a precision of 2 micrometers, and over an extended range when using engineered PSFs. Our method opens up multiple possibilities for enhancing images of non-flat objects with minimal need for a priori knowledge about the optical setup.
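The spatially-variant, regularized RL deconvolution used here builds on the classical Richardson-Lucy update. A minimal sketch of that core update, in its simpler spatially-invariant form (periodic boundaries and a PSF array stored with its peak at index (0, 0) are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fftconv(a, b):
    # Circular 2-D convolution via the FFT; the PSF is stored with its
    # peak at index (0, 0), i.e. already "fftshifted".
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def richardson_lucy(blurred, psf, num_iter=50):
    # Multiplicative RL update:
    #   estimate <- estimate * conv(blurred / conv(estimate, psf), psf_adjoint)
    estimate = np.full_like(blurred, blurred.mean())
    # Adjoint kernel for circular convolution: h_adj[n] = h[-n mod N].
    psf_adj = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))
    for _ in range(num_iter):
        ratio = blurred / (fftconv(estimate, psf) + 1e-12)
        estimate = estimate * fftconv(ratio, psf_adj)
    return estimate
```

The paper's contribution is to supply this kind of iteration with locally estimated (CNN-recovered) PSFs and a regularizer, rather than a single global PSF as above.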
202
Fast Terahertz Coded-Aperture Imaging Based on Convolutional Neural Network. Applied Sciences 2020. [DOI: 10.3390/app10082661]
Abstract
Terahertz coded-aperture imaging (TCAI) has many advantages, such as forward-looking imaging, staring imaging, and low cost. However, it is difficult to resolve the target under a low signal-to-noise ratio (SNR), and the imaging process is time-consuming. Here, we provide an efficient solution to tackle this problem. A convolutional neural network (CNN) is leveraged to develop an offline end-to-end imaging network whose structure is highly parallel and free of iterations, and which acts as a general and powerful mapping function. Once the network is well trained and adopted for TCAI signal processing, the target of interest can be recovered immediately from the echo signal. We also show how to generate training data and find that an imaging network trained with simulation data is robust against noise and model errors. The feasibility of the proposed approach is verified by simulation experiments, and the results show that it is competitive with state-of-the-art algorithms.
203
DuBose TB, Gardner DF, Watnik AT. Intensity-enhanced deep network wavefront reconstruction in Shack-Hartmann sensors. Optics Letters 2020; 45:1699-1702. [PMID: 32235977] [DOI: 10.1364/ol.389895]
Abstract
The Shack-Hartmann wavefront sensor (SH-WFS) is known to produce incorrect measurements of the wavefront gradient in the presence of non-uniform illumination. Moreover, the most common least-squares phase reconstructors cannot accurately reconstruct the wavefront in the presence of branch points. We therefore developed the intensity/slopes network (ISNet), a deep convolutional-neural-network-based reconstructor that uses both the wavefront gradient information and the intensity of the SH-WFS's subapertures to provide better wavefront reconstruction. We trained the network on simulated data with multiple levels of turbulence and compared the performance of our reconstructor to several other reconstruction techniques. ISNet produced the lowest wavefront error of the reconstructors we evaluated and operated at a speed suitable for real-time applications, enabling the use of the SH-WFS in stronger turbulence than was previously possible.
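The least-squares reconstructors that ISNet is compared against recover the phase from the measured slopes alone, ignoring subaperture intensity. A minimal zonal sketch of that classical baseline (Hudgin-style forward differences on a small grid; the geometry and grid size are illustrative assumptions):

```python
import numpy as np

def reconstruct_zonal(sx, sy):
    # Least-squares phase from x/y slope maps: each slope is modeled as a
    # finite difference of the unknown phase, and the overdetermined linear
    # system is solved with lstsq. Piston (the mean) is unobservable.
    n = sy.shape[1]
    rows, rhs = [], []
    def unit(i, j):
        e = np.zeros(n * n)
        e[i * n + j] = 1.0
        return e
    for i in range(n):
        for j in range(n - 1):          # x-slope: phi[i, j+1] - phi[i, j]
            rows.append(unit(i, j + 1) - unit(i, j))
            rhs.append(sx[i, j])
    for i in range(n - 1):
        for j in range(n):              # y-slope: phi[i+1, j] - phi[i, j]
            rows.append(unit(i + 1, j) - unit(i, j))
            rhs.append(sy[i, j])
    phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    phi = phi.reshape(n, n)
    return phi - phi.mean()             # remove the unobservable piston
```

Because such a reconstructor sees only gradients, non-uniform illumination and branch points corrupt it; that is precisely the gap the intensity-aware network targets.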
204
Wang F, Eljarrat A, Müller J, Henninen TR, Erni R, Koch CT. Multi-resolution convolutional neural networks for inverse problems. Sci Rep 2020; 10:5730. [PMID: 32235861] [PMCID: PMC7109091] [DOI: 10.1038/s41598-020-62484-z]
Abstract
Inverse problems in image processing, phase imaging, and computer vision often share the same structure of mapping input image(s) to output image(s) but are usually solved by different application-specific algorithms. Deep convolutional neural networks have shown great potential for highly variable tasks across many image-based domains but can sometimes be challenging to train due to their internal non-linearity. We propose a novel, fast-converging neural network architecture capable of solving generic image(s)-to-image(s) inverse problems relevant to a diverse set of domains. We show this approach is useful in recovering wavefronts from direct intensity measurements, imaging objects from diffusely reflected images, and denoising scanning transmission electron microscopy images, simply by using different training datasets. These successful applications demonstrate that the proposed network is an ideal candidate for solving general inverse problems falling into the category of image(s)-to-image(s) translation.
Affiliation(s)
- Feng Wang
- Electron Microscopy Center, Empa, Swiss Federal Laboratories for Materials Science and Technology, CH-8600, Dübendorf, Switzerland
- Institut für Physik, IRIS Adlershof der Humboldt-Universität zu Berlin, 12489, Berlin, Germany
- Alberto Eljarrat
- Institut für Physik, IRIS Adlershof der Humboldt-Universität zu Berlin, 12489, Berlin, Germany
- Johannes Müller
- Institut für Physik, IRIS Adlershof der Humboldt-Universität zu Berlin, 12489, Berlin, Germany
- Trond R Henninen
- Electron Microscopy Center, Empa, Swiss Federal Laboratories for Materials Science and Technology, CH-8600, Dübendorf, Switzerland
- Rolf Erni
- Electron Microscopy Center, Empa, Swiss Federal Laboratories for Materials Science and Technology, CH-8600, Dübendorf, Switzerland
- Christoph T Koch
- Institut für Physik, IRIS Adlershof der Humboldt-Universität zu Berlin, 12489, Berlin, Germany
205
Lv S, Sun Q, Zhang Y, Wang J, Jiang Y. Monotonicity analysis of absolute phase unwrapping by geometric constraint in a structured light system. Optics Express 2020; 28:9885-9897. [PMID: 32225589] [DOI: 10.1364/oe.386646]
Abstract
The monotonicity of depth in geometric-constraint-based absolute phase unwrapping is analyzed, and a monotonic discriminant Δ(u_c, v_c) is presented in this paper. The sign of the discriminant determines the distance chosen for the virtual plane used to create the artificial absolute phase map in a given structured light system: when Δ(u_c, v_c) ≥ 0 at an arbitrary point in the CCD pixel coordinates, the minimum depth distance is selected for the virtual plane, and when Δ(u_c, v_c) ≤ 0, the maximum depth distance is selected. Two structured light systems with different signs of the monotonic discriminant are developed, and the validity of the theoretical analysis is experimentally demonstrated.
206
Probst J, Braig C, Langlotz E, Rahneberg I, Kühnel M, Zeschke T, Siewert F, Krist T, Erko A. Conception of diffractive wavefront correction for XUV and soft x-ray spectroscopy. Applied Optics 2020; 59:2580-2590. [PMID: 32225799] [DOI: 10.1364/ao.384782]
Abstract
We present a simple and precise method to minimize aberrations of mirror-based, wavelength-dispersive spectrometers for the extreme ultraviolet (XUV) and soft x-ray domain. The concept enables an enhanced resolving power E/ΔE, in particular close to the diffraction limit, over a spectral band of a few percent around the design energy of the instrument. Our optical element, the "diffractive wavefront corrector" (DWC), is individually shaped to the form and figure error of the mirror profile and might be written directly with a laser on plane and even strongly curved substrates. Theory and simulations of various configurations, like Hettrick-Underwood or compact, highly efficient all-in-one setups for TiO2 spectroscopy with E/ΔE ≲ 4.5 × 10^4, are addressed, as well as aspects of their experimental realization.
207
Deng M, Li S, Goy A, Kang I, Barbastathis G. Learning to synthesize: robust phase retrieval at low photon counts. Light: Science & Applications 2020; 9:36. [PMID: 32194950] [PMCID: PMC7062747] [DOI: 10.1038/s41377-020-0267-2]
Abstract
The quality of inverse problem solutions obtained through deep learning is limited by the nature of the priors learned from examples presented during the training phase. Particularly in the case of quantitative phase retrieval, spatial frequencies that are underrepresented in the training database, most often at the high band, tend to be suppressed in the reconstruction. Ad hoc solutions have been proposed, such as pre-amplifying the high spatial frequencies in the examples; however, while that strategy improves the resolution, it also leads to high-frequency artefacts, as well as low-frequency distortions in the reconstructions. Here, we present a new approach that learns separately how to handle the two frequency bands, low and high, and learns how to synthesize these two bands into full-band reconstructions. We show that this "learning to synthesize" (LS) method yields phase reconstructions of high spatial resolution and without artefacts and that it is resilient to high-noise conditions, e.g., in the case of very low photon flux. In addition to the problem of quantitative phase retrieval, the LS method is applicable, in principle, to any inverse problem where the forward operator treats different frequency bands unevenly, i.e., is ill-posed.
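The frequency-band decomposition behind "learning to synthesize" can be illustrated with a hard Fourier-domain mask. The cutoff value and mask shape below are illustrative assumptions for the sketch, not the paper's actual band definition:

```python
import numpy as np

def split_bands(img, cutoff=0.15):
    # Hard radial mask in the Fourier domain; frequencies are in cycles/pixel,
    # so Nyquist is 0.5 and `cutoff` is an illustrative choice. The low and
    # high bands partition the spectrum and sum back to the original image.
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = (np.hypot(fx, fy) <= cutoff).astype(float)
    spectrum = np.fft.fft2(img)
    low = np.real(np.fft.ifft2(spectrum * mask))
    high = np.real(np.fft.ifft2(spectrum * (1.0 - mask)))
    return low, high
```

In the LS scheme, separate networks learn each band and a synthesizer network learns to recombine them, so that the underrepresented high band is no longer suppressed.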
Affiliation(s)
- Mo Deng
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Shuai Li
- Sensebrain Technology Limited LLC, 2550 N 1st Street, Suite 300, San Jose, CA 95131, USA
- Alexandre Goy
- Omnisens SA, Riond Bosson 3, 1110 Morges, VD, Switzerland
- Iksung Kang
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 117543, Singapore
208
Shen C, Nguyen D, Zhou Z, Jiang SB, Dong B, Jia X. An introduction to deep learning in medical physics: advantages, potential, and challenges. Phys Med Biol 2020; 65:05TR01. [PMID: 31972556] [PMCID: PMC7101509] [DOI: 10.1088/1361-6560/ab6f51]
Abstract
As one of the most popular approaches in artificial intelligence, deep learning (DL) has attracted much attention in the medical physics field over the past few years. The goals of this topical review article are twofold. First, we provide an overview of the method for medical physics researchers interested in DL, to help them start the endeavor. Second, we give an in-depth discussion of the DL technology to make researchers aware of its potential challenges and possible solutions. We therefore divide the article into two major parts. The first part introduces general concepts and principles of DL and summarizes major research resources, such as computational tools and databases. The second part discusses the challenges faced by DL, presents available methods to mitigate some of these challenges, and offers our recommendations.
Affiliation(s)
- Chenyang Shen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Innovative Technology Of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
209
Zhai Y, Fu S, Zhang J, Liu X, Zhou H, Gao C. Turbulence aberration correction for vector vortex beams using deep neural networks on experimental data. Optics Express 2020; 28:7515-7527. [PMID: 32225977] [DOI: 10.1364/oe.388526]
Abstract
Vector vortex beams (VVB), which possess non-separable states of light in which polarization and orbital angular momentum (OAM) are coupled, have attracted increasing attention in science and technology due to the unique nature of the light field. However, atmospheric transmission distortion is a recurring challenge hampering practical applications such as communication and imaging. In this work, we built a deep-learning-based adaptive optics system to compensate the turbulence aberrations of vector vortex modes in terms of phase distribution and mode purity. A turbulence aberration correction convolutional neural network (TACCNN) model is designed to learn the mapping between the intensity profile of the distorted vector vortex modes and the turbulence phase generated by the first 20 Zernike modes. After supervised training on plentiful experimental samples, the TACCNN model compensates turbulence aberration for VVB quickly and accurately. For the first time, experimental results show that, through correction, the mode purity of the distorted VVB improves from 19% to 70% under a turbulence strength of D/r0 = 5.28 with a correction time of 100 ms. Furthermore, both the spatial modes and the light intensity distribution can be well compensated under different atmospheric turbulence.
210
Zhao H, Ke Z, Chen N, Wang S, Li K, Wang L, Gong X, Zheng W, Song L, Liu Z, Liang D, Liu C. A new deep learning method for image deblurring in optical microscopic systems. Journal of Biophotonics 2020; 13:e201960147. [PMID: 31845537] [DOI: 10.1002/jbio.201960147]
Abstract
Deconvolution is the most commonly used image processing method for removing the blur caused by the point-spread function (PSF) in optical imaging systems. While this method has been successful in deblurring, it suffers from several disadvantages, such as slow processing due to the multiple iterations required, and suboptimal results when the experimental operator chosen to represent the PSF is inaccurate. In this paper, we present a deep-learning-based deblurring method that is fast and applicable to optical microscopic imaging systems. We tested the robustness of the proposed method on publicly available data, simulated data, and experimental data (including 2D optical microscopic data and 3D photoacoustic microscopic data), all of which showed much improved deblurring results compared to deconvolution. We compared our results against several existing deconvolution methods; ours are better than those of conventional techniques and require neither multiple iterations nor a pre-determined experimental operator. Our method offers simple operation, short computation time, good deblurring results, and wide applicability across optical microscopic imaging systems. The deep learning approach opens up a new path for deblurring and can be applied in various biomedical imaging fields.
Affiliation(s)
- Huangxuan Zhao
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Ziwen Ke
- Research Center for Medical AI, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Ningbo Chen
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Songjian Wang
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Ke Li
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Lidai Wang
- Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Xiaojing Gong
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Zheng
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liang Song
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhicheng Liu
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Dong Liang
- Research Center for Medical AI, CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chengbo Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
211
Park DY, Park JH. Hologram conversion for speckle free reconstruction using light field extraction and deep learning. Optics Express 2020; 28:5393-5409. [PMID: 32121761] [DOI: 10.1364/oe.384888]
Abstract
A novel hologram conversion technique for speckle-free reconstruction is proposed. Many speckle-reduction methods require holograms specially created for those techniques, limiting their applicability to pre-existing holograms. The proposed technique transforms an existing hologram with a random phase distribution into new holograms suitable for speckle-free reconstruction methods. It first extracts a set of orthographic views from the existing hologram; the extracted orthographic views are then processed for speckle noise removal using a convolutional neural network. The processed orthographic views are finally used to synthesize new holograms with desired carrier waves by means of a non-hogel-based computer-generated hologram technique, where the carrier wave is chosen according to the desired speckle-free reconstruction method. In this paper, we demonstrate the proposed technique with two speckle-free reconstruction methods: temporal speckle averaging over different random phase distributions, and time-multiplexing of interleaved angular spectrums.
212
Zeng T, So HKH, Lam EY. RedCap: residual encoder-decoder capsule network for holographic image reconstruction. Optics Express 2020; 28:4876-4887. [PMID: 32121718] [DOI: 10.1364/oe.383350]
Abstract
A capsule network, as an advanced technique in deep learning, is designed to overcome the information loss in the pooling operation and internal data representation of a convolutional neural network (CNN). It has shown promising results in several applications, such as digit recognition and image segmentation. In this work, we investigate for the first time the use of a capsule network in digital holographic reconstruction. The proposed residual encoder-decoder capsule network, which we call RedCap, uses a novel windowed spatial dynamic routing algorithm and a residual capsule block, which extends the idea of a residual block. Compared with a CNN-based neural network, RedCap exhibits much better experimental results in digital holographic reconstruction while achieving a dramatic 75% reduction in the number of parameters. This indicates that RedCap processes data more efficiently and requires much less memory for the learned model, making it applicable to challenging situations with limited computational resources, such as portable devices.
213
Shao S, Mallery K, Kumar SS, Hong J. Machine learning holography for 3D particle field imaging. Optics Express 2020; 28:2987-2999. [PMID: 32121975] [DOI: 10.1364/oe.379480]
Abstract
We propose a new learning-based approach for 3D particle field imaging using holography. Our approach uses a U-net architecture incorporating residual connections, Swish activation, hologram preprocessing, and transfer learning to cope with challenges arising in particle holograms where accurate measurement of individual particles is crucial. Assessments on both synthetic and experimental holograms demonstrate a significant improvement in particle extraction rate, localization accuracy and speed compared to prior methods over a wide range of particle concentrations, including highly dense concentrations where other methods are unsuitable. Our approach can be potentially extended to other types of computational imaging tasks with similar features.
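The Swish activation mentioned above is simply x·sigmoid(βx), a smooth, non-monotonic alternative to ReLU; a minimal sketch (β = 1, the common default, is assumed here):

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish activation: x * sigmoid(beta * x). Near-linear for large positive
    # inputs, near-zero for large negative inputs, and smooth at the origin.
    return x * (1.0 / (1.0 + np.exp(-beta * x)))
```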
214
Jiao S, Gao Y, Feng J, Lei T, Yuan X. Does deep learning always outperform simple linear regression in optical imaging? Optics Express 2020; 28:3717-3731. [PMID: 32122034] [DOI: 10.1364/oe.382319]
Abstract
Deep learning has been extensively applied to many optical imaging problems in recent years. Despite the success, the limitations and drawbacks of deep learning in optical imaging have seldom been investigated. In this work, we show that conventional linear-regression-based methods can, to some extent, outperform previously proposed deep learning approaches for two black-box optical imaging problems. Deep learning demonstrates its weakness especially when the number of training samples is small. The advantages and disadvantages of linear-regression-based methods and deep learning are analyzed and compared. Since many optical systems are essentially linear, a deep learning network containing many nonlinear functions may not always be the most suitable option.
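A linear-regression baseline of the kind discussed here amounts to fitting one matrix between flattened measurement/object pairs by least squares. A minimal sketch (regularization omitted and dimensions illustrative; this is the generic baseline, not the paper's exact pipeline):

```python
import numpy as np

def fit_linear_inverse(measurements, objects):
    # Fit W in the least-squares sense so that measurements @ W ≈ objects.
    # Each row of `measurements` is one flattened captured image; each row
    # of `objects` is the corresponding flattened ground truth.
    W, *_ = np.linalg.lstsq(measurements, objects, rcond=None)
    return W
```

When the optical system is truly linear and training data are scarce, such a fit can match or beat a nonlinear network, which is the paper's central point.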
215
Matlock A, Sentenac A, Chaumet PC, Yi J, Tian L. Inverse scattering for reflection intensity phase microscopy. Biomedical Optics Express 2020; 11:911-926. [PMID: 32206398] [PMCID: PMC7041473] [DOI: 10.1364/boe.380845]
Abstract
Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometric techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object, which complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model's validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast of thin, weakly scattering samples that complements transmission techniques. This model is a promising development toward simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.
Affiliation(s)
- Alex Matlock
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Anne Sentenac
- Institut Fresnel, Aix Marseille Univ., CNRS, Centrale Marseille, Marseille, France
- Patrick C. Chaumet
- Institut Fresnel, Aix Marseille Univ., CNRS, Centrale Marseille, Marseille, France
- Ji Yi
- Department of Medicine, Boston University School of Medicine, Boston, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
216
Belashov AV, Zhikhoreva AA, Belyaeva TN, Kornilova ES, Salova AV, Semenova IV, Vasyutinskii OS. In vitro monitoring of photoinduced necrosis in HeLa cells using digital holographic microscopy and machine learning. Journal of the Optical Society of America A 2020; 37:346-352. [PMID: 32118916] [DOI: 10.1364/josaa.382135]
Abstract
Digital holographic microscopy, supplemented with the developed cell segmentation and machine-learning classification algorithms, is implemented for quantitative description of the dynamics of cellular necrosis induced by photodynamic treatment in vitro. It is demonstrated that the developed algorithms, operating on a set of optical, morphological, and physiological parameters of cells obtained from their phase images, can be used for automatic distinction between live and necrotic cells. The developed classifier provides a high accuracy of about 95.5% and allows for calculation of survival rates in the course of cell death.
217
Dardikman-Yoffe G, Roitshtain D, Mirsky SK, Turko NA, Habaza M, Shaked NT. PhUn-Net: ready-to-use neural network for unwrapping quantitative phase images of biological cells. Biomedical Optics Express 2020; 11:1107-1121. [PMID: 32206402] [PMCID: PMC7041455] [DOI: 10.1364/boe.379533]
Abstract
We present a deep-learning approach for solving the problem of 2π phase ambiguities in two-dimensional quantitative phase maps of biological cells, using a multi-layer encoder-decoder residual convolutional neural network. We test the trained network, PhUn-Net, on various types of biological cells, captured with various interferometric setups, as well as on simulated phantoms. These tests demonstrate the robustness and generality of the network, even for cells of different morphologies or illumination conditions than those PhUn-Net was trained on. In this paper, for the first time, we make the trained network publicly available in a global format, such that it can be easily deployed on any platform to yield fast and robust phase unwrapping, without requiring prior knowledge or complex implementation. We therefore expect our phase unwrapping approach to be widely used, replacing conventional and more time-consuming phase unwrapping algorithms.
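The 2π ambiguity that PhUn-Net resolves arises because a measured phase is only known modulo 2π. A one-dimensional illustration using NumPy's classical unwrapping (this is the conventional baseline, not the network itself):

```python
import numpy as np

def wrap(phi):
    # Reduce a phase signal to its measured, 2*pi-ambiguous form in (-pi, pi].
    return np.angle(np.exp(1j * phi))

# Classical 1-D unwrapping: np.unwrap re-integrates jumps larger than pi.
phi_true = np.linspace(0.0, 6.0 * np.pi, 100)   # spans three full fringes
phi_recovered = np.unwrap(wrap(phi_true))
```

In 2-D, noise and steep gradients make this path-following strategy fragile and slow, which is what motivates replacing it with a learned unwrapper.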
Affiliation(s)
- Gili Dardikman-Yoffe
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
- Darina Roitshtain
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
- Simcha K. Mirsky
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
- Nir A. Turko
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
- Mor Habaza
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
- Natan T. Shaked
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel
218
Wiecha PR, Muskens OL. Deep Learning Meets Nanophotonics: A Generalized Accurate Predictor for Near Fields and Far Fields of Arbitrary 3D Nanostructures. Nano Letters 2020; 20:329-338. [PMID: 31825227] [DOI: 10.1021/acs.nanolett.9b03971]
Abstract
Deep artificial neural networks are powerful tools with many possible applications in nanophotonics. Here, we demonstrate how a deep neural network can be used as a fast, general purpose predictor of the full near-field and far-field response of plasmonic and dielectric nanostructures. A trained neural network is shown to infer the internal fields of arbitrary three-dimensional nanostructures many orders of magnitude faster compared to conventional numerical simulations. Secondary physical quantities are derived from the deep learning predictions and faithfully reproduce a wide variety of physical effects without requiring specific training. We discuss the strengths and limitations of the neural network approach using a number of model studies of single particles and their near-field interactions. Our approach paves the way for fast, yet universal, methods for design and analysis of nanophotonic systems.
Affiliation(s)
- Peter R Wiecha
- Physics and Astronomy, Faculty of Engineering and Physical Sciences, University of Southampton, SO17 1BJ Southampton, United Kingdom
- Otto L Muskens
- Physics and Astronomy, Faculty of Engineering and Physical Sciences, University of Southampton, SO17 1BJ Southampton, United Kingdom
219

220
Yin W, Chen Q, Feng S, Tao T, Huang L, Trusiak M, Asundi A, Zuo C. Temporal phase unwrapping using deep learning. Sci Rep 2019; 9:20175. [PMID: 31882669 PMCID: PMC6934795 DOI: 10.1038/s41598-019-56222-3]
Abstract
The multi-frequency temporal phase unwrapping (MF-TPU) method, a classical phase unwrapping algorithm for fringe projection techniques, can eliminate phase ambiguities even when measuring spatially isolated scenes or objects with discontinuous surfaces. In the simplest and most efficient case of MF-TPU, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency pattern is applied for 3D reconstruction of the tested object, and the unit-frequency pattern assists the unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that its absolute phase can be recovered without any fringe order errors. However, due to non-negligible noise and other error sources in actual measurements, the frequency of the high-frequency fringes is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap the phase at higher frequencies, but at the expense of a prolonged pattern sequence. Building on recent developments in machine learning for computer vision and computational imaging, this work demonstrates that deep learning techniques can automatically realize TPU through supervised learning, termed deep learning-based temporal phase unwrapping (DL-TPU), which substantially improves the unwrapping reliability compared with MF-TPU even under different types of error sources, e.g., intensity noise, low fringe modulation, projector nonlinearity, and motion artifacts. Furthermore, to the best of our knowledge, we demonstrated experimentally for the first time that a high-frequency phase with 64 periods can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU.
These results highlight that challenging issues in optical metrology can potentially be overcome through machine learning, opening new avenues to design powerful and extremely accurate high-speed 3D imaging systems that are ubiquitous in today's science, industry, and multimedia.
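The classical MF-TPU fringe-order computation described in this abstract (the step that DL-TPU learns to perform more robustly) can be sketched in a few lines. This is a minimal 1D simulation under assumed noise parameters, not the paper's implementation: the unit-frequency phase is unambiguous, so scaling it by the high frequency predicts the absolute high-frequency phase, and rounding recovers the integer fringe order:

```python
import numpy as np

rng = np.random.default_rng(0)
f_high = 64                                    # fringe frequency of the high-frequency pattern

# Ground-truth phases over one projector period.
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
phi_abs = 2.0 * np.pi * f_high * x             # absolute high-frequency phase
phi_unit = 2.0 * np.pi * x                     # unit-frequency phase (already unambiguous)
phi_wrapped = np.angle(np.exp(1j * phi_abs))   # wrapped high-frequency phase
phi_wrapped += rng.normal(0.0, 0.05, x.shape)  # mild measurement noise (assumed level)

# Classical MF-TPU fringe-order formula:
#   k = round((f_high * phi_unit - phi_wrapped) / (2*pi)),  Phi = phi_wrapped + 2*pi*k
k = np.round((f_high * phi_unit - phi_wrapped) / (2.0 * np.pi))
phi_recovered = phi_wrapped + 2.0 * np.pi * k

# Residual error is noise-limited; no fringe-order errors at this noise level.
assert np.max(np.abs(phi_recovered - phi_abs)) < 0.3
```

At higher noise levels the rounding step starts producing fringe-order errors, which is why MF-TPU is conventionally limited to roughly 16 fringes and why the paper's learned unwrapper is notable for reaching 64 periods.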
Grants
- National Natural Science Foundation of China (61722506, 61705105, 11574152), National Key R&D Program of China (2017YFF0106403), Final Assembly "13th Five-Year Plan" Advanced Research Project of China (30102070102), Equipment Advanced Research Fund of China (61404150202), The Key Research and Development Program of Jiangsu Province (BE2017162), Outstanding Youth Foundation of Jiangsu Province (BK20170034), National Defense Science and Technology Foundation of China (0106173), "333 Engineering" Research Project of Jiangsu Province (BRA2016407), Fundamental Research Funds for the Central Universities (30917011204), China Postdoctoral Science Foundation (2017M621747), Jiangsu Planned Projects for Postdoctoral Research Funds (1701038A).
Affiliation(s)
- Wei Yin
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Qian Chen
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Shijie Feng
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Tianyang Tao
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Lei Huang
- Brookhaven National Laboratory, NSLS II 50 Rutherford Drive, Upton, New York, 11973-5000, United States
- Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli Street, Warsaw, 02-525, Poland
- Anand Asundi
- Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Chao Zuo
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
221
Luo Y, Mengu D, Yardimci NT, Rivenson Y, Veli M, Jarrahi M, Ozcan A. Design of task-specific optical systems using broadband diffractive neural networks. Light: Science & Applications 2019; 8:112. [PMID: 31814969 PMCID: PMC6885516 DOI: 10.1038/s41377-019-0223-1]
Abstract
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. Diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize hand-written digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Affiliation(s)
- Yi Luo
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Nezih T. Yardimci
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Muhammed Veli
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
222
Matlock A, Tian L. High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography. Biomedical Optics Express 2019; 10:6432-6448. [PMID: 31853409 PMCID: PMC6913397 DOI: 10.1364/boe.10.006432]
Abstract
Intensity diffraction tomography (IDT) provides quantitative, volumetric refractive index reconstructions of unlabeled biological samples from intensity-only measurements. IDT is scanless and easily implemented in standard optical microscopes using an LED array, but suffers from large data requirements and slow acquisition speeds. Here, we develop multiplexed IDT (mIDT), a coded illumination framework providing high volume-rate IDT for evaluating dynamic biological samples. mIDT combines illuminations from an LED grid using physical model-based design choices to improve acquisition rates and reduce dataset size with minimal loss of resolution and reconstruction quality. We analyze the optimal design scheme within our mIDT framework in simulation using the reconstruction error relative to conventional IDT and the theoretical acquisition speed. With the optimally determined mIDT scheme, we achieve hardware-limited 4 Hz acquisition rates, enabling 3D refractive index distribution recovery on live Caenorhabditis elegans worms and embryos as well as epithelial buccal cells. Our mIDT architecture provides a 60× speed improvement over conventional IDT and is robust across different illumination hardware designs, making it an easily adoptable imaging tool for volumetrically quantifying biological samples in their natural state.
Affiliation(s)
- Alex Matlock
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
223
Wu X, Li X, Yao L, Wu Y, Lin X, Chen L, Cen K. Accurate detection of small particles in digital holography using fully convolutional networks. Applied Optics 2019; 58:G332-G344. [PMID: 31873518 DOI: 10.1364/ao.58.00g332]
Abstract
Particle detection is a key procedure in particle field characterization with digital holography. Due to various background noises, spurious small particles may be generated and real small particles may be lost during particle detection. Accurate small-particle detection therefore remains a challenge in energy and combustion research. A deep learning method based on modified fully convolutional networks is proposed to detect small opaque particles (e.g., coal particles) on extended-focus images. The model is tested in several experiments and shown to achieve good small-particle detection accuracy.
224
Lam HH, Tsang PWM, Poon TC. Ensemble convolutional neural network for classifying holograms of deformable objects. Optics Express 2019; 27:34050-34055. [PMID: 31878461 DOI: 10.1364/oe.27.034050]
Abstract
Recently, a method known as "deep-learning invariant hologram classification" (DL-IHC), which classifies holograms of deformable objects with a deep learning network (DLN), has been demonstrated. However, DL-IHC requires substantial computational resources to attain a near-perfect success rate (≥99%). In practice, it is always desirable to achieve a higher success rate with a low-complexity DLN. In this paper, we propose a low-complexity DLN known as "ensemble deep-learning invariant hologram classification" (EDL-IHC). Compared with DL-IHC, our proposed hologram classifier improves the success rate by 2.86% in the classification of holograms of handwritten numerals.
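The combining step in an ensemble classifier like the one this abstract describes is typically soft voting: averaging the class-probability outputs of several member networks. The sketch below is a generic, hypothetical illustration of that mechanism (the member predictions are made-up numbers, and the paper's actual member architectures and combination rule may differ):

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-class probabilities from several member classifiers
    and return the winning class index for each sample."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # (n_samples, n_classes)
    return np.argmax(avg, axis=1)

# Hypothetical outputs of three member networks on two hologram samples,
# each row a probability distribution over two classes.
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.4, 0.6], [0.3, 0.7]])
m3 = np.array([[0.7, 0.3], [0.4, 0.6]])

labels = soft_vote([m1, m2, m3])
# Averaged: sample 0 -> [0.567, 0.433] -> class 0; sample 1 -> [0.3, 0.7] -> class 1
assert labels.tolist() == [0, 1]
```

Averaging probabilities lets low-complexity members disagree on individual samples while the ensemble's consensus stays accurate, which is how such designs trade per-member complexity for overall success rate.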
225
Liu T, Wei Z, Rivenson Y, de Haan K, Zhang Y, Wu Y, Ozcan A. Deep learning-based color holographic microscopy. Journal of Biophotonics 2019; 12:e201900107. [PMID: 31309728 DOI: 10.1002/jbio.201900107]
Abstract
We report a framework based on a generative adversarial network that performs high-fidelity color image reconstruction using a single hologram of a sample that is illuminated simultaneously by light at three different wavelengths. The trained network learns to eliminate missing-phase-related artifacts, and generates an accurate color transformation for the reconstructed image. Our framework is experimentally demonstrated using lung and prostate tissue sections that are labeled with different histological stains. This framework is envisaged to be applicable to point-of-care histopathology and presents a significant improvement in the throughput of coherent microscopy systems given that only a single hologram of the specimen is required for accurate color imaging.
Affiliation(s)
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Zhensong Wei
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yibo Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yichen Wu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, California
226
Kandel ME, Hu C, Naseri Kouzehgarani G, Min E, Sullivan KM, Kong H, Li JM, Robson DN, Gillette MU, Best-Popescu C, Popescu G. Epi-illumination gradient light interference microscopy for imaging opaque structures. Nat Commun 2019; 10:4691. [PMID: 31619681 PMCID: PMC6795907 DOI: 10.1038/s41467-019-12634-3]
Abstract
Multiple scattering and absorption limit the depth at which biological tissues can be imaged with light. In thick unlabeled specimens, multiple scattering randomizes the phase of the field and absorption attenuates light that travels long optical paths. These obstacles limit the performance of transmission imaging. To mitigate these challenges, we developed an epi-illumination gradient light interference microscope (epi-GLIM) as a label-free phase imaging modality applicable to bulk or opaque samples. Epi-GLIM enables studying turbid structures that are hundreds of microns thick and otherwise opaque to transmitted light. We demonstrate this approach with a variety of man-made and biological samples that are incompatible with imaging in a transmission geometry: semiconductor wafers, specimens on opaque and birefringent substrates, cells in microplates, and bulk tissues. We demonstrate that the epi-GLIM data can be used to solve the inverse scattering problem and reconstruct the tomography of single cells and model organisms.
Affiliation(s)
- Mikhail E Kandel
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Chenfei Hu
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Ghazal Naseri Kouzehgarani
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Eunjung Min
- Rowland Institute at Harvard University, Cambridge, MA, USA
- Hyunjoon Kong
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign, IL, USA
- Jennifer M Li
- Rowland Institute at Harvard University, Cambridge, MA, USA
- Drew N Robson
- Rowland Institute at Harvard University, Cambridge, MA, USA
- Martha U Gillette
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Cell & Developmental Biology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Catherine Best-Popescu
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Gabriel Popescu
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
227
Zhang Y, Ouyang M, Ray A, Liu T, Kong J, Bai B, Kim D, Guziak A, Luo Y, Feizi A, Tsai K, Duan Z, Liu X, Kim D, Cheung C, Yalcin S, Ceylan Koydemir H, Garner OB, Di Carlo D, Ozcan A. Computational cytometer based on magnetically modulated coherent imaging and deep learning. Light: Science & Applications 2019; 8:91. [PMID: 31645935 PMCID: PMC6804677 DOI: 10.1038/s41377-019-0203-5]
Abstract
Detecting rare cells within blood has numerous applications in disease diagnostics. Existing rare cell detection techniques are typically hindered by their high cost and low throughput. Here, we present a computational cytometer based on magnetically modulated lensless speckle imaging, which introduces oscillatory motion to the magnetic-bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). In addition to using cell-specific antibodies to magnetically label target cells, detection specificity is further enhanced through a deep-learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. To demonstrate the performance of this technique, we built a high-throughput, compact and cost-effective prototype for detecting MCF7 cancer cells spiked in whole blood samples. Through serial dilution experiments, we quantified the limit of detection (LoD) as 10 cells per millilitre of whole blood, which could be further improved through multiplexing parallel imaging channels within the same instrument. This compact, cost-effective and high-throughput computational cytometer can potentially be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
Affiliation(s)
- Yibo Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Mengxing Ouyang
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Aniruddha Ray
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Physics and Astronomy, University of Toledo, Toledo, OH 43606 USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Janay Kong
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Donghyuk Kim
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Alexander Guziak
- Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Alborz Feizi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Yale School of Medicine, New Haven, CT 06510 USA
- Katherine Tsai
- Department of Biochemistry, University of California, Los Angeles, CA 90095 USA
- Zhuoran Duan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Xuewei Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Danny Kim
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Chloe Cheung
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Sener Yalcin
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Hatice Ceylan Koydemir
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Omai B. Garner
- Department of Pathology and Laboratory Medicine, University of California, Los Angeles, CA 90095 USA
- Dino Di Carlo
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095 USA
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
228
Goy A, Rughoobur G, Li S, Arthur K, Akinwande AI, Barbastathis G. High-resolution limited-angle phase tomography of dense layered objects using deep neural networks. Proc Natl Acad Sci U S A 2019; 116:19848-19856. [PMID: 31527279 PMCID: PMC6778227 DOI: 10.1073/pnas.1821378116]
Abstract
We present a machine learning-based method for tomographic reconstruction of dense layered objects, with the range of projection angles limited to [Formula: see text]. Whereas previous approaches to phase tomography generally require 2 steps, first retrieving phase projections from intensity projections and then performing tomographic reconstruction on the retrieved phase projections, in our work a physics-informed preprocessor followed by a deep neural network (DNN) conducts the 3-dimensional reconstruction directly from the intensity projections. We demonstrate this single-step method experimentally in the visible optical domain on a scaled-up integrated-circuit phantom. We show that even under conditions of highly attenuated photon fluxes, a DNN trained only on synthetic data can successfully reconstruct physical samples disjoint from the synthetic training set. Thus, the need to produce a large number of physical examples for training is ameliorated. The method is generally applicable to tomography with electromagnetic or other types of radiation at all bands.
Affiliation(s)
- Alexandre Goy
- 3D Optics Laboratory, Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139
- Girish Rughoobur
- Microsystems Technology Laboratories, Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139
- Shuai Li
- 3D Optics Laboratory, Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139
- Kwabena Arthur
- 3D Optics Laboratory, Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139
- Akintunde I Akinwande
- Microsystems Technology Laboratories, Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139
- George Barbastathis
- 3D Optics Laboratory, Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139
- BioSystems and bioMechanics (BioSyM) Interdisciplinary Research Group, Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore 117543, Singapore
| |
229
Rubin M, Stein O, Turko NA, Nygate Y, Roitshtain D, Karako L, Barnea I, Giryes R, Shaked NT. TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set. Med Image Anal 2019; 57:176-185. [DOI: 10.1016/j.media.2019.06.014] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2018] [Revised: 05/18/2019] [Accepted: 06/25/2019] [Indexed: 01/01/2023]
230
Wang K, Dou J, Kemao Q, Di J, Zhao J. Y-Net: a one-to-two deep learning framework for digital holographic reconstruction. OPTICS LETTERS 2019; 44:4765-4768. [PMID: 31568437 DOI: 10.1364/ol.44.004765] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Accepted: 08/29/2019] [Indexed: 06/10/2023]
Abstract
In this Letter, for the first time, to the best of our knowledge, we propose a digital holographic reconstruction method with a one-to-two deep learning framework (Y-Net). Perfectly fitting the holographic reconstruction process, the Y-Net can simultaneously reconstruct intensity and phase information from a single digital hologram. As a result, this compact network with reduced parameters brings higher performance than typical network variants. The experimental results of the mouse phagocytes demonstrate the advantages of the proposed Y-Net.
231
Shi J, Zhu X, Wang H, Song L, Guo Q. Label enhanced and patch based deep learning for phase retrieval from single frame fringe pattern in fringe projection 3D measurement. OPTICS EXPRESS 2019; 27:28929-28943. [PMID: 31684636 DOI: 10.1364/oe.27.028929] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Accepted: 09/17/2019] [Indexed: 06/10/2023]
Abstract
We propose a label enhanced and patch based deep learning phase retrieval approach that achieves fast and accurate phase retrieval using only several fringe patterns as the training dataset. To the best of our knowledge, this is the first time the advantages of the label enhancement and patch strategies for deep learning based phase retrieval have been demonstrated in fringe projection. In the proposed method, the enhanced labeled data in the training dataset are designed so that the deep neural network (DNN) learns the mapping between the input fringe pattern and the output enhanced fringe part. Moreover, the training data are cropped into small overlapped patches to expand the number of training samples for the DNN. The performance of the proposed approach is verified on experimental projection fringe patterns, with applications in dynamic fringe projection 3D measurement.
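The patch strategy described in this abstract can be illustrated with a minimal numpy sketch (illustrative only, not the authors' code; the patch size and stride below are assumed values): a single fringe pattern is cropped into many small overlapped patches, multiplying the number of training samples.

```python
import numpy as np

def extract_patches(image, patch_size=8, stride=4):
    """Crop an image into small overlapped patches, a common way to expand
    a training set when only a few full-size images are available."""
    h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# A single 32x32 synthetic "fringe pattern" yields 49 training samples.
fringe = np.sin(np.linspace(0, 8 * np.pi, 32))[None, :] * np.ones((32, 1))
patches = extract_patches(fringe, patch_size=8, stride=4)
print(patches.shape)  # (49, 8, 8)
```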
232
Zhou J, Huang B, Yan Z, Bünzli JCG. Emerging role of machine learning in light-matter interaction. LIGHT, SCIENCE & APPLICATIONS 2019; 8:84. [PMID: 31645928 PMCID: PMC6804848 DOI: 10.1038/s41377-019-0192-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2019] [Revised: 07/22/2019] [Accepted: 08/06/2019] [Indexed: 05/21/2023]
Abstract
Machine learning has driven a wave of innovation in multiple fields, including computer vision, medical diagnosis, the life sciences, molecular design, and instrument development. This perspective focuses on the implementation of machine learning for light-matter interaction, which governs fields involving materials discovery, optical characterization, and photonics technologies. We highlight the role of machine learning in accelerating technology development and boosting scientific innovation in these areas. We also outline future directions for advanced computing techniques that, via multidisciplinary efforts, can help transform optical materials into imaging probes, information carriers, and photonic devices.
Affiliation(s)
- Jiajia Zhou
- Faculty of Science, Institute for Biomedical Materials and Devices, University of Technology, Sydney, NSW 2007, Australia
- Bolong Huang
- Department of Applied Biology and Chemical Technology, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, China
- Zheng Yan
- Faculty of Engineering and IT, Centre for Artificial Intelligence, University of Technology, Sydney, NSW 2007, Australia
- Jean-Claude G. Bünzli
- Faculty of Science, Institute for Biomedical Materials and Devices, University of Technology, Sydney, NSW 2007, Australia
- Swiss Federal Institute of Technology, Lausanne (EPFL), ISIC, Lausanne, Switzerland
233
Wang F, Wang H, Wang H, Li G, Situ G. Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging. OPTICS EXPRESS 2019; 27:25560-25572. [PMID: 31510427 DOI: 10.1364/oe.27.025560] [Citation(s) in RCA: 77] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Accepted: 08/13/2019] [Indexed: 05/20/2023]
Abstract
Artificial intelligence (AI) techniques such as deep learning (DL) for computational imaging usually require to experimentally collect a large set of labeled data to train a neural network. Here we demonstrate that a practically usable neural network for computational imaging can be trained by using simulation data. We take computational ghost imaging (CGI) as an example to demonstrate this method. We develop a one-step end-to-end neural network, trained with simulation data, to reconstruct two-dimensional images directly from experimentally acquired one-dimensional bucket signals, without the need of the sequence of illumination patterns. This is in particular useful for image transmission through quasi-static scattering media as little care is needed to take to simulate the scattering process when generating the training data. We believe that the concept of training using simulation data can be used in various DL-based solvers for general computational imaging.
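The bucket-signal forward model behind CGI is simple enough to sketch (a hedged numpy illustration, not the paper's network: the pattern count, image size, and the classical correlation estimator below are stand-ins for the DL reconstruction it describes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16        # image is n x n pixels
m = 4000      # number of illumination patterns

# Ground-truth transmissive object (a simple square).
obj = np.zeros((n, n))
obj[5:11, 5:11] = 1.0

# Random binary illumination patterns and the 1-D bucket signals b_k = sum(P_k * T).
patterns = rng.integers(0, 2, size=(m, n, n)).astype(float)
bucket = np.einsum('kij,ij->k', patterns, obj)

# Classical correlation-based ghost-imaging estimate: <(b - <b>) P>.
# (A DL solver would instead map `bucket` directly to the image.)
gi = np.einsum('k,kij->ij', bucket - bucket.mean(), patterns) / m

corr = np.corrcoef(gi.ravel(), obj.ravel())[0, 1]
print(round(corr, 2))  # strong correlation with the object
```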
234
Xin Q, Ju G, Zhang C, Xu S. Object-independent image-based wavefront sensing approach using phase diversity images and deep learning. OPTICS EXPRESS 2019; 27:26102-26119. [PMID: 31510471 DOI: 10.1364/oe.27.026102] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 08/05/2019] [Indexed: 06/10/2023]
Abstract
This paper proposes an image-based wavefront sensing approach using deep learning, which is applicable to both point sources and arbitrary extended scenes, while the training process is performed without any simulated or real extended scenes. Rather than directly recovering phase information from image-plane intensities, we first extract a special feature in the frequency domain that is independent of the original objects and determined only by the phase aberrations (a pair of phase diversity images is needed in this process). A deep long short-term memory (LSTM) network (a variant of the recurrent neural network) is then introduced to establish an accurate non-linear mapping between the extracted feature image and the phase aberrations. Simulations and an experiment demonstrate the effectiveness and accuracy of the proposed approach. Further discussion demonstrates the superior non-linear fitting capacity of the deep LSTM compared to ResNet-18 (a variant of the convolutional neural network) for the problem encountered in this paper. The effect of the incoherence of the light on the accuracy of the recovered wavefront phase is also quantitatively discussed. This work will contribute to the application of deep learning to image-based wavefront sensing and high-resolution image reconstruction.
235
Sun M, Chen X, Zhu Y, Li D, Mu Q, Xuan L. Neural network model combined with pupil recovery for Fourier ptychographic microscopy. OPTICS EXPRESS 2019; 27:24161-24174. [PMID: 31510310 DOI: 10.1364/oe.27.024161] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Fourier ptychographic microscopy (FPM) is a recently developed imaging approach aimed at circumventing the limit of the space-bandwidth product (SBP) and acquiring a complex image with both a wide field of view and high resolution. In many algorithms proposed so far to solve the FPM reconstruction problem, the pupil function is set to a fixed value such as the coherent transfer function (CTF) of the system. However, the pupil aberration of the optical components in an FPM imaging system can significantly degrade the quality of the reconstruction results. In this paper, we build a trainable network (FINN-P) that combines pupil recovery with the forward imaging process of FPM, based on TensorFlow. Both the spectrum of the sample and the pupil function are treated as two-dimensional (2D) learnable weights of layers, so the complex object information and the pupil function can be obtained simultaneously by minimizing the loss function during training. Simulated datasets are used to verify the effectiveness of pupil recovery, and experiments on an open-source measured dataset demonstrate that our method achieves better reconstruction results even in the presence of large aberrations. In addition, the recovered pupil function provides a good estimate for further analysis of the system's optical transmission capability.
236
Large-scale waterproof and stretchable textile-integrated laser- printed graphene energy storages. Sci Rep 2019; 9:11822. [PMID: 31413348 PMCID: PMC6694168 DOI: 10.1038/s41598-019-48320-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Accepted: 07/31/2019] [Indexed: 11/09/2022] Open
Abstract
Textile-integrable, large-scale on-chip energy storages and solar energy storages play a significant role in the realization of next-generation wearable devices for sensing, wireless communication, and health tracking. In general, these energy storages require major features such as mechanical robustness, environmental friendliness, high-temperature tolerance, a non-explosive nature, and long-term storage duration. Here we report on large-scale laser-printed graphene supercapacitors of dimension 100 cm2, fabricated in 3 minutes on textiles, with excellent water stability, an areal capacitance of 49 mF cm−2, an energy density of 6.73 mWh cm−2, a power density of 2.5 mW cm−2, and stretchability up to 200%. Further, we demonstrate a textile-integrated solar energy storage with stable performance, taking up to 20 days to fall to half of the maximum output potential. These cost-effective, self-reliant on-chip charging units can become an integral part of future electronic and optoelectronic textiles.
237
Balin I, Garmider V, Long Y, Abdulhalim I. Training artificial neural network for optimization of nanostructured VO2-based smart window performance. OPTICS EXPRESS 2019; 27:A1030-A1040. [PMID: 31510489 DOI: 10.1364/oe.27.0a1030] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Accepted: 03/28/2019] [Indexed: 05/23/2023]
Abstract
In this work, we apply for the first time a machine learning approach to design and optimize the performance of VO2-based nanostructured smart windows. An artificial neural network was trained to find the relationship between the smart window's structural parameters and its performance metrics: luminous transmittance (Tlum) and solar modulation (ΔTsol), calculated by first-principles electromagnetic simulations (the FDTD method). Once training was accomplished, the combination of optimal Tlum and ΔTsol was found by applying a classical trust-region algorithm to the trained network. The proposed method allows flexibility in the definition of the optimization problem and provides clear uncertainty limits for future experimental realizations.
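The surrogate-model workflow this abstract describes (expensive simulation, cheap trained model, classical optimizer on the model) can be sketched in a few lines. This is a deliberately toy illustration under stated assumptions: a 1-D quadratic stands in for the FDTD simulation, a polynomial fit stands in for the neural network, and a grid search stands in for the trust-region step.

```python
import numpy as np

# Stand-in for an expensive FDTD run mapping one structural parameter
# (hypothetically, a VO2 fill fraction) to a single performance score.
def expensive_simulation(p):
    return -(p - 0.6) ** 2 + 0.5

# Sample a few design points, fit a cheap surrogate, then optimize the
# surrogate instead of the simulator.
samples = np.linspace(0.0, 1.0, 7)
scores = expensive_simulation(samples)
coeffs = np.polyfit(samples, scores, deg=2)     # surrogate model
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmax(np.polyval(coeffs, grid))]
print(round(best, 2))  # the surrogate recovers the optimum near 0.6
```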
238
Jaferzadeh K, Hwang SH, Moon I, Javidi B. No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network. BIOMEDICAL OPTICS EXPRESS 2019; 10:4276-4289. [PMID: 31453010 PMCID: PMC6701551 DOI: 10.1364/boe.10.004276] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Revised: 07/11/2019] [Accepted: 07/23/2019] [Indexed: 05/05/2023]
Abstract
Digital propagation of an off-axis hologram can provide the quantitative phase-contrast image if the exact distance between the sensor plane (such as a CCD) and the reconstruction plane is provided. In this paper, we present a deep convolutional neural network with a regression layer as the top layer to estimate the best reconstruction distance. Experimental results obtained using microsphere beads and red blood cells show that the proposed method can accurately predict the propagation distance from a filtered hologram. The result is compared with a conventional automatic focus-evaluation function. Additionally, our approach can be utilized at the single-cell level, which is useful for cell-to-cell depth measurement and cell adhesion studies.
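The digital propagation step underlying this work can be sketched with a generic angular-spectrum propagator (an illustrative numpy implementation, not the authors' code; wavelength and pixel pitch below are assumed values). A conventional autofocus routine would propagate to many candidate distances and score sharpness, which is the search the paper's regression CNN avoids.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method (generic numerical propagator; evanescent components are clamped)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A weak phase object: propagating forward then backward is the identity.
rng = np.random.default_rng(1)
field = np.exp(1j * rng.normal(scale=0.2, size=(64, 64)))
defocused = angular_spectrum(field, wavelength=0.5e-6, dx=2e-6, z=100e-6)
refocused = angular_spectrum(defocused, wavelength=0.5e-6, dx=2e-6, z=-100e-6)
print(np.allclose(refocused, field))  # True
```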
Affiliation(s)
- Keyvan Jaferzadeh
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Seung-Hyeon Hwang
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Inkyu Moon
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu, 42988, South Korea
- Corresponding author
- Bahram Javidi
- Department of Electrical and Computer Engineering, U-4157, University of Connecticut, Storrs, Connecticut 06269-4157, USA
239
Hai H, Pan S, Liao M, Lu D, He W, Peng X. Cryptanalysis of random-phase-encoding-based optical cryptosystem via deep learning. OPTICS EXPRESS 2019; 27:21204-21213. [PMID: 31510202 DOI: 10.1364/oe.27.021204] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Accepted: 07/06/2019] [Indexed: 06/10/2023]
Abstract
Random Phase Encoding (RPE) techniques for image encryption have drawn increasing attention during the past decades. We demonstrate in this contribution that RPE-based optical cryptosystems are vulnerable to a chosen-plaintext attack (CPA) with a deep learning strategy. A deep neural network (DNN) model is employed and trained to learn the working mechanism of the optical cryptosystem, finally yielding an optimized DNN that acts as a decryption system. Numerical simulations were carried out to verify the feasibility and reliability of the attack on not only the classical Double RPE (DRPE) scheme but also the security-enhanced Triple RPE (TRPE) scheme. The results further indicate the possibility of reconstructing images (plaintexts) outside the original data set.
240
Işıl Ç, Oktem FS, Koç A. Deep iterative reconstruction for phase retrieval. APPLIED OPTICS 2019; 58:5422-5431. [PMID: 31504010 DOI: 10.1364/ao.58.005422] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, the reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and superresolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts, and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which has little additional computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but also is more robust to different initialization and noise levels.
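The model-based HIO method this paper builds on alternates between the measured Fourier-magnitude constraint and an object-domain constraint. Below is a minimal textbook-style sketch (illustrative only, not the paper's implementation; the iteration count, beta, and the toy object are assumed choices):

```python
import numpy as np

def hio(magnitude, support, n_iter=300, beta=0.9, seed=0):
    """Minimal hybrid input-output (HIO) iteration: impose the measured
    Fourier magnitude, then relax pixels that violate the support or
    positivity constraint by the standard feedback rule."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        gp = np.real(np.fft.ifft2(magnitude * np.exp(1j * np.angle(G))))
        violating = (~support) | (gp < 0)
        g = np.where(violating, g - beta * gp, gp)
    return g * support

# Toy problem: recover an image inside a known support from |FFT| alone.
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
support = np.zeros((32, 32), dtype=bool); support[10:22, 10:22] = True
recon = hio(np.abs(np.fft.fft2(obj)), support)
print(recon.shape)  # (32, 32)
```

The paper's contribution is to interleave a DNN artifact-removal step with exactly this kind of iteration, which reduces the sensitivity to initialization and noise noted in the abstract.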
241
Shimobaba T, Blinder D, Makowski M, Schelkens P, Yamamoto Y, Hoshi I, Nishitsuji T, Endo Y, Kakue T, Ito T. Dynamic-range compression scheme for digital hologram using a deep neural network. OPTICS LETTERS 2019; 44:3038-3041. [PMID: 31199375 DOI: 10.1364/ol.44.003038] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Accepted: 05/19/2019] [Indexed: 06/09/2023]
Abstract
This Letter proposes a dynamic-range compression and decompression scheme for digital holograms that uses a deep neural network (DNN). The scheme uses simple thresholding to compress holograms with 8-bit gradation to binary holograms. Although this decreases the amount of data to one-eighth, the binarization strongly degrades the quality of the reconstructed images. The scheme therefore uses a DNN to predict the original gradation holograms from the binary holograms, and the error-diffusion algorithm in the binarization process contributes significantly to training the DNN. The performance of the scheme exceeds that of modern compression techniques such as JPEG 2000 and High Efficiency Video Coding.
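The compression half of this scheme, binarization with error diffusion, can be sketched as follows. This is a hedged illustration: the Letter does not specify the diffusion kernel, so the Floyd-Steinberg weights here are an assumption, and the DNN decompression half is not shown.

```python
import numpy as np

def error_diffusion(hologram):
    """Binarize an 8-bit hologram while pushing each pixel's quantization
    error onto its unprocessed neighbours (Floyd-Steinberg weights)."""
    img = hologram.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 255.0 if img[y, x] >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# 8-bit gradation test image; the binary output preserves the local mean.
gray = np.tile(np.arange(0, 256, 8, dtype=float), (32, 1))
out = error_diffusion(gray)
print(out.shape, sorted(np.unique(out).tolist()))
```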
242
Yang T, Cheng D, Wang Y. Direct generation of starting points for freeform off-axis three-mirror imaging system design using neural network based deep-learning. OPTICS EXPRESS 2019; 27:17228-17238. [PMID: 31252936 DOI: 10.1364/oe.27.017228] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 05/27/2019] [Indexed: 06/09/2023]
Abstract
In this paper, we propose a framework for generating starting points for freeform reflective triplets using a back-propagation neural network and deep learning. The network is trained using various system specifications and the corresponding surface data obtained by system evolution as the data set. Good starting points for specific system specifications, suitable for further optimization, can then be generated immediately by the trained network. The feasibility of this design process is validated by designing a Wetherell-configuration freeform off-axis reflective triplet. The amount of time and human effort, as well as the dependence on advanced design skills, is significantly reduced. These results highlight the powerful ability of deep learning in the field of freeform imaging optical design.
243
Research on Scene Classification Method of High-Resolution Remote Sensing Images Based on RFPNet. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9102028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
One of the challenges in the field of remote sensing is how to automatically identify and classify high-resolution remote sensing images. A number of approaches have been proposed, but methods based on low-level and middle-level visual features have limitations. This paper therefore adopts deep learning to classify scenes of high-resolution remote sensing images and learn their semantic information. Most existing convolutional neural network methods apply transfer learning to an existing model, while relatively few works design new convolutional neural networks for the available high-resolution remote sensing image datasets. In this context, this paper proposes a multi-view scaling strategy and a new convolutional neural network, named RFPNet, based on residual blocks and a fusing strategy for pooling-layer maps, and uses optimization methods to make RFPNet more robust. Experiments were conducted on two benchmark remote sensing image datasets. On the UC Merced dataset, the test accuracy, precision, recall, and F1-score all exceed 93%; on the SIRI-WHU dataset, they all exceed 91%. Compared with existing methods, including the most traditional methods and some deep learning methods for scene classification of high-resolution remote sensing images, the proposed method has higher accuracy and robustness.
244
Wang K, Li Y, Kemao Q, Di J, Zhao J. One-step robust deep learning phase unwrapping. OPTICS EXPRESS 2019; 27:15100-15115. [PMID: 31163947 DOI: 10.1364/oe.27.015100] [Citation(s) in RCA: 79] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Phase unwrapping is an important but challenging issue in phase measurement. Even after a few decades of research effort, the problem remains not well solved, especially when heavy noise and aliasing (undersampling) are present. We propose a database generation method for phase-type objects and a one-step deep learning phase unwrapping method. With a trained deep neural network, the unseen phase fields of living mouse osteoblasts and a dynamic candle flame are successfully unwrapped, demonstrating that the complicated nonlinear phase unwrapping task can be fulfilled directly in one step by a single deep neural network. Excellent anti-noise and anti-aliasing performance, outperforming classical methods, is highlighted in this paper.
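The problem this paper attacks is easy to state in code. A minimal 1-D illustration of classical unwrapping (using `numpy.unwrap`, not the paper's network): it succeeds on a well-sampled ramp, but fails exactly when neighbouring samples differ by more than pi (aliasing), which is the regime the DL method targets.

```python
import numpy as np

# A smooth "true" phase ramp that exceeds 2*pi.
true_phase = np.linspace(0, 6 * np.pi, 100)
wrapped = np.angle(np.exp(1j * true_phase))   # measured phase, folded into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # classical 1-D unwrapping

print(np.allclose(unwrapped, true_phase))  # True: well-sampled, so it works
```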
245
Luo Z, Yurt A, Stahl R, Lambrechts A, Reumers V, Braeken D, Lagae L. Pixel super-resolution for lens-free holographic microscopy using deep learning neural networks. OPTICS EXPRESS 2019; 27:13581-13595. [PMID: 31163820 DOI: 10.1364/oe.27.013581] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 01/10/2019] [Indexed: 06/09/2023]
Abstract
Lens-free holographic microscopy (LFHM) provides a cost-effective tool for large field-of-view imaging in various biomedical applications. However, due to the unit optical magnification, its spatial resolution is limited by the pixel size of the imager. Pixel super-resolution (PSR) technique tackles this problem by using a series of sub-pixel shifted low-resolution (LR) lens-free holograms to form the high-resolution (HR) hologram. Conventional iterative PSR methods require a large number of measurements and a time-consuming reconstruction process, limiting the throughput of LFHM in practice. Here we report a deep learning-based PSR approach to enhance the resolution of LFHM. Compared with the existing PSR methods, our neural network-based approach outputs the HR hologram in an end-to-end fashion and maintains consistency in resolution improvement with a reduced number of LR holograms. Moreover, by exploiting the resolution degradation model in the imaging process, the network can be trained with a data set synthesized from the LR hologram itself without resorting to the HR ground truth. We validated the effectiveness and the robustness of our method by imaging various types of samples using a single network trained on an entirely different data set. This deep learning-based PSR approach can significantly accelerate both the data acquisition and the HR hologram reconstruction processes, therefore providing a practical solution to fast, lens-free, super-resolution imaging.
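The resolution-degradation model exploited for self-supervised training here can be sketched as pixel binning: a large sensor pixel integrates a block of high-resolution pixels (a hedged numpy illustration of the general LR/HR pair-synthesis idea, not the paper's exact model; sub-pixel shifted versions of the same operation would yield the multiple LR frames PSR uses).

```python
import numpy as np

def pixel_binning(hr, factor=2):
    """Degradation model for synthesizing LR/HR training pairs: each large
    sensor pixel averages a `factor` x `factor` block of HR pixels."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

hr = np.arange(16.0).reshape(4, 4)
lr = pixel_binning(hr, 2)
print(lr)  # [[ 2.5  4.5]
           #  [10.5 12.5]]
```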
246
Xue Y, Cheng S, Li Y, Tian L. Reliable deep-learning-based phase imaging with uncertainty quantification. OPTICA 2019; 6:618-619. [PMID: 34350313 PMCID: PMC8329751 DOI: 10.1364/optica.6.000618] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space-bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.
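The per-pixel uncertainty map idea can be illustrated schematically: repeated stochastic forward passes (as a BNN or MC-dropout would produce) are summarized by their mean (the prediction) and standard deviation (the credibility map). This numpy sketch only mimics such passes with synthetic perturbations; it is an assumption-laden stand-in, not the authors' Bayesian network.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)

# Stand-in for 64 stochastic forward passes: each "pass" perturbs the
# prediction, with larger perturbations where the model is less certain
# (here, artificially, at large x).
passes = np.stack([np.sin(2 * np.pi * x) + rng.normal(scale=0.02 + 0.2 * x)
                   for _ in range(64)])

mean_pred = passes.mean(axis=0)    # point estimate
uncertainty = passes.std(axis=0)   # per-pixel credibility map

print(uncertainty[-1] > uncertainty[0])  # True: uncertainty grows with x
```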
247
Iyer RR, Liu YZ, Boppart SA. Automated sensorless single-shot closed-loop adaptive optics microscopy with feedback from computational adaptive optics. OPTICS EXPRESS 2019; 27:12998-13014. [PMID: 31052832 PMCID: PMC6825599 DOI: 10.1364/oe.27.012998] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2019] [Revised: 04/02/2019] [Accepted: 04/02/2019] [Indexed: 05/02/2023]
Abstract
Traditional wavefront-sensor-based adaptive optics (AO) techniques face numerous challenges that cause poor performance in scattering samples. Sensorless closed-loop AO techniques overcome these challenges by optimizing an image metric at different states of a deformable mirror (DM). This requires acquisition of a series of images continuously for optimization - an arduous task in dynamic in vivo samples. We present a technique where the different states of the DM are instead simulated using computational adaptive optics (CAO). The optimal wavefront is estimated by performing CAO on an initial volume to minimize an image metric, and then the pattern is translated to the DM. In this paper, we have demonstrated this technique on a spectral-domain optical coherence microscope for three applications: real-time depth-wise aberration correction, single-shot volumetric aberration correction, and extension of depth-of-focus. Our technique overcomes the disadvantages of sensor-based AO, reduces the number of image acquisitions compared to traditional sensorless AO, and retains the advantages of both computational and hardware-based AO.
Affiliation(s)
- Rishyashring R. Iyer
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Yuan-Zhi Liu
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Stephen A. Boppart
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
248
Fan X, Healy JJ, O'Dwyer K, Hennelly BM. Label-free color staining of quantitative phase images of biological cells by simulated Rheinberg illumination. APPLIED OPTICS 2019; 58:3104-3114. [PMID: 31044784 DOI: 10.1364/ao.58.003104] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Accepted: 03/18/2019] [Indexed: 06/09/2023]
Abstract
Modern microscopes are designed with functionalities that are tailored to enhance image contrast. Dark-field imaging, phase contrast, differential interference contrast, and other optical techniques enable biological cells and other phase-only objects to be visualized. Quantitative phase imaging refers to an emerging set of techniques that allow for the complex transmission function of the sample to be measured. With this quantitative phase image available, any optical technique can then be simulated; it is trivial to generate a phase contrast image or a differential interference contrast image. Rheinberg illumination, proposed almost a century ago, is an optical technique that applies color contrast to images of phase-only objects by introducing a type of optical staining via an amplitude filter placed in the illumination path that consists of two or more colors. In this paper, the complete theory of Rheinberg illumination is derived, from which an algorithm is proposed that can digitally simulate the technique. Results are shown for a number of quantitative phase images of diatom cells obtained via digital holographic microscopy. The results clearly demonstrate the potential of the technique for label-free color staining of subcellular features.
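The core of simulated Rheinberg illumination, assigning one colour to the undiffracted (low-frequency) light and another to the scattered (high-frequency) light in the Fourier plane, can be sketched as follows. This is a simplified illustration of the general idea, not the paper's derivation: the two-colour filter, cutoff radius, and colour choices are assumptions.

```python
import numpy as np

def simulated_rheinberg(phase, inner_rgb=(0, 0, 1), outer_rgb=(1, 1, 0), cutoff=0.1):
    """Digital Rheinberg sketch: low spatial frequencies get the central
    filter colour, high frequencies get the annulus colour, per channel."""
    field = np.exp(1j * phase)                 # phase-only object
    F = np.fft.fftshift(np.fft.fft2(field))
    n = phase.shape[0]
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / n
    low = fx**2 + fy**2 <= cutoff**2
    out = np.zeros(phase.shape + (3,))
    for c in range(3):
        filt = np.where(low, inner_rgb[c], outer_rgb[c])
        out[..., c] = np.abs(np.fft.ifft2(np.fft.ifftshift(F * filt)))**2
    return out / out.max()

# Optically "stain" a small square phase object.
img = simulated_rheinberg(np.pad(np.ones((8, 8)), 12))
print(img.shape)  # (32, 32, 3)
```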
249
Liu T, de Haan K, Rivenson Y, Wei Z, Zeng X, Zhang Y, Ozcan A. Deep learning-based super-resolution in coherent imaging systems. Sci Rep 2019; 9:3926. [PMID: 30850721 PMCID: PMC6408569 DOI: 10.1038/s41598-019-40554-1] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Accepted: 02/19/2019] [Indexed: 11/28/2022] Open
Abstract
We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. The capabilities of this approach are experimentally validated by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics.
Affiliation(s)
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Zhensong Wei
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Xin Zeng
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Yibo Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
250
Zhang H, Jiang S, Liao J, Deng J, Liu J, Zhang Y, Zheng G. Near-field Fourier ptychography: super-resolution phase retrieval via speckle illumination. OPTICS EXPRESS 2019; 27:7498-7512. [PMID: 30876313 PMCID: PMC6825623 DOI: 10.1364/oe.27.007498] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Revised: 02/08/2019] [Accepted: 02/09/2019] [Indexed: 05/21/2023]
Abstract
High spatial resolution is the goal of many imaging systems. While designing a high-resolution lens with diffraction-limited performance over a large field of view remains a difficult task, creating a complex speckle pattern with wavelength-limited spatial features is easily accomplished with a simple random diffuser. With this observation and the concept of near-field ptychography, we report a new imaging modality, termed near-field Fourier ptychography, to address high-resolution imaging challenges in both microscopic and macroscopic imaging settings. 'Near-field' refers to placing the object at a short defocus distance with a large Fresnel number. We project a speckle pattern with fine spatial features on the object instead of directly resolving the spatial features via a high-resolution lens. We then translate the object (or speckle) to different positions and acquire the corresponding images by using a low-resolution lens. A ptychographic phase retrieval process is used to recover the complex object, the unknown speckle pattern, and the coherent transfer function at the same time. In a microscopic imaging setup, we use a 0.12 numerical aperture (NA) lens to achieve an NA of 0.85 in the reconstruction process. In a macroscale photographic imaging setup, we achieve ~7-fold resolution gain by using a photographic lens. The collection optics do not determine the final achievable resolution; rather, the speckle pattern's feature size does. This is similar to our recent demonstration in fluorescence imaging settings (Guo et al., Biomed. Opt. Express, 9(1), 2018). The reported imaging modality can be employed in light, coherent X-ray, and transmission electron imaging systems to increase resolution and provide quantitative absorption and object phase contrast.
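The acquire-and-recover loop described in this abstract can be illustrated with a toy ptychographic phase retrieval (a PIE-style magnitude-projection update). The sketch below simplifies the paper's method: the unit-amplitude speckle pattern and the circular coherent transfer function are assumed known (the paper recovers both jointly with the object), and all sizes, shifts, and iteration counts are illustrative.

```python
import numpy as np

N, CUTOFF = 32, 6
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:N, 0:N]
CTF = (np.hypot(yy - N // 2, xx - N // 2) <= CUTOFF).astype(float)  # low-NA lens

def to_detector(field):
    """Exit wave -> low-resolution detector field through the lens CTF."""
    return np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(np.fft.fft2(field)) * CTF))

# Ground-truth complex object and a unit-amplitude random-phase speckle pattern.
obj = np.exp(1j * 1.5 * rng.standard_normal((N, N)))
speckle = np.exp(2j * np.pi * rng.random((N, N)))

# Translate the speckle to 16 positions and record intensity-only images.
shifts = [(dy, dx) for dy in range(0, N, 8) for dx in range(0, N, 8)]
meas = [np.abs(to_detector(obj * np.roll(speckle, s, (0, 1)))) for s in shifts]

def residual(est):
    """Data mismatch between the estimate's predictions and the measurements."""
    return sum(np.linalg.norm(np.abs(to_detector(est * np.roll(speckle, s, (0, 1)))) - m)
               for s, m in zip(shifts, meas))

est = np.ones((N, N), complex)   # flat starting guess
err0 = residual(est)
for _ in range(30):
    for s, m in zip(shifts, meas):
        p = np.roll(speckle, s, (0, 1))
        psi = est * p                                   # exit wave
        F = np.fft.fftshift(np.fft.fft2(psi))
        g = np.fft.ifft2(np.fft.ifftshift(F * CTF))     # predicted detector field
        g = m * np.exp(1j * np.angle(g))                # enforce measured magnitude
        Fg = np.fft.fftshift(np.fft.fft2(g))
        F = Fg * CTF + F * (1 - CTF)                    # keep unmeasured frequencies
        psi_new = np.fft.ifft2(np.fft.ifftshift(F))
        est = est + np.conj(p) * (psi_new - psi)        # |p| == 1, so full step
err1 = residual(est)
```

Because the random-phase speckle mixes high object frequencies down into the lens passband, the sequential magnitude projections drive the data residual down while recovering object content beyond the collection NA — the mechanism behind the paper's resolution gain.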
Affiliation(s)
- He Zhang
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Ultra-Precision Optoelectronic Instrument Engineering Center, Harbin Institute of Technology, Harbin 150001, China
- These authors contributed equally to this work
- Shaowei Jiang
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- These authors contributed equally to this work
- Jun Liao
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Junjing Deng
- Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439, USA
- Jian Liu
- Ultra-Precision Optoelectronic Instrument Engineering Center, Harbin Institute of Technology, Harbin 150001, China
- Yongbing Zhang
- Shenzhen Key Lab of Broadband Network and Multimedia, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China
- Guoan Zheng
- Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269, USA