101
Huang K, Matsumura H, Zhao Y, Herbig M, Yuan D, Mineharu Y, Harmon J, Findinier J, Yamagishi M, Ohnuki S, Nitta N, Grossman AR, Ohya Y, Mikami H, Isozaki A, Goda K. Deep imaging flow cytometry. Lab Chip 2022; 22:876-889. [PMID: 35142325] [DOI: 10.1039/d1lc01043c]
Abstract
Imaging flow cytometry (IFC) has become a powerful tool for diverse biomedical applications by virtue of its ability to image single cells in a high-throughput manner. However, a challenge remains in the fundamental trade-off between throughput, sensitivity, and spatial resolution. Here we present deep-learning-enhanced imaging flow cytometry (dIFC) that circumvents this trade-off by implementing an image restoration algorithm on a virtual-freezing fluorescence imaging (VIFFI) flow cytometry platform, enabling higher throughput without sacrificing sensitivity and spatial resolution. A key component of dIFC is a high-resolution (HR) image generator that synthesizes "virtual" HR images from the corresponding low-resolution (LR) images acquired with a low-magnification lens (10×/0.4-NA). A low-magnification lens is favorable for IFC because it reduces the image blur of cells flowing at higher speed, which allows higher throughput. We trained and developed the HR image generator with an architecture containing two generative adversarial networks (GANs). Furthermore, we developed dIFC as a method by combining the trained generator with IFC. We characterized dIFC using Chlamydomonas reinhardtii cell images, fluorescence in situ hybridization (FISH) images of Jurkat cells, and Saccharomyces cerevisiae (budding yeast) cell images, showing that dIFC images closely match images obtained with a high-magnification lens (40×/0.95-NA), at a high flow speed of 2 m s⁻¹. Lastly, we employed dIFC to show enhancements in the accuracy of FISH-spot counting and neck-width measurement of budding yeast cells. These results pave the way for statistical analysis of cells with high-dimensional spatial information.
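The (LR, HR) pairing that such a generator learns can be emulated with a simple degradation model. A minimal NumPy sketch, assuming Gaussian blur plus downsampling stands in for the low-magnification optics; the paper's actual degradation characteristics and two-GAN architecture are not reproduced here:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, kernel):
    """Circular convolution via FFT -- adequate for a sketch."""
    pad = np.zeros_like(img)
    s = kernel.shape[0]
    pad[:s, :s] = kernel
    pad = np.roll(pad, -(s // 2), axis=(0, 1))  # center kernel at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def make_pair(hr, factor=4):
    """Return an (LR, HR) training pair: HR blurred and downsampled."""
    lr = blur(hr, gaussian_kernel())[::factor, ::factor]
    return lr, hr

rng = np.random.default_rng(0)
hr = rng.random((64, 64))     # stand-in for a high-magnification cell image
lr, _ = make_pair(hr)         # low-resolution counterpart for training
```

A generator network would then be trained to map `lr` back to `hr`, with the adversarial losses of the two GANs encouraging realistic detail.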
Affiliation(s)
- Kangrui Huang
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Hiroki Matsumura
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Yaqi Zhao
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Maik Herbig
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Dan Yuan
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Yohei Mineharu
- Department of Neurosurgery, Kyoto University, Kyoto 606-8507, Japan
- Department of Artificial Intelligence in Healthcare and Medicine, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Jeffrey Harmon
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Justin Findinier
- Department of Plant Biology, The Carnegie Institution for Science, Stanford, California 94305, USA
- Mai Yamagishi
- Department of Biological Sciences, The University of Tokyo, Tokyo 113-0033, Japan
- Shinsuke Ohnuki
- Department of Integrated Biosciences, Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8562, Japan
- Arthur R Grossman
- Department of Plant Biology, The Carnegie Institution for Science, Stanford, California 94305, USA
- Department of Biology, Stanford University, Stanford, California 94305, USA
- Yoshikazu Ohya
- Department of Integrated Biosciences, Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8562, Japan
- Collaborative Research Institute for Innovative Microbiology, The University of Tokyo, Tokyo 113-8654, Japan
- Hideharu Mikami
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- PRESTO, Japan Science and Technology Agency, Saitama 332-0012, Japan
- Akihiro Isozaki
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Keisuke Goda
- Department of Chemistry, The University of Tokyo, Tokyo 113-0033, Japan
- Department of Bioengineering, University of California, Los Angeles, California 90095, USA
- Institute of Technological Sciences, Wuhan University, Hubei 430072, China
102
Çetintaş E, Luo Y, Nguyen C, Guo Y, Li L, Zhu Y, Ozcan A. Characterization of exhaled e-cigarette aerosols in a vape shop using a field-portable holographic on-chip microscope. Sci Rep 2022; 12:3175. [PMID: 35210524] [PMCID: PMC8873257] [DOI: 10.1038/s41598-022-07150-2]
Abstract
The past decade marked a drastic increase in the usage of electronic cigarettes. The adverse health impact of secondhand exposure to exhaled e-cig particles has raised significant concerns, demanding further research on the characteristics of these particles. In this work, we report direct volatility measurements on exhaled e-cig aerosols using a field-portable device (termed c-Air) enabled by deep learning and lens-free holographic microscopy; for this analysis, we performed a series of field experiments in a vape shop where customers used/vaped their e-cig products. During four days of experiments, we periodically sampled the indoor air at intervals of ~16 min and collected the exhaled particles with c-Air. Time-lapse inline holograms of the collected particles were recorded by c-Air and reconstructed using a convolutional neural network, yielding phase-recovered microscopic images of the particles. The volumetric decay of individual particles due to evaporation was used as an indicator of the volatility of each aerosol. Volatility dynamics quantified through c-Air experiments showed that indoor vaping increased the percentage of volatile and semi-volatile particles in the air. The reported methodology and findings can guide further studies on the volatility characterization of indoor e-cig emissions.
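The volatility indicator described above reduces to tracking per-particle volume over the time-lapse and thresholding the fractional decay. A hedged sketch; the cutoff values below are hypothetical, not the paper's:

```python
import numpy as np

def decay_fraction(volumes):
    """Fraction of the initial particle volume lost over the time-lapse."""
    v = np.asarray(volumes, dtype=float)
    return 1.0 - v[-1] / v[0]

def classify_volatility(volumes, volatile_cut=0.5, semi_cut=0.1):
    """Label a particle from its volume trace (illustrative thresholds)."""
    f = decay_fraction(volumes)
    if f >= volatile_cut:
        return "volatile"
    if f >= semi_cut:
        return "semi-volatile"
    return "non-volatile"

t = np.linspace(0, 2, 10)          # time-lapse sampling instants
evaporating = np.exp(-t)           # fast volumetric decay (evaporating droplet)
stable = np.ones_like(t)           # no decay (non-volatile particle)
```

The population-level statistic reported in the paper would then be the percentage of particles falling in each class per sampling interval.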
Affiliation(s)
- Ege Çetintaş
- Electrical and Computer Engineering Department, University of California, Los Angeles (UCLA), 420 Westwood Plaza, Engr. IV 68-119, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles (UCLA), 420 Westwood Plaza, Engr. IV 68-119, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Charlene Nguyen
- Department of Environmental Health Sciences, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Yuening Guo
- Department of Environmental Health Sciences, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Liqiao Li
- Department of Environmental Health Sciences, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Yifang Zhu
- Department of Environmental Health Sciences, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles (UCLA), 420 Westwood Plaza, Engr. IV 68-119, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA, 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90095, USA
103
Tahir W, Wang H, Tian L. Adaptive 3D descattering with a dynamic synthesis network. Light Sci Appl 2022; 11:42. [PMID: 35210401] [PMCID: PMC8873471] [DOI: 10.1038/s41377-022-00730-x]
Abstract
Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a descattering network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual "expert" networks need to be trained for each condition. However, the expert's performance sharply degrades when the testing condition differs from the training condition. An alternative brute-force approach is to train a "generalist" network using data from diverse scattering conditions. It generally requires a larger network to encapsulate the diversity in the data and a sufficiently large training set to avoid overfitting. Here, we propose an adaptive learning framework, termed dynamic synthesis network (DSN), which dynamically adjusts the model weights and adapts to different scattering conditions. The adaptability is achieved by a novel "mixture of experts" architecture that enables dynamically synthesizing a network by blending multiple experts using a gating network. We demonstrate the DSN in holographic 3D particle imaging for a variety of scattering conditions. We show in simulation that our DSN provides generalization across a continuum of scattering conditions. In addition, we show that by training the DSN entirely on simulated data, the network can generalize to experiments and achieve robust 3D descattering. We expect the same concept to find many other applications, such as denoising and imaging in scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
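The "mixture of experts" blending can be illustrated with linear experts: a gating vector (produced in the paper by a gating network from the input; here just a softmax over hypothetical scores) convexly combines the experts' weights into one synthesized model. A sketch under those assumptions:

```python
import numpy as np

def softmax(z):
    """Convert gating scores into convex blending coefficients."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def synthesize(expert_weights, gate_scores):
    """Dynamically blend expert weight tensors into one synthesized network."""
    g = softmax(gate_scores)
    return sum(gi * W for gi, W in zip(g, expert_weights)), g

rng = np.random.default_rng(0)
experts = [rng.normal(size=(4, 4)) for _ in range(3)]   # three linear "experts"
scores = np.array([2.0, 0.5, -1.0])                     # stand-in gating output
W, g = synthesize(experts, scores)

x = rng.normal(size=4)
y = W @ x   # one forward pass through the synthesized model
```

In the actual DSN, each layer's kernel would be blended this way and the gate scores would depend on the input, so the synthesized network tracks the scattering condition continuously.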
Affiliation(s)
- Waleed Tahir
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Hao Wang
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
104
Zuo C, Qian J, Feng S, Yin W, Li Y, Fan P, Han J, Qian K, Chen Q. Deep learning in optical metrology: a review. Light Sci Appl 2022; 11:39. [PMID: 35197457] [PMCID: PMC8866517] [DOI: 10.1038/s41377-022-00714-x]
Abstract
With advances in its scientific foundations and technological implementations, optical metrology has become a versatile problem-solving backbone in manufacturing, fundamental research, and engineering applications, such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, has emerged as a powerful tool for addressing problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for deep neural networks. It is currently attracting increasing interest and extensive attention for its utilization in the field of optical metrology. Unlike the traditional "physics-based" approach, deep-learning-enabled optical metrology is a "data-driven" approach, which has already provided numerous alternative solutions to many challenging problems in this field, with better performance. In this review, we present an overview of the current status and the latest progress of deep-learning technologies in the field of optical metrology. We first briefly introduce both the traditional image-processing algorithms used in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by the current deep-learning approach in optical metrology are then discussed. Finally, directions for future research are outlined.
Grants
- National Natural Science Foundation of China (61722506, 61705105, 62075096)
- National Key R&D Program of China (2017YFF0106403)
- Leading Technology of Jiangsu Basic Research Plan (BK20192003)
- National Defense Science and Technology Foundation of China (2019-JCJQ-JJ-381)
- "333 Engineering" Research Project of Jiangsu Province (BRA2016407)
- Fundamental Research Funds for the Central Universities (30920032101, 30919011222)
- Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091801410411)
Affiliation(s)
- Chao Zuo
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiaming Qian
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Shijie Feng
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Wei Yin
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Yixuan Li
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Pengfei Fan
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- School of Engineering and Materials Science, Queen Mary University of London, London, E1 4NS, UK
- Jing Han
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Kemao Qian
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
105
Yolalmaz A, Yüce E. Comprehensive deep learning model for 3D color holography. Sci Rep 2022; 12:2487. [PMID: 35169161] [PMCID: PMC8847588] [DOI: 10.1038/s41598-022-06190-y]
Abstract
Holography is a vital tool used in applications ranging from microscopy, solar energy, imaging, and displays to information encryption. Generating a holographic image and reconstructing object/hologram information from a holographic image with current algorithms are time-consuming processes. Versatile, fast, and accurate methodologies are required to compute holograms that perform color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image, so that optical holograms can be widely adopted. Here, we focus on the design of optical holograms for the generation of holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that show multitasking performance, multiplexing color holographic image planes by tuning holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring phase and amplitude information from the intensity image. We show that the reconstructed objects/holograms are in excellent agreement with the ground-truth images. The CHoloNet does not need iterative reconstruction of object/hologram information, whereas conventional object/hologram recovery methods rely on multiple holographic images at various observation planes along with iterative algorithms. We openly share the fast and efficient framework that we developed in order to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and generation of holographic images will speed up the wide-area implementation of optical holography in microscopy, data encryption, and communication technologies.
Affiliation(s)
- Alim Yolalmaz
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey
- Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
- Emre Yüce
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey
- Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
106
Pirone D, Sirico D, Miccio L, Bianco V, Mugnano M, Ferraro P, Memmolo P. Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning. Lab Chip 2022; 22:793-804. [PMID: 35076055] [DOI: 10.1039/d1lc01087e]
Abstract
Tomographic flow cytometry by digital holography is an emerging imaging modality capable of collecting multiple views of moving and rotating cells with the aim of recovering their refractive index distribution in 3D. Although this modality gives access to high-resolution imaging at high throughput, the huge amount of time-lapse holographic images to be processed (hundreds of digital holograms per cell) constitutes the actual bottleneck. This prevents the system from being suitable for lab-on-a-chip platforms in real-world applications, where fast analysis of measured data is mandatory. Here we demonstrate a significant speed-up in the reconstruction of phase-contrast tomograms by introducing a multi-scale fully-convolutional context aggregation network into the processing pipeline. Although it was originally developed in the context of semantic image analysis, we demonstrate for the first time that it can be successfully adapted to a holographic lab-on-chip platform to achieve 3D tomograms through a faster computational process. We trained the network with input-output image pairs to reproduce the end-to-end holographic reconstruction process, i.e. recovering quantitative phase maps (QPMs) of single cells from their digital holograms. The sequence of QPMs of the same rotating cell is then used to perform the tomographic reconstruction. The proposed approach significantly reduces the computational time for retrieving tomograms, making them available in a few seconds instead of tens of minutes, while essentially preserving the high-content information of tomographic data. Moreover, we have accomplished a compact deep convolutional neural network parameterization with a small memory footprint that can fit into on-chip SRAM, demonstrating its possible exploitation for onboard computation in lab-on-chip devices with limited processing hardware resources.
Affiliation(s)
- Daniele Pirone
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- DIETI, Department of Electrical Engineering and Information Technologies, University of Naples "Federico II", via Claudio 21, 80125 Napoli, Italy
- Daniele Sirico
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Lisa Miccio
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Vittorio Bianco
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Martina Mugnano
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pietro Ferraro
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pasquale Memmolo
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
107
Wu P, Zhang D, Yuan J, Zeng S, Gong H, Luo Q, Yang X. Large depth-of-field fluorescence microscopy based on deep learning supported by Fresnel incoherent correlation holography. Opt Express 2022; 30:5177-5191. [PMID: 35209487] [DOI: 10.1364/oe.451409]
Abstract
Fluorescence microscopy plays an irreplaceable role in biomedicine. However, the limited depth of field (DoF) of fluorescence microscopy remains an obstacle to image quality, especially when the sample has an uneven surface or is distributed across different depths. In this manuscript, we combine deep learning with Fresnel incoherent correlation holography to describe a method for obtaining fluorescence microscopy with a significantly larger DoF. First, the hologram is restored from out-of-focus to in-focus by the Auto-ASP method in double-spherical-wave Fresnel incoherent correlation holography. Then, a generative adversarial network eliminates the artifacts introduced by Auto-ASP and outputs a high-quality image as the result. Using fluorescent beads, a USAF target, and mouse brain tissue as samples, we demonstrate a large DoF of more than 400 µm, which is 13 times that of traditional wide-field microscopy. Moreover, our method has a simple structure and can easily be combined with many existing fluorescence microscopy imaging technologies.
108
Bazow B, Phan T, Raub CB, Nehmetallah G. Computational multi-wavelength phase synthesis using convolutional neural networks [Invited]. Appl Opt 2022; 61:B132-B146. [PMID: 35201134] [DOI: 10.1364/ao.439323]
Abstract
Multi-wavelength digital holographic microscopy (MWDHM) provides indirect measurements of the refractive index for non-dispersive samples. Successive-shot MWDHM is not appropriate for dynamic samples and single-shot MWDHM significantly increases the complexity of the optical setup due to the need for multiple lasers or a wavelength tunable source. Here we consider deep learning convolutional neural networks for computational phase synthesis to obtain high-speed simultaneous phase estimates on different wavelengths and thus single-shot estimates of the integral refractive index without increased experimental complexity. This novel, to the best of our knowledge, computational concept is validated using cell phantoms consisting of internal refractive index variations representing cytoplasm and membrane-bound organelles, respectively, and a simulation of a realistic holographic recording process. Specifically, in this work we employed data-driven computational techniques to perform accurate dual-wavelength hologram synthesis (hologram-to-hologram prediction), dual-wavelength phase synthesis (unwrapped phase-to-phase prediction), direct phase-to-index prediction using a single wavelength, hologram-to-phase prediction, and 2D phase unwrapping with sharp discontinuities (wrapped-to-unwrapped phase prediction).
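The dual-wavelength phase synthesis these networks emulate rests on standard two-wavelength interferometry: subtracting the wrapped phases at λ1 and λ2 yields a phase at the synthetic beat wavelength Λ = λ1λ2/|λ1 − λ2|, extending the unambiguous range. A numerical sketch with illustrative wavelengths (not necessarily those used in the paper):

```python
import numpy as np

lam1, lam2 = 532e-9, 633e-9                  # illustrative wavelengths (m)
beat = lam1 * lam2 / abs(lam1 - lam2)        # synthetic wavelength, ~3.33 um

wrap = lambda p: np.angle(np.exp(1j * p))    # wrap phase to (-pi, pi]

h = np.linspace(0, 1.2e-6, 500)              # optical path exceeding both lambdas
phi1 = wrap(2 * np.pi * h / lam1)            # wrapped single-wavelength phases
phi2 = wrap(2 * np.pi * h / lam2)

phi_syn = wrap(phi1 - phi2)                  # phase at the beat wavelength
h_est = phi_syn * beat / (2 * np.pi)         # unambiguous up to ~3.33 um
```

The CNNs in the paper learn to produce the second-wavelength phase (and hence `phi_syn`) from a single-wavelength recording, so this arithmetic can be applied without a second laser.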
109
Live-dead assay on unlabeled cells using phase imaging with computational specificity. Nat Commun 2022; 13:713. [PMID: 35132059] [PMCID: PMC8821584] [DOI: 10.1038/s41467-022-28214-x]
Abstract
Existing approaches to evaluating cell viability involve cell staining with chemical reagents. However, the step of exogenous staining makes these methods undesirable for rapid, nondestructive, and long-term investigation. Here, we present an instantaneous viability assessment of unlabeled cells using phase imaging with computational specificity. This concept utilizes deep learning techniques to compute viability markers associated with the specimen measured by label-free quantitative phase imaging. Demonstrated on different live cell cultures, the proposed method reports approximately 95% accuracy in identifying live and dead cells. The evolution of the cell dry mass and nucleus area for the labeled and unlabeled populations reveals that the chemical reagents decrease viability. The nondestructive approach presented here may find a broad range of applications, from monitoring the production of biopharmaceuticals to assessing the effectiveness of cancer treatments. Common methods for characterising cell viability involve cell staining with chemical reagents. Here the authors report a method for cell viability assessment that does not require labelling; this uses quantitative phase imaging combined with deep learning.
110
Montresor S, Tahon M, Picart P. Deep learning speckle de-noising algorithms for coherent metrology: a review and a phase-shifted iterative scheme [Invited]. J Opt Soc Am A 2022; 39:A62-A78. [PMID: 35200959] [DOI: 10.1364/josaa.444951]
Abstract
We present a review of deep learning algorithms dedicated to the processing of speckle noise in coherent imaging, focusing on methods that specifically perform de-noising of input images. Four main classes of applications are described in this review: optical coherence tomography, synthetic aperture radar imaging, digital holography amplitude imaging, and fringe pattern analysis. We then present deep learning approaches recently developed in our group that rely on retraining residual convolutional neural network structures to process decorrelation phase noise. The paper ends with the presentation of a new approach that uses an iterative scheme controlled by an input SNR estimator associated with a phase-shifting procedure.
111
Guo Z, Levitan A, Barbastathis G, Comin R. Randomized probe imaging through deep k-learning. Opt Express 2022; 30:2247-2264. [PMID: 35209369] [DOI: 10.1364/oe.445498]
Abstract
Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep learning based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets in photon-starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
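The key preprocessing step, applying the physical operator to form an approximate object that the network then refines, can be sketched with a phase-only random probe and unitary FFT propagation. The zero-phase amplitude back-propagation below is an assumed simplification, not the paper's exact approximant:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
probe = np.exp(2j * np.pi * rng.random((n, n)))   # highly randomized phase probe

F = lambda u: np.fft.fft2(u, norm="ortho")        # unitary far-field propagation
Fi = lambda u: np.fft.ifft2(u, norm="ortho")

def forward(x):
    """A x: far field of the probe-modulated object."""
    return F(probe * x)

def adjoint(y):
    """A^H y: conjugate probe times inverse propagation."""
    return np.conj(probe) * Fi(y)

def approximant(intensity):
    """Approximate object fed to the network: back-propagate the measured
    far-field amplitude with a zero-phase guess (assumption, for illustration)."""
    return adjoint(np.sqrt(intensity))

x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # toy object
intensity = np.abs(forward(x)) ** 2                         # single-frame data
x0 = approximant(intensity)                                 # network input
```

Because the physics is already encoded in `approximant`, the network only has to learn the residual mapping from `x0` to the true object.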
112
Castaneda R, Trujillo C, Doblas A. Video-Rate Quantitative Phase Imaging Using a Digital Holographic Microscope and a Generative Adversarial Network. Sensors 2021; 21:8021. [PMID: 34884025] [PMCID: PMC8659916] [DOI: 10.3390/s21238021]
Abstract
The conventional reconstruction method of off-axis digital holographic microscopy (DHM) relies on computational processing that involves spatial filtering of the sample spectrum and tilt compensation between the interfering waves to accurately reconstruct the phase of a biological sample. Additional computational procedures such as numerical focusing may be needed to reconstruct distortion-free quantitative phase images, depending on the optical configuration of the DHM system. Regardless of the implementation, any DHM computational processing leads to long processing times, hampering the use of DHM for video-rate rendering of dynamic biological processes. In this study, we report on a conditional generative adversarial network (cGAN) for robust and fast quantitative phase imaging in DHM. The reconstructed phase images provided by the GAN model present stable background levels, enhancing the visualization of the specimens under different experimental conditions in which the conventional approach often fails. The proposed learning-based method was trained and validated using human red blood cells recorded on an off-axis Mach–Zehnder DHM system. After proper training, the proposed GAN yields a computationally efficient method, reconstructing DHM images seven times faster than conventional computational approaches.
Affiliation(s)
- Raul Castaneda: Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA
- Carlos Trujillo: Applied Optics Group, Physical Sciences Department, Universidad EAFIT, Medellin 050037, Colombia
- Ana Doblas (correspondence): Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA
113
Kumar S. Phase retrieval with physics informed zero-shot network. OPTICS LETTERS 2021; 46:5942-5945. [PMID: 34851929 DOI: 10.1364/ol.433625] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Accepted: 11/10/2021] [Indexed: 06/13/2023]
Abstract
Phase can be reliably estimated from a single diffracted intensity image if faithful prior information about the object is available. Examples include amplitude bounds, object support, sparsity in the spatial or transform domain, deep image prior, and the prior learned from labeled datasets by a deep neural network. Deep learning facilitates state-of-the-art reconstruction quality but requires a large labeled dataset (ground-truth measurement pairs acquired under the same experimental conditions) for training. To alleviate this data requirement, this Letter proposes a zero-shot learning method. The Letter demonstrates that the object prior learned by a deep neural network while being trained for a denoising task can also be utilized for phase retrieval if the diffraction physics is effectively enforced on the network output. The Letter additionally demonstrates that incorporating total variation in the proposed zero-shot framework yields reconstructions of similar quality in less time (e.g., ∼9-fold faster for a test reported in this Letter).
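The physics-enforced objective this Letter describes, a diffraction constraint on the network output plus a total-variation term, might look roughly like the NumPy sketch below. The plain-Fourier far-field model, the TV weight, and the finite-difference TV are assumptions of this sketch, not the Letter's implementation.

```python
import numpy as np

def physics_tv_loss(est: np.ndarray, measured_intensity: np.ndarray,
                    tv_weight: float = 0.01) -> float:
    """Data fidelity enforcing diffraction physics on a (real-valued) network
    output, plus an anisotropic total-variation penalty."""
    # Enforce the physics: predicted far-field amplitude vs measured amplitude.
    predicted_amp = np.abs(np.fft.fft2(est))
    fidelity = np.mean((predicted_amp - np.sqrt(measured_intensity)) ** 2)
    # Total variation: sum of absolute forward differences along both axes.
    tv = np.abs(np.diff(est, axis=0)).sum() + np.abs(np.diff(est, axis=1)).sum()
    return float(fidelity + tv_weight * tv)

# At the true object the fidelity term vanishes; only the TV penalty remains.
rng = np.random.default_rng(1)
x = rng.random((32, 32))
I = np.abs(np.fft.fft2(x)) ** 2
print(physics_tv_loss(x, I))
```

In a zero-shot setting this scalar would be minimized over the network weights for the single measurement at hand, with no ground-truth phase ever supplied.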
114
Zeng T, Zhu Y, Lam EY. Deep learning for digital holography: a review. OPTICS EXPRESS 2021; 29:40572-40593. [PMID: 34809394 DOI: 10.1364/oe.443367] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/08/2021] [Indexed: 06/13/2023]
Abstract
Recent years have witnessed unprecedented progress in deep learning applications in digital holography (DH). Nevertheless, there remains huge potential for deep learning to further improve performance and enable new functionalities for DH. Here, we survey recent developments in various DH applications powered by deep learning algorithms. This article starts with a brief introduction to digital holographic imaging, then summarizes the most relevant deep learning techniques for DH, with discussions on their benefits and challenges. We then present case studies covering a wide range of problems and applications in order to highlight research achievements to date. We conclude with an outlook on several promising directions to widen the use of deep learning in various DH applications.
115
Terbe D, Orzó L, Zarándy Á. Deep-learning-based bright-field image generation from a single hologram using an unpaired dataset. OPTICS LETTERS 2021; 46:5567-5570. [PMID: 34780407 DOI: 10.1364/ol.440900] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 10/18/2021] [Indexed: 06/13/2023]
Abstract
We adopted an unpaired neural network training technique, namely CycleGAN, to generate bright-field microscope-like images from hologram reconstructions. The motivation for unpaired training in microscope applications is that the construction of paired/parallel datasets is cumbersome or sometimes not even feasible, for example, in lensless or flow-through holographic measuring setups. Our results show that the proposed method is applicable in these cases and provides results comparable to paired training. Furthermore, it has some favorable properties even though its metric scores are lower. The CycleGAN training results in sharper and, from this point of view, more realistic object reconstructions compared to the baseline paired setting. Finally, we show that a lower metric score for the unpaired training does not necessarily imply worse image generation; rather, the object is synthesized correctly but with a different focal representation.
116
Dai M, Xiao G, Fiondella L, Shao M, Zhang YS. Deep Learning-Enabled Resolution-Enhancement in Mini- and Regular Microscopy for Biomedical Imaging. SENSORS AND ACTUATORS. A, PHYSICAL 2021; 331:112928. [PMID: 34393376 PMCID: PMC8362924 DOI: 10.1016/j.sna.2021.112928] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Artificial intelligence algorithms that aid mini-microscope imaging are attractive for numerous applications. In this paper, we optimize artificial intelligence techniques to provide clear and natural biomedical imaging. We demonstrate that a deep learning-enabled super-resolution method can significantly enhance the spatial resolution of mini-microscopy and regular microscopy. This data-driven approach trains a generative adversarial network to transform low-resolution images into super-resolved ones. Mini-microscopic images and regular-microscopic images acquired with different optical microscopes under various magnifications are collected as our experimental benchmark datasets. The only input to this generative-adversarial-network-based method is images from the datasets down-sampled by bicubic interpolation. We use an independent test set to evaluate this deep learning approach against other deep learning-based algorithms through qualitative and quantitative comparisons. To clearly present the improvements achieved by this generative-adversarial-network-based method, we zoom into local features to explore and highlight the qualitative differences. We also employ the peak signal-to-noise ratio and the structural similarity to quantitatively compare alternative super-resolution methods. The quantitative results illustrate that super-resolution images obtained from our approach with interpolation parameter α=0.25 more closely match the original high-resolution images than those obtained by any of the alternative state-of-the-art methods. These results are significant for fields that use microscopy tools, such as biomedical imaging of engineered living systems. We also utilize this generative-adversarial-network-based algorithm to optimize the resolution of biomedical specimen images and then generate three-dimensional reconstructions, so as to enhance three-dimensional imaging throughout entire volumes for spatial-temporal analyses of specimen structures.
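The two quantitative metrics used in this abstract (PSNR and SSIM) are standard. Minimal NumPy versions are sketched below; note that the SSIM here uses global image statistics, whereas library implementations (e.g., scikit-image) average over local windows.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((reference - test) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def ssim_global(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """Single-window SSIM computed from global statistics (a simplification;
    the standard definition averages SSIM over local windows)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Hypothetical comparison: a clean image vs a mildly noisy copy.
rng = np.random.default_rng(2)
hr = rng.random((64, 64))
noisy = np.clip(hr + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(psnr(hr, noisy), ssim_global(hr, noisy))
```

Higher is better for both metrics; a perfect match gives SSIM of 1 and an unbounded PSNR.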
Affiliation(s)
- Manna Dai: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA
- Gao Xiao: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Lance Fiondella: Department of Electrical and Computer Engineering, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Ming Shao: Department of Computer and Information Science, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Yu Shrike Zhang: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA
117
Xu S, Wang J, Shu H, Zhang Z, Yi S, Bai B, Wang X, Liu J, Zou W. Optical coherent dot-product chip for sophisticated deep learning regression. LIGHT, SCIENCE & APPLICATIONS 2021; 10:221. [PMID: 34725322 PMCID: PMC8560900 DOI: 10.1038/s41377-021-00666-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 09/29/2021] [Accepted: 10/18/2021] [Indexed: 05/31/2023]
Abstract
Optical implementations of neural networks (ONNs) herald next-generation high-speed and energy-efficient deep learning computing by harnessing the technical advantages of the large bandwidth and high parallelism of optics. However, due to an incomplete numerical domain, limited hardware scale, or inadequate numerical accuracy, the majority of existing ONNs have been studied only for basic classification tasks. Given that regression is a fundamental form of deep learning and accounts for a large part of current artificial intelligence applications, it is necessary to master deep learning regression for the further development and deployment of ONNs. Here, we demonstrate a silicon-based optical coherent dot-product chip (OCDC) capable of completing deep learning regression tasks. The OCDC adopts optical fields to carry out operations in the complete real-value domain instead of only the positive domain. Through reuse, a single chip conducts the matrix multiplications and convolutions in neural networks of any complexity. Hardware deviations are also compensated via in-situ backpropagation control, given the simplicity of the chip architecture. The OCDC therefore meets the requirements for sophisticated regression tasks, and we successfully demonstrate a representative neural network, AUTOMAP (a cutting-edge neural network model for image reconstruction). The quality of images reconstructed by the OCDC is comparable to that of a 32-bit digital computer. To the best of our knowledge, there is no precedent for performing such state-of-the-art regression tasks on ONN chips. It is anticipated that the OCDC will promote novel accomplishments of ONNs in modern AI applications including autonomous driving, natural language processing, and scientific study.
Affiliation(s)
- Shaofu Xu, Jing Wang, Sicheng Yi, Weiwen Zou: State Key Laboratory of Advanced Optical Communication Systems and Networks, Intelligent Microwave Lightwave Integration Innovation Center (imLic), Department of Electronic Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, 200240, China
- Haowen Shu, Bowen Bai, Xingjun Wang: State Key Laboratory of Advanced Optical Communications System and Networks, Department of Electronics, School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871, China
- Zhike Zhang, Jianguo Liu: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
118
Li AC, Vyas S, Lin YH, Huang YY, Huang HM, Luo Y. Patch-Based U-Net Model for Isotropic Quantitative Differential Phase Contrast Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3229-3237. [PMID: 34152982 DOI: 10.1109/tmi.2021.3091207] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Quantitative differential phase-contrast (qDPC) imaging is a label-free phase retrieval method for weak phase objects using asymmetric illumination. However, qDPC imaging with fewer intensity measurements leads to an anisotropic phase distribution in the reconstructed images. Obtaining an isotropic phase transfer function requires multiple measurements and is thus a time-consuming process. Here, we demonstrate the feasibility of using a deep learning (DL) method for isotropic qDPC microscopy from the fewest number of measurements. We utilize a commonly used convolutional neural network, the U-net architecture, trained to generate 12-axis isotropic reconstructed cell images (i.e., output) from 1-axis anisotropic cell images (i.e., input). To further extend the number of images for training, the U-net model is trained with a patch-wise approach. In this work, seven different types of living cell images were used for the training, validation, and testing datasets. The results obtained from the testing datasets show that our proposed DL-based method generates 1-axis qDPC images of accuracy similar to 12-axis measurements. The quantitative phase value in the region of interest is recovered from 66% up to 97% of the ground-truth values, providing solid evidence for improved phase uniformity, as well as retrieval of missing spatial frequencies in 1-axis reconstructed images. In addition, results from our model are compared with paired and unpaired CycleGANs. Higher PSNR and SSIM values show the advantage of using the U-net model for isotropic qDPC microscopy. The proposed DL-based method may help in performing high-resolution quantitative studies for cell biology.
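The patch-wise training mentioned in this abstract, which multiplies the effective dataset size, reduces to a sliding-window crop. A minimal sketch follows; the patch size and stride are arbitrary here, not the paper's values.

```python
import numpy as np

def extract_patches(img: np.ndarray, patch: int, stride: int) -> np.ndarray:
    """Cut a 2-D image into overlapping square patches to enlarge a
    training set; returns an array of shape (n_patches, patch, patch)."""
    h, w = img.shape
    return np.array([img[i:i + patch, j:j + patch]
                     for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)])

# A 256x256 image with 64x64 patches at stride 32 yields a 7x7 grid of crops.
patches = extract_patches(np.zeros((256, 256)), patch=64, stride=32)
print(patches.shape)  # (49, 64, 64)
```

At inference time the network's patch outputs would be stitched back (typically with blending in the overlap regions) to form the full-field image.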
119
Courtier AF, McDonnell M, Praeger M, Grant-Jacob JA, Codemard C, Harrison P, Mills B, Zervas M. Modelling of fibre laser cutting via deep learning. OPTICS EXPRESS 2021; 29:36487-36502. [PMID: 34809059 DOI: 10.1364/oe.432741] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 08/30/2021] [Indexed: 06/13/2023]
Abstract
Laser cutting is a materials processing technique used throughout academia and industry. However, defects such as striations can be formed while cutting, which can negatively affect the final quality of the cut. As the light-matter interactions that occur during laser machining are highly non-linear and difficult to model mathematically, there is interest in developing novel simulation methods for studying these interactions. Deep learning enables a data-driven approach to the modelling of complex systems. Here, we show that deep learning can be used to determine the scanning speed used for laser cutting, directly from microscope images of the cut surface. Furthermore, we demonstrate that a trained neural network can generate realistic predictions of the visual appearance of the laser cut surface, and hence can be used as a predictive visualisation tool.
120
Xiong W, Huang Z, Wang P, Wang X, He Y, Wang C, Liu J, Ye H, Fan D, Chen S. Optical diffractive deep neural network-based orbital angular momentum mode add-drop multiplexer. OPTICS EXPRESS 2021; 29:36936-36952. [PMID: 34809092 DOI: 10.1364/oe.441905] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Accepted: 10/14/2021] [Indexed: 06/13/2023]
Abstract
Vortex beams have application potential in multiplexing communication because of their orthogonal orbital angular momentum (OAM) modes. OAM add-drop multiplexing remains a challenge owing to the lack of mode selective coupling and separation technologies. We proposed an OAM add-drop multiplexer (OADM) using an optical diffractive deep neural network (ODNN). By exploiting the effective data-fitting capability of deep neural networks and the complex light-field manipulation ability of multilayer diffraction screens, we constructed a five-layer ODNN to manipulate the spatial location of vortex beams, which can selectively couple and separate OAM modes. Both the diffraction efficiency and mode purity exceeded 95% in simulations and four OAM channels carrying 16-quadrature-amplitude-modulation signals were successfully downloaded and uploaded with optical signal-to-noise ratio penalties of ∼1 dB at a bit error rate of 3.8 × 10-3. This method can break through the constraints of conventional OADM, such as single function and poor flexibility, which may create new opportunities for OAM multiplexing and all-optical interconnection.
121
Javidi B, Carnicer A, Anand A, Barbastathis G, Chen W, Ferraro P, Goodman JW, Horisaki R, Khare K, Kujawinska M, Leitgeb RA, Marquet P, Nomura T, Ozcan A, Park Y, Pedrini G, Picart P, Rosen J, Saavedra G, Shaked NT, Stern A, Tajahuerce E, Tian L, Wetzstein G, Yamaguchi M. Roadmap on digital holography [Invited]. OPTICS EXPRESS 2021; 29:35078-35118. [PMID: 34808951 DOI: 10.1364/oe.435915] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 09/04/2021] [Indexed: 05/22/2023]
Abstract
This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents the vision of its author in describing the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.
122
Wu H, Li Q, Meng X, Yang X, Liu S, Yin Y. Cryptographic analysis on an optical random-phase-encoding cryptosystem for complex targets based on physics-informed learning. OPTICS EXPRESS 2021; 29:33558-33571. [PMID: 34809166 DOI: 10.1364/oe.441293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 09/21/2021] [Indexed: 06/13/2023]
Abstract
Optical cryptanalysis based on deep learning (DL) has attracted increasing attention. However, most DL methods are purely data-driven and lack relevant physical priors, which restrains their generalization capabilities and limits practical applications. In this paper, we demonstrate that double-random phase encoding (DRPE)-based optical cryptosystems are susceptible to a preprocessing ciphertext-only attack (pCOA) based on DL strategies, which can achieve high prediction fidelity for complex targets by using only one random phase mask (RPM) for training. After preprocessing the ciphertext to procure substantial intrinsic information, a physics-informed DL method based on physical priors is exploited to further learn the statistical invariants across different ciphertexts. As a result, the generalization ability is significantly improved by increasing the number of training RPMs. This method also breaks the image-size limitation of the traditional COA method. Optical experiments demonstrate the feasibility and effectiveness of the proposed learning-based pCOA method.
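For reference, the DRPE cryptosystem under attack applies two random phase masks, one at the input plane and one at the Fourier plane. A minimal NumPy model of the encryption and keyed decryption (not the attack itself) is:

```python
import numpy as np

def drpe_encrypt(img: np.ndarray, rpm1: np.ndarray, rpm2: np.ndarray) -> np.ndarray:
    """Double-random-phase encoding: input-plane phase mask, Fourier
    transform, Fourier-plane phase mask, inverse transform."""
    field = img * np.exp(2j * np.pi * rpm1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * rpm2))

def drpe_decrypt(cipher: np.ndarray, rpm1: np.ndarray, rpm2: np.ndarray) -> np.ndarray:
    """Undo the two phase modulations in reverse order."""
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * rpm2))
    return np.abs(field * np.exp(-2j * np.pi * rpm1))

# With the correct RPM keys, decryption recovers the plaintext image.
rng = np.random.default_rng(3)
img = rng.random((32, 32))
rpm1, rpm2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, rpm1, rpm2)
recovered = drpe_decrypt(cipher, rpm1, rpm2)
print(np.max(np.abs(recovered - img)))  # ~0 with the correct keys
```

A ciphertext-only attack, by contrast, must recover `img` from `cipher` alone, which is what makes the learned physical priors in the paper necessary.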
123
Li J, Zhang Q, Zhong L, Lu X. Hybrid-net: a two-to-one deep learning framework for three-wavelength phase-shifting interferometry. OPTICS EXPRESS 2021; 29:34656-34670. [PMID: 34809250 DOI: 10.1364/oe.438444] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Accepted: 09/30/2021] [Indexed: 06/13/2023]
Abstract
In this paper, we propose a two-to-one deep learning (DL) framework for three-wavelength phase-shifting interferometry. The interferograms at two different wavelengths are used as the input of the proposed hybrid-net, and the interferogram of the third wavelength is used as the output. Exploiting the advantages of the hybrid learning network, the interferogram of the third wavelength can be obtained accurately, and three-wavelength phase-shifting interferometry is thereby realized. Compared with previous DL-based dual-wavelength interferometry (DWI), the proposed method can further improve the measurement range of the sample without changing the DWI system. In particular, for samples with independent steps, the problem of the limited measurement range is solved by the input of auxiliary information. More importantly, the third wavelength can be set freely according to the measurement requirements; it is no longer limited by the available lasers and can provide additional measuring scales for phase measurement. Both experimental results and simulation analysis demonstrate the feasibility of the proposed method and its performance in improving the measurement range.
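Each wavelength pair in multi-wavelength interferometry contributes a synthetic (beat) wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, which sets the unambiguous phase-measurement range; a freely chosen third wavelength therefore adds further measuring scales. The wavelengths below are illustrative, not the ones used in the paper.

```python
def synthetic_wavelength(lam1_nm: float, lam2_nm: float) -> float:
    """Beat wavelength of a wavelength pair: the unambiguous range of a
    phase measurement grows from a single wavelength to this value."""
    return lam1_nm * lam2_nm / abs(lam1_nm - lam2_nm)

# Three illustrative laser lines (nm) give three pairwise synthetic wavelengths.
lam1, lam2, lam3 = 633.0, 532.0, 450.0
pairs = {(a, b): synthetic_wavelength(a, b)
         for a, b in [(lam1, lam2), (lam1, lam3), (lam2, lam3)]}
for (a, b), beat in pairs.items():
    print(f"{a:.0f}/{b:.0f} nm -> synthetic wavelength {beat:.0f} nm")
```

The closer the two wavelengths, the longer the beat wavelength, which is why the freedom to place the third (virtual) wavelength matters for tall steps.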
124
Sun T. Light People: Professor Aydogan Ozcan. LIGHT, SCIENCE & APPLICATIONS 2021; 10:208. [PMID: 34611128 PMCID: PMC8491441 DOI: 10.1038/s41377-021-00643-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
In 2016, the news that Google's artificial intelligence (AI) program AlphaGo, based on the principle of deep learning, defeated Lee Sedol, the former world Go champion and famous 9th-dan player from Korea, caused a sensation in both the AI and Go communities and brought epoch-making significance to the development of deep learning. Deep learning is a complex machine learning algorithm that uses multiple layers of artificial neural networks to automatically analyze signals or data. At present, deep learning has penetrated our daily life, for example in applications such as face recognition and speech recognition. Scientists have also made many remarkable achievements based on deep learning. Professor Aydogan Ozcan of the University of California, Los Angeles (UCLA) led his team in researching deep learning algorithms, which provided new ideas for the exploration of optical computational imaging and sensing technology and introduced image generation and reconstruction methods that brought major technological innovations to related fields. Optical designs and devices are moving from being physically driven to being data-driven. We are much honored to have Aydogan Ozcan, Fellow of the National Academy of Inventors and Chancellor's Professor at UCLA, unscramble his latest scientific research results and his foresight for the future development of related fields, and share his journey in pursuing optics, his indissoluble relationship with Light: Science & Applications (LSA), and his experience in talent cultivation.
Affiliation(s)
- Tingting Sun: Light Publishing Group, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 3888 Dong Nan Hu Road, Changchun, 130033, China
125
Yoneda N, Kakei S, Komuro K, Onishi A, Saita Y, Nomura T. Single-shot higher-order transport-of-intensity quantitative phase imaging using deep learning. APPLIED OPTICS 2021; 60:8802-8808. [PMID: 34613106 DOI: 10.1364/ao.435538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 09/07/2021] [Indexed: 06/13/2023]
Abstract
Single-shot higher-order transport-of-intensity quantitative phase imaging (SHOT-QPI) has been proposed to realize simple, in-line, scanless, single-shot QPI. However, the light-use efficiency of SHOT-QPI is low because of the use of an amplitude-type computer-generated hologram (CGH). Although a phase-type CGH overcomes this problem, the accuracy of the measured phase is degraded owing to distortion of the defocused intensity distributions caused by the quantization error of the CGH. An alternative SHOT-QPI aided by deep learning, termed Deep-SHOT, is proposed to solve the nonlinear problem between the distorted intensities and the phase. In Deep-SHOT, a neural network learns the relationship between a series of distorted intensity distributions and the ground-truth phase distribution. Because the distortion of the intensity distributions is intrinsic to an optical system, the neural network is optimized for that system, and the proposed method improves the accuracy of the measured phase. The results of a proof-of-principle experiment indicate that the use of multiple defocused intensities also improves accuracy, even for this nonlinear problem.
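For orientation, the underlying transport-of-intensity equation, k ∂I/∂z = −∇·(I∇φ), reduces under uniform intensity to the Poisson equation ∇²φ = −(k/I₀) ∂I/∂z, which is solvable in closed form with FFTs. The sketch below is this textbook first-order solver, not the paper's CGH-based higher-order scheme; all parameters are invented for the self-consistency demo.

```python
import numpy as np

def tie_phase(dI_dz: np.ndarray, I0: float, wavelength: float, pixel: float) -> np.ndarray:
    """Uniform-intensity TIE solver: invert laplacian(phi) = -(k/I0) dI/dz
    with an FFT Poisson solver. The mean phase is unrecoverable."""
    k = 2 * np.pi / wavelength
    n = dI_dz.shape[0]
    f = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    q2 = (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)
    q2[0, 0] = 1.0                      # placeholder; DC term zeroed below
    phi_hat = np.fft.fft2((k / I0) * dI_dz) / q2
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))

# Self-consistency check with a synthetic, periodic, zero-mean phase.
n, pixel, wl, I0 = 64, 1e-6, 633e-9, 1.0
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phi = np.cos(2 * np.pi * 3 * X / n) * np.cos(2 * np.pi * 2 * Y / n)
f = np.fft.fftfreq(n, d=pixel)
FX, FY = np.meshgrid(f, f, indexing="ij")
lap = np.real(np.fft.ifft2(-(2 * np.pi) ** 2 * (FX ** 2 + FY ** 2) * np.fft.fft2(phi)))
dI_dz = -(I0 / (2 * np.pi / wl)) * lap  # forward TIE under uniform intensity
rec = tie_phase(dI_dz, I0, wl, pixel)
print(np.max(np.abs(rec - phi)))        # ~0 for this zero-mean test phase
```

In practice ∂I/∂z is estimated from defocused intensity measurements, which is exactly where the CGH distortions addressed by Deep-SHOT enter.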
126
Hameed BMZ, Prerepa G, Patil V, Shekhar P, Zahid Raza S, Karimi H, Paul R, Naik N, Modi S, Vigneswaran G, Prasad Rai B, Chłosta P, Somani BK. Engineering and clinical use of artificial intelligence (AI) with machine learning and data science advancements: radiology leading the way for future. Ther Adv Urol 2021; 13:17562872211044880. [PMID: 34567272 PMCID: PMC8458681 DOI: 10.1177/17562872211044880] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 08/21/2021] [Indexed: 12/29/2022] Open
Abstract
Over the years, many clinical and engineering methods have been adapted for testing and screening for the presence of diseases. The most commonly used methods for diagnosis and analysis are computed tomography (CT) and X-ray imaging. Manual interpretation of these images is the current gold standard but can be subject to human error, is tedious, and is time-consuming. To improve efficiency and productivity, incorporating machine learning (ML) and deep learning (DL) algorithms could expedite the process. This article aims to review the role of artificial intelligence (AI) and its contribution to data science as well as various learning algorithms in radiology. We will analyze and explore the potential applications in image interpretation and radiological advances for AI. Furthermore, we will discuss the usage, methodology implemented, future of these concepts in radiology, and their limitations and challenges.
Affiliation(s)
- B M Zeeshan Hameed: Department of Urology, Father Muller Medical College, Mangalore, Karnataka, India
- Gayathri Prerepa: Department of Electronics and Communication, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Vathsala Patil: Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Pranav Shekhar: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Syed Zahid Raza: Department of Urology, Dr. B.R. Ambedkar Medical College, Bengaluru, Karnataka, India
- Hadis Karimi: Manipal College of Pharmaceutical Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Rahul Paul: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nithesh Naik: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Sachin Modi: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Ganesh Vigneswaran: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Bhavan Prasad Rai: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Piotr Chłosta: Department of Urology, Jagiellonian University in Kraków, Kraków, Poland
- Bhaskar K Somani: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
127
Yang D, Zhang J, Tao Y, Lv W, Lu S, Chen H, Xu W, Shi Y. Dynamic coherent diffractive imaging with a physics-driven untrained learning method. OPTICS EXPRESS 2021; 29:31426-31442. [PMID: 34615235 DOI: 10.1364/oe.433507] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Accepted: 09/07/2021] [Indexed: 06/13/2023]
Abstract
Reconstruction of a complex field from one single diffraction measurement remains a challenging task among the community of coherent diffraction imaging (CDI). Conventional iterative algorithms are time-consuming and struggle to converge to a feasible solution because of the inherent ambiguities. Recently, deep-learning-based methods have shown considerable success in computational imaging, but they require large amounts of training data that in many cases are difficult to obtain. Here, we introduce a physics-driven untrained learning method, termed Deep CDI, which addresses the above problem and can image a dynamic process with high confidence and fast reconstruction. Without any labeled data for pretraining, the Deep CDI can reconstruct a complex-valued object from a single diffraction pattern by combining a conventional artificial neural network with a real-world physical imaging model. To our knowledge, we are the first to demonstrate that the support region constraint, which is widely used in the iteration-algorithm-based method, can be utilized for loss calculation. The loss calculated from support constraint and free propagation constraint are summed up to optimize the network's weights. As a proof of principle, numerical simulations and optical experiments on a static sample are carried out to demonstrate the feasibility of our method. We then continuously collect 3600 diffraction patterns and demonstrate that our method can predict the dynamic process with an average reconstruction speed of 228 frames per second (FPS) using only a fraction of the diffraction data to train the weights.
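The paper's untrained loss, computed from a support-region constraint plus a free-propagation constraint rather than from labeled data, can be caricatured in NumPy as below. The plain-Fourier far-field model and the equal weighting of the two terms are assumptions of this sketch.

```python
import numpy as np

def deep_cdi_style_loss(est_field: np.ndarray, measured_intensity: np.ndarray,
                        support: np.ndarray) -> float:
    """Sum of (i) far-field amplitude mismatch after free propagation
    (modeled here as a plain Fourier transform) and (ii) energy leaking
    outside the known object support."""
    farfield_amp = np.abs(np.fft.fft2(est_field))
    data_loss = np.mean((farfield_amp - np.sqrt(measured_intensity)) ** 2)
    support_loss = np.mean(np.abs(est_field * (1 - support)) ** 2)
    return float(data_loss + support_loss)

# At the true object (confined to its support) both terms vanish.
rng = np.random.default_rng(4)
support = np.zeros((32, 32))
support[8:24, 8:24] = 1.0
obj = support * np.exp(1j * rng.random((32, 32)))
intensity = np.abs(np.fft.fft2(obj)) ** 2
print(deep_cdi_style_loss(obj, intensity, support))  # ~0 at the true object
```

In the untrained setting this scalar is backpropagated through the network that generates `est_field`, so the network weights themselves play the role of the iterate in a classical phase-retrieval loop.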
128
Fu T, Zang Y, Huang H, Du Z, Hu C, Chen M, Yang S, Chen H. On-chip photonic diffractive optical neural network based on a spatial domain electromagnetic propagation model. OPTICS EXPRESS 2021; 29:31924-31940. [PMID: 34615274 DOI: 10.1364/oe.435183] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 09/07/2021] [Indexed: 06/13/2023]
Abstract
An integrated physical diffractive optical neural network (DONN) is proposed based on a standard silicon-on-insulator (SOI) substrate. This DONN has a compact structure and can realize machine learning functions in a fully passive, all-optical manner. The DONN structure is designed with a spatial-domain electromagnetic propagation model, and the approximation process of the neuron-value mapping is well optimized to guarantee consistency between the pre-trained neuron values and the SOI integrated implementation. This model better ensures the manufacturability and scale of the on-chip neural network, and can be used to guide the design and manufacture of the real chip. The performance of our DONN is numerically demonstrated on the prototypical machine learning task of predicting coronary heart disease from the UCI Heart Disease Dataset, and accuracy comparable to the state of the art is achieved.
|
129
|
Chen D, Wang Z, Chen K, Zeng Q, Wang L, Xu X, Liang J, Chen X. Classification of unlabeled cells using lensless digital holographic images and deep neural networks. Quant Imaging Med Surg 2021; 11:4137-4148. [PMID: 34476194 DOI: 10.21037/qims-21-16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 05/08/2021] [Indexed: 11/06/2022]
Abstract
Background: Image-based cell analysis methodologies offer a relatively simple and economical way to analyze and understand cell heterogeneities and development. Owing to advances in high-resolution image sensors and high-performance computation processors, the emerging lensless digital holography technique enables a simple and cost-effective approach to obtain label-free cell images with a large field of view and microscopic spatial resolution. Methods: The holograms of three cell types, MCF-10A, EC-109, and MDA-MB-231, were recorded using a lensless digital holography system composed of a laser diode, a sample stage, an image sensor, and a laptop computer. The amplitude images were reconstructed using the angular spectrum method, and the sample-to-sensor distance was determined using an autofocusing criterion based on the sparsity of image edges and corner points. Four convolutional neural networks (CNNs) were used to classify the cell types from the recovered holographic images. Results: Classification of two cell types and of three cell types achieved an accuracy higher than 91% with all the networks used. The ResNet and DenseNet models had similar classification accuracies of 95% or greater, outperforming the GoogLeNet and CNN-5 models. Conclusions: These experiments demonstrate that CNNs are effective at classifying two or three types of tumor cells. Lensless holography combined with machine learning holds great promise for stain-free cell imaging and classification, such as in cancer diagnosis and cancer biology research, where distinguishing normal cells from cancer cells and recognizing different cancer cell types will be greatly beneficial.
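The reconstruction pipeline described in the Methods, angular-spectrum back-propagation plus a sparsity-based autofocus search over candidate distances, can be sketched as follows. The function names are illustrative, and this sketch scores edge sparsity only, whereas the paper's criterion also uses corner points:

```python
import numpy as np

def backpropagate(hologram, wavelength, z, dx):
    """Angular-spectrum back-propagation of a recorded hologram by distance z."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(hologram) * np.exp(-1j * kz * z))

def autofocus_distance(hologram, wavelength, candidates, dx):
    """Pick the sample-to-sensor distance whose reconstruction has the sparsest edges."""
    def edge_l1(img):
        gy, gx = np.gradient(np.abs(img))
        return np.sum(np.hypot(gx, gy))  # L1 norm of the gradient magnitude
    scores = [edge_l1(backpropagate(hologram, wavelength, z, dx)) for z in candidates]
    return candidates[int(np.argmin(scores))]
```

The reconstructed amplitude image at the chosen distance is what would then be passed to the CNN classifiers.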
Affiliation(s)
- Duofang Chen: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Zhaohui Wang: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Kai Chen: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Qi Zeng: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Lin Wang: School of Computer Science, Xi'an Polytechnic University, Xi'an, China
- Xinyi Xu: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Jimin Liang: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Xueli Chen: Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
|
130
|
Schackart KE, Yoon JY. Machine Learning Enhances the Performance of Bioreceptor-Free Biosensors. SENSORS (BASEL, SWITZERLAND) 2021; 21:5519. [PMID: 34450960 PMCID: PMC8401027 DOI: 10.3390/s21165519] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 08/09/2021] [Accepted: 08/13/2021] [Indexed: 01/06/2023]
Abstract
Since their inception, biosensors have frequently employed simple regression models to calculate analyte composition from the magnitude of the biosensor's signal. Traditionally, bioreceptors provide the biosensor with excellent sensitivity and specificity. Increasingly, however, bioreceptor-free biosensors have been developed for a wide range of applications. Without a bioreceptor, maintaining strong specificity and a low limit of detection becomes the major challenge. Machine learning (ML) has been introduced to improve the performance of these biosensors, effectively replacing the bioreceptor with modeling to gain specificity. Here, we present how ML has been used to enhance the performance of these bioreceptor-free biosensors. In particular, we discuss how ML has been applied to imaging, electronic-nose (e-nose) and electronic-tongue (e-tongue), and surface-enhanced Raman spectroscopy (SERS) biosensors. Notably, principal component analysis (PCA) combined with support vector machines (SVM), as well as various artificial neural network (ANN) algorithms, has shown outstanding performance in a variety of tasks. We anticipate that ML will continue to improve the performance of bioreceptor-free biosensors, especially given the prospects of sharing trained models and of cloud computing for mobile computation. To facilitate this, the biosensing community would benefit from increased contributions to open-access data repositories for biosensor data.
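The PCA front end highlighted above can be sketched in a few lines via the singular value decomposition; the resulting low-dimensional scores are what would be fed to an SVM or ANN classifier. This is a generic illustration, not code from any reviewed paper:

```python
import numpy as np

def pca_features(X, n_components):
    """Project raw sensor responses onto their top principal components (via SVD)."""
    mu = X.mean(axis=0)
    xc = X - mu                                   # center each sensor channel
    _, s, vt = np.linalg.svd(xc, full_matrices=False)
    comps = vt[:n_components]                     # principal axes (rows)
    scores = xc @ comps.T                         # low-dimensional features for SVM/ANN
    explained = s[:n_components] ** 2 / np.sum(s ** 2)  # fraction of variance per component
    return scores, comps, explained
```

In the bioreceptor-free setting, this modeling step supplies the specificity that the missing bioreceptor would otherwise provide: the classifier separates analytes in the reduced feature space.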
Affiliation(s)
- Kenneth E. Schackart: Department of Biosystems Engineering, The University of Arizona, Tucson, AZ 85721, USA
- Jeong-Yeol Yoon: Department of Biosystems Engineering, The University of Arizona, Tucson, AZ 85721, USA; Department of Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA
|
131
|
Wang Y, Jiang F, Ju G, Xu B, An Q, Zhang C, Wang S, Xu S. Deep learning wavefront sensing for fine phasing of segmented mirrors. OPTICS EXPRESS 2021; 29:25960-25978. [PMID: 34614912 DOI: 10.1364/oe.434024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 07/10/2021] [Indexed: 06/13/2023]
Abstract
Segmented primary mirrors provide crucial advantages for the construction of extra-large space telescopes. The imaging quality of this class of telescope is susceptible to phasing errors between primary mirror segments. Deep learning has been widely applied in the field of optical imaging and wavefront sensing, including the phasing of segmented mirrors. Compared to other image-based phasing techniques, such as phase retrieval and phase diversity, deep learning has the advantages of high efficiency and freedom from stagnation problems. At present, however, deep learning methods are mainly applied to coarse phasing and used to estimate the piston error between segments. In this paper, a deep Bi-GRU neural network is introduced for fine phasing of segmented mirrors; it not only has a much simpler structure than a CNN or LSTM network but also effectively mitigates the vanishing-gradient problem that arises in training due to long-term dependencies. By incorporating phasing errors (piston and tip-tilt errors), some low-order aberrations, and other practical considerations, the Bi-GRU network can be used effectively for fine phasing of segmented mirrors. Simulations and real experiments demonstrate the accuracy and effectiveness of the proposed method.
|
132
|
Ryu D, Kim J, Lim D, Min HS, Yoo IY, Cho D, Park Y. Label-Free White Blood Cell Classification Using Refractive Index Tomography and Deep Learning. BME FRONTIERS 2021; 2021:9893804. [PMID: 37849908 PMCID: PMC10521749 DOI: 10.34133/2021/9893804] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 06/29/2021] [Indexed: 10/19/2023] Open
Abstract
Objective and Impact Statement: We propose a rapid and accurate blood cell identification method exploiting deep learning and label-free refractive index (RI) tomography. Our computational approach, which fully utilizes the tomographic information of bone marrow (BM) white blood cells (WBCs), enables us not only to classify the blood cells with deep learning but also to quantitatively study their morphological and biochemical properties for hematology research. Introduction: Conventional methods for examining blood cells, such as blood smear analysis by medical professionals and fluorescence-activated cell sorting, require significant time, cost, and domain knowledge that could affect test results. While label-free imaging techniques that use a specimen's intrinsic contrast (e.g., multiphoton and Raman microscopy) have been used to characterize blood cells, their imaging procedures and instrumentation are relatively time-consuming and complex. Methods: The RI tomograms of the BM WBCs are acquired via a Mach-Zehnder interferometer-based tomographic microscope and classified by a 3D convolutional neural network. We test our deep learning classifier on four types of bone marrow WBCs collected from healthy donors (n = 10): monocytes, myelocytes, B lymphocytes, and T lymphocytes. The quantitative parameters of the WBCs are obtained directly from the tomograms. Results: Our results show >99% accuracy for the binary classification of myeloids and lymphoids and >96% accuracy for the four-type classification of B and T lymphocytes, monocytes, and myelocytes. The feature-learning capability of our approach is visualized via an unsupervised dimension-reduction technique. Conclusion: We envision that the proposed cell classification framework can be easily integrated into existing blood cell investigation workflows, providing cost-effective and rapid diagnosis for hematologic malignancy.
Affiliation(s)
- DongHun Ryu: Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea; KAIST Institute for Health Science and Technology, KAIST, Daejeon 34141, Republic of Korea
- Jinho Kim: Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul 06355, Republic of Korea
- Daejin Lim: Department of Health and Safety Convergence Science, Korea University, Seoul 02841, Republic of Korea; Department of Laboratory Medicine and Genetics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- In Young Yoo: Department of Laboratory Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Duck Cho: Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul 06355, Republic of Korea; Department of Laboratory Medicine and Genetics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea; Stem Cell & Regenerative Medicine Institute, Samsung Medical Center, Seoul 06531, Republic of Korea
- YongKeun Park: KAIST Institute for Health Science and Technology, KAIST, Daejeon 34141, Republic of Korea; Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul 06355, Republic of Korea; Tomocube, Inc., Daejeon 34051, Republic of Korea
|
133
|
Zhang Y, Liu T, Singh M, Çetintaş E, Luo Y, Rivenson Y, Larin KV, Ozcan A. Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data. LIGHT, SCIENCE & APPLICATIONS 2021; 10:155. [PMID: 34326306 PMCID: PMC8322159 DOI: 10.1038/s41377-021-00594-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Revised: 07/02/2021] [Accepted: 07/06/2021] [Indexed: 05/13/2023]
Abstract
Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images from undersampled spectral data without any spatial aliasing artifacts. This neural-network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing the spatial aliasing artifacts caused by spectral undersampling while presenting a very good match to the images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3-fold undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2-fold spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improves the overall imaging performance using fewer spectral data points per A-line than either 2-fold or 3-fold uniform spectral undersampling. This deep-learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution or signal-to-noise ratio.
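The aliasing artifact the network is trained to remove can be reproduced numerically: an A-line is the inverse FFT of the spectral interferogram, and a reflector deeper than the Nyquist depth of a 2-fold undersampled spectrum folds back to a wrong depth. The point counts match the paper (1280 vs. 640 spectral points); the specific depth bin is an illustrative choice:

```python
import numpy as np

def aline(spectrum):
    """Reconstruct an A-line (depth profile) from a spectral interferogram via inverse FFT."""
    return np.abs(np.fft.ifft(spectrum))

n_full, depth_bin = 1280, 400
k = np.arange(n_full)
fringe = np.cos(2 * np.pi * depth_bin * k / n_full)  # ideal fringe of a reflector at depth bin 400

full = aline(fringe)        # peaks at bins 400 and 880 (the complex-conjugate mirror term)
under = aline(fringe[::2])  # 640 points per A-line: depth 400 exceeds the new Nyquist
                            # depth (320) and its peak folds back to bin 640 - 400 = 240
```

A network such as the one in the paper learns to map reconstructions like `under`, with folded peaks, back to alias-free depth profiles like `full`.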
Affiliation(s)
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA
- Tairan Liu: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA
- Manmohan Singh: Department of Biomedical Engineering, University of Houston, Houston, TX 77204, USA
- Ege Çetintaş: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA
- Yilin Luo: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA
- Yair Rivenson: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA
- Kirill V. Larin: Department of Biomedical Engineering, University of Houston, Houston, TX 77204, USA; Department of Molecular Physiology and Biophysics, Baylor College of Medicine, Houston, TX, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA; Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
|
134
|
Abstract
Computer holography is a technology that uses a mathematical model of optical holography to generate digital holograms. It has wide and promising applications in various areas, especially holographic display. However, traditional computational algorithms for generating phase-type holograms based on iterative optimization have a built-in trade-off between calculation speed and accuracy, which severely limits the performance of computational holograms in advanced applications. Recently, several deep-learning-based computational methods for generating holograms have gained increasing attention. In this paper, a convolutional neural network for the generation of multi-plane holograms and its training strategy are proposed, based on a multi-plane iterative angular spectrum algorithm (ASM). The well-trained network shows an excellent ability to generate phase-only holograms for multi-plane input images and to reconstruct the correct image in each corresponding depth plane. Numerical simulations and optical reconstructions show that the accuracy of this method is almost the same as that of traditional iterative methods, while the computation time decreases dramatically. The resulting images show high quality according to image performance indicators such as the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and contrast ratio. Finally, the effectiveness of the proposed method is verified through experimental investigation.
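The iterative ASM baseline the network is trained against can be sketched for a single plane as a Gerchberg-Saxton-style loop: alternate between the hologram plane (phase-only constraint) and the image plane (target-amplitude constraint). The multi-plane variant cycles through several depth planes per iteration; parameters and names below are illustrative:

```python
import numpy as np

def asm(field, wavelength, z, dx):
    """Angular-spectrum propagation over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def phase_only_hologram(target_amp, wavelength, z, dx, n_iter=50):
    """Iterative ASM (Gerchberg-Saxton-style) computation of a phase-only hologram
    that reconstructs target_amp at distance z."""
    rng = np.random.default_rng(0)
    field = target_amp * np.exp(1j * 2 * np.pi * rng.random(target_amp.shape))
    for _ in range(n_iter):
        holo = np.exp(1j * np.angle(asm(field, wavelength, -z, dx)))  # phase-only constraint
        field = asm(holo, wavelength, z, dx)
        field = target_amp * np.exp(1j * np.angle(field))             # amplitude constraint
    return np.angle(holo)
```

The trade-off the abstract describes is visible here: each iteration costs two FFT-based propagations, which is exactly the per-image cost a trained network amortizes into a single forward pass.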
|
135
|
Abstract
Digital holography is a very efficient technique for 3D imaging and for characterizing changes at the surfaces of objects. However, during holographic interferometry, the reconstructed phase images suffer from speckle noise. In this paper, we address the de-noising of phase images corrupted by speckle noise. To do so, DnCNN residual networks of different depths were built and trained with various noisy holographic phase data. The possibility of using a network pre-trained on natural images with Gaussian noise is also investigated. All models are evaluated in terms of phase error on the HOLODEEP benchmark data and on three unseen images corresponding to different experimental conditions. The best results are obtained using a network with only four convolutional blocks trained with a wide range of noisy phase patterns.
|
136
|
Nehme E, Ferdman B, Weiss LE, Naor T, Freedman D, Michaeli T, Shechtman Y. Learning Optimal Wavefront Shaping for Multi-Channel Imaging. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:2179-2192. [PMID: 34029185 DOI: 10.1109/tpami.2021.3076873] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Fast acquisition of depth information is crucial for accurate 3D tracking of moving objects. Snapshot depth sensing can be achieved by wavefront coding, in which the point-spread function (PSF) is engineered to vary distinctively with scene depth by altering the detection optics. In low-light applications, such as 3D localization microscopy, the prevailing approach is to condense signal photons into a single imaging channel with phase-only wavefront modulation to achieve a high pixel-wise signal to noise ratio. Here we show that this paradigm is generally suboptimal and can be significantly improved upon by employing multi-channel wavefront coding, even in low-light applications. We demonstrate our multi-channel optimization scheme on 3D localization microscopy in densely labelled live cells where detectability is limited by overlap of modulated PSFs. At extreme densities, we show that a split-signal system, with end-to-end learned phase masks, doubles the detection rate and reaches improved precision compared to the current state-of-the-art, single-channel design. We implement our method using a bifurcated optical system, experimentally validating our approach by snapshot volumetric imaging and 3D tracking of fluorescently labelled subcellular elements in dense environments.
|
137
|
Luo Y, Wu Y, Li L, Guo Y, Çetintaş E, Zhu Y, Ozcan A. Dynamic Imaging and Characterization of Volatile Aerosols in E-Cigarette Emissions Using Deep Learning-Based Holographic Microscopy. ACS Sens 2021; 6:2403-2410. [PMID: 34081429 DOI: 10.1021/acssensors.1c00628] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Various volatile aerosols have been associated with adverse health effects; however, characterization of these aerosols is challenging due to their dynamic nature. Here, we present a method that directly measures the volatility of particulate matter (PM) using computational microscopy and deep learning. This method was applied to aerosols generated by electronic cigarettes (e-cigs), which vaporize a liquid mixture (e-liquid) that mainly consists of propylene glycol (PG), vegetable glycerin (VG), nicotine, and flavoring compounds. E-cig-generated aerosols were recorded by a field-portable computational microscope, using an impaction-based air sampler. A lensless digital holographic microscope inside this mobile device continuously records the inline holograms of the collected particles. A deep learning-based algorithm is used to automatically reconstruct the microscopic images of e-cig-generated particles from their holograms and rapidly quantify their volatility. To evaluate the effects of e-liquid composition on aerosol dynamics, we measured the volatility of the particles generated by flavorless, nicotine-free e-liquids with various PG/VG volumetric ratios, revealing a negative correlation between the particles' volatility and the volumetric ratio of VG in the e-liquid. For a given PG/VG composition, the addition of nicotine dominated the evaporation dynamics of the e-cig aerosol and the aforementioned negative correlation was no longer observed. We also revealed that flavoring additives in e-liquids significantly decrease the volatility of e-cig aerosol. The presented holographic volatility measurement technique and the associated mobile device might provide new insights on the volatility of e-cig-generated particles and can be applied to characterize various volatile PM.
Affiliation(s)
- Yi Luo: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States; Bioengineering Department, University of California, Los Angeles, California 90095, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095, United States
- Yichen Wu: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States; Bioengineering Department, University of California, Los Angeles, California 90095, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095, United States
- Liqiao Li: Department of Environmental Health Sciences, University of California, Los Angeles, California 90095, United States
- Yuening Guo: Department of Environmental Health Sciences, University of California, Los Angeles, California 90095, United States
- Ege Çetintaş: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States; Bioengineering Department, University of California, Los Angeles, California 90095, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095, United States
- Yifang Zhu: Department of Environmental Health Sciences, University of California, Los Angeles, California 90095, United States
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, United States; Bioengineering Department, University of California, Los Angeles, California 90095, United States; California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095, United States; David Geffen School of Medicine, University of California, Los Angeles, California 90095, United States
|
138
|
Zhang Y, Andreas Noack M, Vagovic P, Fezzaa K, Garcia-Moreno F, Ritschel T, Villanueva-Perez P. PhaseGAN: a deep-learning phase-retrieval approach for unpaired datasets. OPTICS EXPRESS 2021; 29:19593-19604. [PMID: 34266067 DOI: 10.1364/oe.423222] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 05/27/2021] [Indexed: 06/13/2023]
Abstract
Phase-retrieval approaches based on deep learning (DL) provide a framework for obtaining phase information from an intensity hologram or diffraction pattern robustly and in real time. However, current DL architectures applied to the phase problem suffer from two limitations: i) they rely on paired datasets, i.e., they are only applicable when a satisfactory solution of the phase problem has already been found, and ii) most of them ignore the physics of the imaging process. Here, we present PhaseGAN, a new DL approach based on generative adversarial networks, which allows the use of unpaired datasets and includes the physics of image formation. The performance of our approach is enhanced by including the image-formation physics and a novel Fourier loss function, providing phase reconstructions where conventional phase-retrieval algorithms fail, such as in ultra-fast experiments. Thus, PhaseGAN offers the opportunity to address the phase problem in real time when no phase reconstructions are available, but only good simulations or data from other experiments.
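One reason frequency-domain terms suit unpaired or misregistered data can be seen with a minimal amplitude-spectrum loss. This is an illustrative variant, and it is an assumption that it captures the spirit rather than the exact definition of PhaseGAN's published Fourier loss:

```python
import numpy as np

def fourier_amp_loss(pred, target):
    """Illustrative Fourier-domain loss: mean L1 distance between the
    log-scaled amplitude spectra of prediction and target."""
    pa = np.log1p(np.abs(np.fft.fft2(pred)))
    ta = np.log1p(np.abs(np.fft.fft2(target)))
    return np.mean(np.abs(pa - ta))
```

Because the amplitude spectrum is invariant to circular translation, this loss scores a shifted copy of an image as a perfect match, so it compares frequency content rather than pixel-wise alignment, which is helpful when no registered ground-truth pair exists.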
|
139
|
He S, Pan X, Liu C, Zhu J. Further improvements to iterative off-axis digital holography. OPTICS EXPRESS 2021; 29:18831-18844. [PMID: 34154131 DOI: 10.1364/oe.425150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 05/26/2021] [Indexed: 06/13/2023]
Abstract
To break through the limitations of off-axis holography in measuring rough or strongly scattering objects, a new iterative algorithm based on the concept of wavefront coding is proposed. The reference wave is regarded as a wave modulator, and the algorithm starts from a random guess, independent of the result of traditional off-axis holography. The full frequency spectrum can be retrieved iteratively by taking full advantage of the space-bandwidth product of the detector. As a form of coherent diffractive imaging, its theoretical resolution is the diffraction limit. According to simulations and experiments with a random phase plate, the proposed algorithm works well even when the object cannot be reconstructed by traditional off-axis holography or other iterative off-axis holography algorithms because the frequency spectrum of the object is too wide. It could serve as a general algorithm to markedly improve the capability of off-axis holography to measure rough or strongly scattering objects.
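The conventional single-shot baseline the iterative method improves upon, demodulating the off-axis carrier and low-pass filtering to isolate the object term, can be sketched as below. The object, reference tilt, and filter radius are illustrative choices; a rough object whose spectrum exceeds the filter bandwidth is exactly where this baseline fails:

```python
import numpy as np

n = 128
yy, xx = np.mgrid[0:n, 0:n]
obj_phase = 0.3 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 200.0)
obj = np.exp(1j * obj_phase)                  # smooth pure-phase object
ref = np.exp(1j * 2 * np.pi * 0.25 * xx)      # tilted (off-axis) reference wave
holo = np.abs(obj + ref) ** 2                 # recorded off-axis hologram

# Demodulate with the known reference, then low-pass to isolate the object term:
# holo * conj(ref) = obj* (baseband) + cross terms shifted to the carrier frequencies.
demod = np.fft.fft2(holo * np.conj(ref))
fx = np.fft.fftfreq(n)
fxx, fyy = np.meshgrid(fx, fx)
lowpass = (fxx ** 2 + fyy ** 2) < 0.125 ** 2  # keep only the baseband
recovered = np.conj(np.fft.ifft2(demod * lowpass))  # baseband term is conj(obj)
```

For this smooth object the recovered phase matches well; for a rough object the spectrum spills past the filter, motivating the iterative retrieval of the full spectrum.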
|
140
|
Zhang Z, Oh Y, Adams SD, Bennet KE, Kouzani AZ. An FSCV Deep Neural Network: Development, Pruning, and Acceleration on an FPGA. IEEE J Biomed Health Inform 2021; 25:2248-2259. [PMID: 33175684 DOI: 10.1109/jbhi.2020.3037366] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique for measuring rapid changes in the extracellular concentration of neurotransmitters within the brain. Due to its fast scan rate and large output-data size, analysis of FSCV data is currently often conducted on a computer external to the FSCV device. Moreover, the analysis is semi-automated and requires a good understanding of the underlying chemistry to interpret, making it unsuitable for real-time implementation on low-resource FSCV devices. This paper presents a hardware-software co-design approach for the analysis of FSCV data. First, a deep neural network (DNN) is developed to predict the concentration of a dopamine solution and identify the recording electrode. Second, the DNN is pruned to decrease its computational complexity, and a custom overlay is developed to implement the pruned DNN on a low-resource FPGA-based platform. The pruned DNN attains a recognition accuracy of 97.2% with a compression ratio of 3.18. When the DNN overlay is implemented on a PYNQ-Z2 platform, it achieves an execution time of 13 ms and a power consumption of 1.479 W for the entire PYNQ-Z2 board. This study demonstrates the feasibility of running DNN-based FSCV data analysis on portable FPGA-based platforms.
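The pruning step can be illustrated with plain magnitude pruning of a weight matrix; this is a generic sketch of the technique, not the paper's pipeline, which would typically also fine-tune the pruned network to recover accuracy:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy(), 1.0
    threshold = np.partition(flat, k - 1)[k - 1]             # k-th smallest magnitude
    pruned = np.where(np.abs(weights) <= threshold, 0.0, weights)
    ratio = weights.size / max(np.count_nonzero(pruned), 1)  # compression ratio
    return pruned, ratio
```

On an FPGA, the zeroed weights can be skipped or stored in a sparse format, which is what turns the compression ratio into smaller memory footprint and shorter execution time.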
|
141
|
Niknam F, Qazvini H, Latifi H. Holographic optical field recovery using a regularized untrained deep decoder network. Sci Rep 2021; 11:10903. [PMID: 34035387 PMCID: PMC8149647 DOI: 10.1038/s41598-021-90312-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 05/10/2021] [Indexed: 02/04/2023] Open
Abstract
Image reconstruction from minimal measured information has been a long-standing open problem in many computational imaging approaches, in particular in-line holography. Many solutions are devised based on compressive sensing (CS) techniques with handcrafted image priors or on supervised deep neural networks (DNNs). However, the limited performance of CS methods, owing to their lack of information about the image priors, and the enormous per-sample-type training resources required by DNNs have posed new challenges beyond the primary problem. In this study, we propose a single-shot lensless in-line holographic reconstruction method using an untrained deep neural network incorporated with a physical image-formation algorithm. We demonstrate that by modifying a deep decoder network with simple regularizers, a Gabor hologram can be inversely reconstructed via a minimization process constrained by a deep image prior. The resulting model allows accurate recovery of the phase and amplitude images without any training dataset, excess measurements, or specific assumptions about the object's or the measurement's characteristics.
Affiliation(s)
- Farhad Niknam: Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983963113, Iran
- Hamed Qazvini: Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983963113, Iran
- Hamid Latifi: Department of Physics, Shahid Beheshti University, Tehran 1983963113, Iran
|
142
|
Shang R, Hoffer-Hawlik K, Wang F, Situ G, Luke GP. Two-step training deep learning framework for computational imaging without physics priors. OPTICS EXPRESS 2021; 29:15239-15254. [PMID: 33985227 PMCID: PMC8240457 DOI: 10.1364/oe.424165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/21/2021] [Accepted: 04/23/2021] [Indexed: 05/20/2023]
Abstract
Deep learning (DL) is a powerful tool in computational imaging for many applications. A common strategy is to use a preprocessor to reconstruct a preliminary image as the input to a neural network that produces an optimized image. Usually, the preprocessor incorporates knowledge of the physics priors in the imaging model. One outstanding challenge, however, is errors that arise from imperfections in the assumed model. Model mismatches degrade the quality of the preliminary image and therefore affect the DL predictions. Another main challenge is that many imaging inverse problems are ill-posed and the networks are over-parameterized; DL networks have the flexibility to extract features from the data that are not directly related to the imaging model, which can lead to suboptimal training and poorer image reconstruction results. To address these challenges, a two-step training DL (TST-DL) framework is proposed for computational imaging without physics priors. First, a single fully-connected layer (FCL) is trained to directly learn the inverse model, with the raw measurement data as the inputs and the images as the outputs. Then, this pre-trained FCL is fixed and concatenated with an untrained deep convolutional network with a U-Net architecture for a second-step training that optimizes the output image. This approach has the advantage that it does not rely on an accurate representation of the imaging physics, since the first-step training directly learns the inverse model. Furthermore, the TST-DL approach mitigates network over-parameterization by separately training the FCL and the U-Net. We demonstrate this framework using a linear single-pixel camera imaging model and quantitatively compare the results with those from other frameworks. The TST-DL approach is shown to perform comparably to approaches that incorporate perfect knowledge of the imaging model, to be robust to noise and model ill-posedness, and to be more robust to model mismatch than approaches that incorporate imperfect knowledge of the imaging model. Furthermore, TST-DL yields better results than end-to-end training while suffering from less overfitting. Overall, the TST-DL framework is a flexible approach for image reconstruction without physics priors, applicable to diverse computational imaging systems.
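The first training step can be made concrete with a toy linear model. A hedged numpy sketch (the dimensions, the simulated measurement matrix, and the closed-form least-squares solve standing in for gradient-based training are all illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear single-pixel model: y = A @ x, made well-posed here
# by taking more measurements than pixels.
n_pixels, n_meas, n_train = 64, 96, 2000
A = rng.standard_normal((n_meas, n_pixels))        # unknown to the network

X_train = rng.standard_normal((n_train, n_pixels)) # training images (flattened)
Y_train = X_train @ A.T                            # raw measurement data

# Step 1 of TST-DL: a single fully-connected layer learns the inverse model
# directly from (measurement, image) pairs -- solved here in closed form as
# the least-squares weights W minimizing ||Y_train @ W - X_train||^2.
W, *_ = np.linalg.lstsq(Y_train, X_train, rcond=None)

# The frozen FCL then maps any new raw measurement to a preliminary image;
# in the full framework a U-Net is trained on top of it in step 2.
x_true = rng.standard_normal(n_pixels)
x_hat = (A @ x_true) @ W
```

Because the toy model is linear and well-posed, the learned FCL recovers the image essentially exactly; the paper's point is that the same first step works without ever being told `A`.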
Collapse
Affiliation(s)
- Ruibo Shang
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
| | - Kevin Hoffer-Hawlik
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
| | - Fei Wang
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Guohai Situ
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
| | - Geoffrey P. Luke
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
| |
Collapse
|
143
|
Kim J, Go T, Lee SJ. Accurate real-time monitoring of high particulate matter concentration based on holographic speckles and deep learning. JOURNAL OF HAZARDOUS MATERIALS 2021; 409:124637. [PMID: 33309383 DOI: 10.1016/j.jhazmat.2020.124637] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 10/26/2020] [Accepted: 11/17/2020] [Indexed: 06/12/2023]
Abstract
Accurate real-time monitoring of particulate matter (PM) has emerged as a global issue due to the hazardous effects of PM on public health and industry. However, conventional PM monitoring techniques are usually cumbersome and require expensive equipment. In this study, Holo-SpeckleNet is proposed as a fast, accurate, and high-throughput PM concentration measurement technique based on deep-learning analysis of holographic speckle patterns. Speckle pattern datasets of PM over a wide range of concentrations were acquired using a digital in-line holographic microscopy system. Deep autoencoder and regression algorithms were trained on the captured speckle pattern datasets to measure PM concentration directly from speckle pattern images, without any air intake device or time-consuming post-processing of images. The proposed technique was applied to predict various PM concentrations on the test datasets, optimize hyperparameters, and compare its performance with a convolutional neural network (CNN) algorithm. As a result, high PM concentrations can be measured at air quality index values above 150, beyond which human exposure is unhealthy. In addition, the proposed technique exhibits higher measurement accuracy and less overfitting than the CNN, with a relative error of 7.46 ± 3.92%. It can be applied to the design of a compact air quality monitoring device for highly accurate, real-time measurement of PM concentrations in hazardous environments such as factories or construction sites.
Collapse
Affiliation(s)
- Jihwan Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang 37673, South Korea
| | - Taesik Go
- Division of Biomedical Engineering, College of Engineering, Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju-si, Jeollabuk-do 54896, South Korea
| | - Sang Joon Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang 37673, South Korea.
| |
Collapse
|
144
|
Ryu D, Ryu D, Baek Y, Cho H, Kim G, Kim YS, Lee Y, Kim Y, Ye JC, Min HS, Park Y. DeepRegularizer: Rapid Resolution Enhancement of Tomographic Imaging Using Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1508-1518. [PMID: 33566760 DOI: 10.1109/tmi.2021.3058373] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Optical diffraction tomography measures the three-dimensional refractive index map of a specimen and visualizes biochemical phenomena at the nanoscale in a non-destructive manner. One major drawback of optical diffraction tomography is poor axial resolution due to limited access to the three-dimensional optical transfer function. This missing cone problem has been addressed through regularization algorithms that use a priori information, such as non-negativity and sample smoothness. However, the iterative nature of these algorithms and their parameter dependency make real-time visualization impossible. In this article, we propose and experimentally demonstrate a deep neural network, which we term DeepRegularizer, that rapidly improves the resolution of a three-dimensional refractive index map. Trained with pairs of datasets (a raw refractive index tomogram and a resolution-enhanced refractive index tomogram via the iterative total variation algorithm), the three-dimensional U-net-based convolutional neural network learns a transformation between the two tomogram domains. The feasibility and generalizability of our network are demonstrated using bacterial cells and a human leukaemic cell line, and by validating the model across different samples. DeepRegularizer offers more than an order of magnitude faster regularization performance compared to the conventional iterative method. We envision that the proposed data-driven approach can bypass the high time complexity of various image reconstructions in other imaging modalities.
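The iterative total-variation regularizer that generates the network's training targets amounts to repeated gradient-type updates. A rough 2D numpy sketch of one smoothed-TV denoising step (boundary handling, step size, and the 2D setting are simplified assumptions, not the paper's 3D implementation):

```python
import numpy as np

def tv_step(x, y, lam=0.1, step=0.2, eps=1e-8):
    """One gradient step of min_x 0.5*||x - y||^2 + lam * smoothed-TV(x).

    Iterating this update yields the kind of TV-regularized targets that
    DeepRegularizer is trained to reproduce in a single forward pass.
    """
    # Forward differences (image gradient), zero at the far boundary
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    # Discrete divergence of the normalized gradient field
    div = (np.diff(gx / mag, axis=0, prepend=np.zeros((1, x.shape[1])))
           + np.diff(gy / mag, axis=1, prepend=np.zeros((x.shape[0], 1))))
    return x - step * ((x - y) - lam * div)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                      # piecewise-constant phantom
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
x = noisy.copy()
for _ in range(100):                          # iterative regularization
    x = tv_step(x, noisy)
```

The iteration count and parameter tuning are exactly the per-sample cost that the trained network sidesteps.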
Collapse
|
145
|
Chen ZY, Wei Z, Chen R, Dong JW. Focus shaping of high numerical aperture lens using physics-assisted artificial neural networks. OPTICS EXPRESS 2021; 29:13011-13024. [PMID: 33985046 DOI: 10.1364/oe.421354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 04/02/2021] [Indexed: 06/12/2023]
Abstract
We present a physics-assisted artificial neural network (PhyANN) scheme to efficiently achieve focus shaping of a high-numerical-aperture lens using a diffractive optical element (DOE) divided into a series of annular regions with fixed widths. Unlike a conventional ANN, the PhyANN does not require training with labeled data; instead, it outputs the transmission coefficients of each annular region of the DOE by fitting the network weights to minimize a delicately designed loss function in terms of focus profiles. Several focus shapes, including a sub-diffraction spot, a flattop spot, an optical needle, and a multi-focus region, are successfully obtained. For instance, we achieve an optical needle with a 10λ depth of focus, 0.41λ lateral resolution beyond the diffraction limit, and high flatness with almost the same intensity distribution along the needle. Compared to a typical particle swarm optimization algorithm, the PhyANN has an advantage in DOE designs that generate three-dimensional focus profiles. The hyperparameters of the proposed PhyANN scheme are also discussed. The obtained results are expected to benefit various applications, including super-resolution imaging, optical trapping, and optical lithography.
Collapse
|
146
|
Kang I, Goy A, Barbastathis G. Dynamical machine learning volumetric reconstruction of objects' interiors from limited angular views. LIGHT, SCIENCE & APPLICATIONS 2021; 10:74. [PMID: 33828073 PMCID: PMC8027224 DOI: 10.1038/s41377-021-00512-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 03/04/2021] [Accepted: 03/13/2021] [Indexed: 05/26/2023]
Abstract
Limited-angle tomography of an interior volume is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the conditioning of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g., layered and Manhattan, is by a static neural network [Goy et al. Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach in which the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in the angle of illumination plays the role of discrete time in the dynamical-system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit for regularizing the reconstructions. We devised a Recurrent Neural Network (RNN) architecture with a novel Separable-Convolution Gated Recurrent Unit (SC-GRU) as the fundamental building block. Through a comprehensive comparison of several quantitative metrics, we show that the dynamic method is suitable for generic interior-volumetric reconstruction under a limited-angle scheme. We show that this approach accurately reconstructs volume interiors under two conditions: weak scattering, when the Radon transform approximation is applicable and the forward operator is well defined; and strong scattering, which is nonlinear with respect to the 3D refractive index distribution and includes uncertainty in the forward operator.
Collapse
Affiliation(s)
- Iksung Kang
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA.
| | - Alexandre Goy
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Omnisens SA, Morges, 1110, Switzerland
| | - George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 Create Way, Singapore, 117543, Singapore
| |
Collapse
|
147
|
Abstract
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity with deep learning. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
Collapse
|
148
|
Zhang Y. An unsupervised 2D-3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation. Phys Med Biol 2021; 66. [PMID: 33631734 DOI: 10.1088/1361-6560/abe9f6] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 02/25/2021] [Indexed: 12/25/2022]
Abstract
Acquiring CBCTs from a limited scan angle can help to reduce the imaging time, save imaging dose, and allow continuous target localization through arc-based treatments with high temporal resolution. However, insufficient scan-angle sampling leads to severe distortions and artifacts in the reconstructed CBCT images, limiting their clinical applicability. 2D-3D deformable registration can map a prior fully-sampled CT/CBCT volume to estimate a new CBCT from limited-angle on-board cone-beam projections. The CBCT images estimated by 2D-3D deformable registration successfully suppress the distortions and artifacts and reflect up-to-date patient anatomy. However, the traditional iterative 2D-3D deformable registration algorithm is computationally expensive and time-consuming, taking hours to generate a high-quality deformation vector field (DVF) and the corresponding CBCT. In this work, we developed an unsupervised, end-to-end 2D-3D deformable registration framework using convolutional neural networks (2D3D-RegNet) to address the speed bottleneck of the conventional iterative algorithm. The 2D3D-RegNet was able to solve the DVFs within 5 seconds for 90 orthogonally-arranged projections covering a combined 90° scan angle, with DVF accuracy superior to 3D-3D deformable registration and on par with the conventional 2D-3D deformable registration algorithm. We also performed a preliminary robustness analysis of 2D3D-RegNet with respect to variations in projection angular sampling frequency, as well as scan-angle offsets. The synergy of 2D3D-RegNet with biomechanical modeling was also evaluated, demonstrating that 2D3D-RegNet can function as a fast DVF solution core for further DVF refinement.
Collapse
Affiliation(s)
- You Zhang
- Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America
| |
Collapse
|
149
|
Huang L, Chen H, Luo Y, Rivenson Y, Ozcan A. Recurrent neural network-based volumetric fluorescence microscopy. LIGHT, SCIENCE & APPLICATIONS 2021; 10:62. [PMID: 33753716 PMCID: PMC7985192 DOI: 10.1038/s41377-021-00506-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 02/24/2021] [Accepted: 03/02/2021] [Indexed: 05/12/2023]
Abstract
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields, including the physical, medical, and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network to 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images covering various axial permutations and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input and matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
Collapse
Affiliation(s)
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Hanlong Chen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
| | - Yilin Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
| | - Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
- David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA.
| |
Collapse
|
150
|
Zhang Y, Liu Y, Jiang S, Dixit K, Song P, Zhang X, Ji X, Li X. Neural network model assisted Fourier ptychography with Zernike aberration recovery and total variation constraint. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200392R. [PMID: 33768741 PMCID: PMC8330837 DOI: 10.1117/1.jbo.26.3.036502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 03/09/2021] [Indexed: 05/31/2023]
Abstract
SIGNIFICANCE Fourier ptychography (FP) is a computational imaging approach that achieves high-resolution reconstruction. Inspired by neural networks, many deep-learning-based methods have been proposed to solve FP problems. However, the performance of FP still suffers from optical aberration, which needs to be considered. AIM We present a neural network model for FP reconstruction that can properly estimate the aberration and achieve artifact-free reconstruction. APPROACH Inspired by the iterative reconstruction of FP, we design a neural network model that mimics the forward imaging process of FP via TensorFlow. The sample and the aberration are treated as learnable weights and optimized through back-propagation. In particular, we employ Zernike terms instead of a raw aberration map to reduce the degrees of freedom of pupil recovery and perform a high-accuracy estimation. Owing to the auto-differentiation capabilities of the neural network, we additionally utilize total variation regularization to improve the visual quality. RESULTS We validate the performance of the reported method via both simulation and experiment. Our method exhibits higher robustness against sophisticated optical aberrations and achieves better image quality by reducing artifacts. CONCLUSIONS The forward neural network model can jointly recover the high-resolution sample and the optical aberration in iterative FP reconstruction. We hope our method can provide a neural-network perspective on solving iterative coherent or incoherent imaging problems.
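As a concrete anchor for the forward imaging process that the network mimics, here is a minimal numpy sketch of the FP forward model for one illumination angle (array sizes, the pupil radius, and the spectrum shifts are illustrative assumptions, not values from the paper):

```python
import numpy as np

def fp_forward(sample, pupil, shift):
    """Fourier ptychography forward model for one illumination angle:
    shift the sample spectrum, crop it with the (possibly aberrated) pupil,
    and record the low-resolution intensity at the camera. In a model like
    the paper's, the sample and a Zernike-parameterized pupil would be the
    learnable weights optimized through this same forward pass.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(sample))
    n, m = pupil.shape
    cy, cx = np.array(spectrum.shape) // 2 + np.asarray(shift)
    crop = spectrum[cy - n // 2:cy + n // 2, cx - m // 2:cx + m // 2]
    low_res_field = np.fft.ifft2(np.fft.ifftshift(crop * pupil))
    return np.abs(low_res_field) ** 2

rng = np.random.default_rng(0)
sample = rng.random((256, 256)) * np.exp(1j * rng.random((256, 256)))
yy, xx = np.mgrid[-32:32, -32:32]
pupil = (xx ** 2 + yy ** 2 <= 28 ** 2).astype(float)   # ideal circular pupil
img_center = fp_forward(sample, pupil, (0, 0))          # on-axis LED
img_offset = fp_forward(sample, pupil, (12, -9))        # oblique LED
```

Each LED position samples a different crop of the spectrum, which is what lets the reconstruction recover resolution beyond the pupil cutoff.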
Collapse
Affiliation(s)
- Yongbing Zhang
- Tsinghua University, Shenzhen International Graduate School, Department of Automation, Shenzhen, China
- Harbin Institute of Technology (Shenzhen), School of Computer of Science and Technology, Shenzhen, China
| | - Yangzhe Liu
- Tsinghua University, Shenzhen International Graduate School, Department of Automation, Shenzhen, China
| | - Shaowei Jiang
- University of Connecticut, Department of Biomedical Engineering, Storrs, Connecticut, United States
| | - Krishna Dixit
- University of Connecticut, Department of Biomedical Engineering, Storrs, Connecticut, United States
| | - Pengming Song
- University of Connecticut, Department of Electrical and Computer Engineering, Storrs, Connecticut, United States
| | - Xinfeng Zhang
- University of the Chinese Academy of Sciences, School of Computer Science and Technology, Beijing, China
| | - Xiangyang Ji
- Tsinghua University, Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Beijing, China
| | - Xiu Li
- Tsinghua University, Shenzhen International Graduate School, Department of Automation, Shenzhen, China
| |
Collapse
|