251
He T, Hu J, Huang H. Hybrid high-order nonlocal gradient sparsity regularization for Poisson image deconvolution. Appl Opt 2018; 57:10243-10256. [PMID: 30645225] [DOI: 10.1364/ao.57.010243] [Received: 08/29/2018] [Accepted: 11/03/2018]
Abstract
Images obtained by photon-counting sensors are always contaminated with Poisson noise. Total variation (TV) has been extensively researched in image deconvolution because of its remarkable ability to preserve details. However, TV assumes that the global image gradient obeys a Laplacian distribution, and it can hardly preserve the local information of each part of the image. We extended the global TV to nonlocal modeling and established an intensity-adaptive nonlocal regularization based on similar blocks. Meanwhile, to suppress the staircase effect caused by first-order regularization, we proposed a new hybrid nonlocal regularization that models the sparsity of the high-order derivative. An efficient alternating direction method of multipliers (ADMM) algorithm was employed to solve the proposed model, and an adaptive selection strategy for the regularization parameters was further studied and analyzed. The experimental results show that the proposed hybrid high-order nonlocal gradient sparsity regularization model achieves a substantial reduction in computation time compared with another nonlocal restoration algorithm while producing a comparably clear recovered image.
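The ADMM splitting used for gradient-sparsity models hinges on a closed-form proximal (shrinkage) step for the l1 term. As an illustration only, not the authors' code, here is a minimal NumPy sketch of the soft-threshold operator and the first-order forward-difference gradient that TV-type regularizers penalize:

```python
import numpy as np

def soft_threshold(v, tau):
    # Element-wise shrinkage: the proximal operator of tau*||.||_1,
    # which is the closed-form ADMM update for an l1 sparsity term.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_diff(u):
    # First-order forward differences with periodic boundaries:
    # the discrete gradient that TV-type regularizers penalize.
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    return dx, dy

u = np.array([[1.0, 2.0], [3.0, 4.0]])
dx, dy = forward_diff(u)
shrunk = soft_threshold(np.array([3.0, -0.5, -2.0]), 1.0)
```

A full solver would alternate this shrinkage step with a Poisson data-fidelity update and a dual-variable update; the high-order variant applies the same shrinkage to second-order differences.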
252
Närhi M, Salmela L, Toivonen J, Billet C, Dudley JM, Genty G. Machine learning analysis of extreme events in optical fibre modulation instability. Nat Commun 2018; 9:4923. [PMID: 30467348] [PMCID: PMC6250684] [DOI: 10.1038/s41467-018-07355-y] [Received: 05/28/2018] [Accepted: 10/30/2018]
Abstract
A central research area in nonlinear science is the study of instabilities that drive extreme events. Unfortunately, techniques for measuring such phenomena often provide only partial characterisation. For example, real-time studies of instabilities in nonlinear optics frequently use only spectral data, limiting knowledge of associated temporal properties. Here, we show how machine learning can overcome this restriction to study time-domain properties of optical fibre modulation instability based only on spectral intensity measurements. Specifically, a supervised neural network is trained to correlate the spectral and temporal properties of modulation instability using simulations, and then applied to analyse high dynamic range experimental spectra to yield the probability distribution for the highest temporal peaks in the instability field. We also use unsupervised learning to classify noisy modulation instability spectra into subsets associated with distinct temporal dynamic structures. These results open novel perspectives in all systems exhibiting instability where direct time-domain observations are difficult.
Affiliation(s)
- Mikko Närhi
- Tampere University of Technology, Laboratory of Photonics, FI-33101, Tampere, Finland
- Lauri Salmela
- Tampere University of Technology, Laboratory of Photonics, FI-33101, Tampere, Finland
- Juha Toivonen
- Tampere University of Technology, Laboratory of Photonics, FI-33101, Tampere, Finland
- Cyril Billet
- Institut FEMTO-ST, Université Bourgogne Franche-Comté, CNRS UMR 6174, 25000, Besançon, France
- John M Dudley
- Institut FEMTO-ST, Université Bourgogne Franche-Comté, CNRS UMR 6174, 25000, Besançon, France
- Goëry Genty
- Tampere University of Technology, Laboratory of Photonics, FI-33101, Tampere, Finland
253
Kim SJ, Wang C, Zhao B, Im H, Min J, Choi HJ, Tadros J, Choi NR, Castro CM, Weissleder R, Lee H, Lee K. Deep transfer learning-based hologram classification for molecular diagnostics. Sci Rep 2018; 8:17003. [PMID: 30451953] [PMCID: PMC6242900] [DOI: 10.1038/s41598-018-35274-x] [Received: 03/27/2018] [Accepted: 11/02/2018]
Abstract
Lens-free digital in-line holography (LDIH) is a promising microscopic tool that overcomes several drawbacks (e.g., limited field of view) of traditional lens-based microscopy. However, extensive computation is required to reconstruct object images from the complex diffraction patterns produced by LDIH, which limits its utility for point-of-care applications, particularly in resource-limited settings. We describe a deep transfer learning (DTL) based approach to process LDIH images in the context of cellular analyses. Specifically, we captured holograms of cells labeled with molecular-specific microbeads and trained neural networks to classify these holograms without reconstruction. Using raw holograms as input, the trained networks were able to classify individual cells according to the number of cell-bound microbeads. The DTL-based approach, built on a pretrained VGG19 network, showed robust performance with experimental data. Combined with the developed DTL approach, LDIH could be realized as a low-cost, portable tool for point-of-care diagnostics.
Affiliation(s)
- Sung-Jin Kim
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Chuangqi Wang
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Bing Zhao
- Department of Computer Science, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Hyungsoon Im
- Center for Systems Biology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Jouha Min
- Center for Systems Biology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Hee June Choi
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Joseph Tadros
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Nu Ri Choi
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Cesar M Castro
- Center for Systems Biology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Ralph Weissleder
- Center for Systems Biology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Systems Biology, Harvard Medical School, Boston, Massachusetts, USA
- Hakho Lee
- Center for Systems Biology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Kwonmoo Lee
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
254
Zhang X, Chen Y, Ning K, Zhou C, Han Y, Gong H, Yuan J. Deep learning optical-sectioning method. Opt Express 2018; 26:30762-30772. [PMID: 30469968] [DOI: 10.1364/oe.26.030762]
Abstract
Current optical-sectioning methods require a complex optical system or considerable computation time to improve imaging quality. Here we propose a deep learning-based method for optical sectioning of wide-field images. This method needs only one pair of contrast images for training to facilitate reconstruction of an optically sectioned image. The background-removal effect and resolution achievable with our technique are similar to those of traditional optical-sectioning methods, but with lower noise levels and a higher imaging depth. Moreover, the reconstruction speed can be optimized to 14 Hz. This cost-effective and convenient method enables high-throughput optical-sectioning techniques to be developed.
255
Cherukara MJ, Nashed YSG, Harder RJ. Real-time coherent diffraction inversion using deep generative networks. Sci Rep 2018; 8:16520. [PMID: 30410034] [PMCID: PMC6224523] [DOI: 10.1038/s41598-018-34525-1] [Received: 06/12/2018] [Accepted: 10/19/2018]
Abstract
Phase retrieval, the process of recovering phase information in reciprocal space so that images can be reconstructed from measured intensity alone, is the basis of a variety of imaging applications, including coherent diffraction imaging (CDI). Typical phase retrieval algorithms are iterative in nature and hence time-consuming and computationally expensive, making real-time imaging a challenge. Furthermore, iterative phase retrieval algorithms struggle to converge to the correct solution, especially in the presence of strong phase structures. In this work, we demonstrate the training and testing of CDI NN, a pair of deep deconvolutional networks trained to predict the real-space structure and phase of a 2D object from its far-field diffraction intensities alone. Once trained, CDI NN can invert a diffraction pattern to an image within a few milliseconds of compute time on a standard desktop machine, opening the door to real-time imaging.
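For context, the forward model that such networks invert maps an object to the squared magnitude of its Fourier transform, discarding the phase. A minimal sketch under a simple 2D scalar model (the function name is illustrative, not from the paper):

```python
import numpy as np

def farfield_intensity(obj):
    # Far-field diffraction pattern of a complex 2D object: the squared
    # magnitude of its Fourier transform (all phase information is lost).
    return np.abs(np.fft.fftshift(np.fft.fft2(obj))) ** 2

# One (input, target) training pair for an inversion network
rng = np.random.default_rng(0)
obj = rng.random((32, 32)) * np.exp(1j * rng.random((32, 32)))
pattern = farfield_intensity(obj)
```

Training pairs generated this way are what make the one-pass network inversion possible: the iterative search over phases is replaced by a learned mapping from `pattern` back to `obj`.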
Affiliation(s)
- Mathew J Cherukara
- Advanced Photon Source, Argonne National Laboratory, Lemont, IL, 60439, USA
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL, 60439, USA
- Youssef S G Nashed
- Mathematics and Computer Science, Argonne National Laboratory, Lemont, IL, 60439, USA
- Ross J Harder
- Advanced Photon Source, Argonne National Laboratory, Lemont, IL, 60439, USA
256
Li S, Barbastathis G. Spectral pre-modulation of training examples enhances the spatial resolution of the phase extraction neural network (PhENN). Opt Express 2018; 26:29340-29352. [PMID: 30470099] [DOI: 10.1364/oe.26.029340] [Received: 08/31/2018] [Accepted: 09/21/2018]
Abstract
The phase extraction neural network (PhENN) [Optica 4, 1117 (2017)] is a computational architecture, based on deep machine learning, for lens-less quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained on examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a test raw intensity pattern, PhENN is capable of reconstructing the original phase object robustly, in many cases even for objects outside the database from which the training examples were drawn. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN's spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN's ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose "flattening" the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that the spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
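The "flattening" operation described above amounts to whitening the power spectral density: keeping each training image's Fourier phase while normalizing every spectral magnitude. A minimal NumPy sketch of this idea (the `eps` regularizer is an assumption, not the paper's exact recipe):

```python
import numpy as np

def flatten_spectrum(img, eps=1e-8):
    # Keep each Fourier coefficient's phase but normalize its magnitude,
    # whitening the power spectral density of the training example.
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F / (np.abs(F) + eps)))

rng = np.random.default_rng(1)
img = rng.random((16, 16))
flat = flatten_spectrum(img)
```

After this pre-modulation, high spatial frequencies carry as much energy as low ones, so the network sees fine features during training instead of the 1/f-dominated statistics of natural scenes.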
257
Grant-Jacob JA, Mackay BS, Baker JAG, Heath DJ, Xie Y, Loxham M, Eason RW, Mills B. Real-time particle pollution sensing using machine learning. Opt Express 2018; 26:27237-27246. [PMID: 30469796] [DOI: 10.1364/oe.26.027237] [Received: 08/01/2018] [Accepted: 09/11/2018]
Abstract
Particle pollution is a global health challenge that is linked to around three million premature deaths per year. There is therefore great interest in the development of sensors capable of precisely quantifying both the number and type of particles. Here, we demonstrate an approach that leverages machine learning in order to identify particulates directly from their scattering patterns. We show the capability for producing a 2D sample map of spherical particles present on a coverslip, and also demonstrate real-time identification of a range of particles including those from diesel combustion.
258
Rahmani B, Loterie D, Konstantinou G, Psaltis D, Moser C. Multimode optical fiber transmission with a deep learning network. Light Sci Appl 2018; 7:69. [PMID: 30302240] [PMCID: PMC6168552] [DOI: 10.1038/s41377-018-0074-1] [Received: 07/06/2018] [Revised: 09/04/2018] [Accepted: 09/11/2018]
Abstract
Multimode fibers (MMFs) are an example of a highly scattering medium: they scramble the coherent light propagating within them into seemingly random patterns. Thus, for applications such as imaging and image projection through an MMF, careful measurements of the relationship between the inputs and outputs of the fiber are required. We show, as a proof of concept, that a deep neural network can learn the input-output relationship of a 0.75-m-long MMF. Specifically, we demonstrate that a deep convolutional neural network (CNN) can learn the nonlinear relationship between the amplitude of the speckle pattern (phase information lost) obtained at the output of the fiber and the phase or the amplitude at the input of the fiber. Effectively, the network performs a nonlinear inversion task. We obtained image fidelities (correlations) as high as ~98% for reconstruction and ~94% for image projection in the MMF, compared with the images recovered using full knowledge of the system transmission characterized by the measured complex matrix. We further show that the network is capable of transfer learning, i.e., it can transmit images through the MMF that belong to a class not used for training/testing.
Affiliation(s)
- Babak Rahmani
- Ecole Polytechnique Fédérale de Lausanne, Laboratory of Applied Photonics Devices, CH-1015 Lausanne, Switzerland
- Damien Loterie
- Ecole Polytechnique Fédérale de Lausanne, Laboratory of Applied Photonics Devices, CH-1015 Lausanne, Switzerland
- Georgia Konstantinou
- Ecole Polytechnique Fédérale de Lausanne, Laboratory of Applied Photonics Devices, CH-1015 Lausanne, Switzerland
- Demetri Psaltis
- Ecole Polytechnique Fédérale de Lausanne, Laboratory of Optics, CH-1015 Lausanne, Switzerland
- Christophe Moser
- Ecole Polytechnique Fédérale de Lausanne, Laboratory of Applied Photonics Devices, CH-1015 Lausanne, Switzerland
259
Nguyen T, Xue Y, Li Y, Tian L, Nehmetallah G. Deep learning approach for Fourier ptychography microscopy. Opt Express 2018; 26:26470-26484. [PMID: 30469733] [DOI: 10.1364/oe.26.026470]
Abstract
Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
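A mixed image-domain plus Fourier-domain objective of the kind described above can be sketched as follows; the L1 form and the weight `alpha` are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np

def mixed_loss(pred, target, alpha=0.1):
    # Image-domain L1 error plus a weighted Fourier-domain L1 error,
    # emphasizing fidelity of the high-frequency content.
    image_term = np.mean(np.abs(pred - target))
    fourier_term = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return image_term + alpha * fourier_term
```

The Fourier term penalizes spectral mismatch directly, which is why a weighted combination of the two terms recovers fine detail better than a pure image-domain loss.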
260
Li D, Xu S, Qi X, Wang D, Cao X. Variable step size adaptive cuckoo search optimization algorithm for phase diversity. Appl Opt 2018; 57:8212-8219. [PMID: 30461770] [DOI: 10.1364/ao.57.008212] [Received: 08/09/2018] [Accepted: 08/31/2018]
Abstract
Phase diversity (PD) ultimately reduces to a large-scale nonlinear numerical optimization problem, so the choice of optimizer directly determines the accuracy and speed of the solution. In this paper, we introduce the cuckoo search optimization algorithm, which has the advantages of a simple model, few parameters, and easy implementation, to the phase diversity problem. By improving the step-size control factor of the original cuckoo search algorithm, we obtain faster optimization for PD. In simulation experiments, we further show, with a simple theoretical explanation, that for large-scale wavefront sensing the improved algorithm achieves higher accuracy and faster convergence than the traditional particle swarm algorithm. Finally, we built a simple experimental system and demonstrated the effectiveness of the improved cuckoo search algorithm for PD.
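To illustrate the flavor of the optimizer, here is a toy cuckoo-search loop with a step-size factor that decays over iterations, standing in for the paper's variable-step-size control; the decay schedule, heavy-tailed flight, and abandonment fraction are illustrative choices, not the authors' exact algorithm:

```python
import numpy as np

def cuckoo_search(f, dim=2, n_nests=15, n_iter=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for t in range(n_iter):
        # step-size control factor decaying from 1.0 toward 0.01 over the run
        alpha = 0.01 ** (t / n_iter)
        best = nests[np.argmin(fit)].copy()
        # heavy-tailed (Cauchy) flights scaled by distance to the best nest
        new = nests + alpha * rng.standard_cauchy((n_nests, dim)) * (nests - best)
        new_fit = np.array([f(x) for x in new])
        improved = new_fit < fit
        nests[improved], fit[improved] = new[improved], new_fit[improved]
        # abandon a fraction pa of the worst nests and re-seed them randomly
        n_drop = int(pa * n_nests)
        worst = np.argsort(fit)[-n_drop:]
        nests[worst] = rng.uniform(-5.0, 5.0, (n_drop, dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    i = np.argmin(fit)
    return nests[i], fit[i]

# Minimize a toy quadratic; in PD the objective would be the wavefront-error metric.
x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)))
```

Shrinking the step size over iterations trades early global exploration for late local refinement, which is the intuition behind the paper's variable-step-size modification.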
261
Liang J, Wang LV. Single-shot ultrafast optical imaging. Optica 2018; 5:1113-1127. [PMID: 30820445] [PMCID: PMC6388706] [DOI: 10.1364/optica.5.001113] [Received: 07/12/2018] [Accepted: 08/21/2018]
Abstract
Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at ≥100 million frames per second. This rapidly evolving field surpasses conventional pump-probe methods through its real-time imaging capability, which is indispensable for recording non-repeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we comprehensively survey the state of the art in single-shot ultrafast optical imaging. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image acquisition and reconstruction strategies, these two categories are further divided into a total of six sub-categories. Under each sub-category, we describe operating principles, present representative cutting-edge techniques with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision prospects for technical advancement in this field.
Affiliation(s)
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 Boulevard Lionel-Boulet, Varennes, QC J3X1S2, Canada
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA 91125, USA
262
Göröcs Z, Tamamitsu M, Bianco V, Wolf P, Roy S, Shindo K, Yanny K, Wu Y, Koydemir HC, Rivenson Y, Ozcan A. A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples. Light Sci Appl 2018; 7:66. [PMID: 30245813] [PMCID: PMC6143550] [DOI: 10.1038/s41377-018-0067-0] [Received: 06/12/2018] [Revised: 08/28/2018] [Accepted: 08/29/2018]
Abstract
We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides drastic reductions in cost, size, and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Affiliation(s)
- Zoltán Göröcs
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Miu Tamamitsu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Vittorio Bianco
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Patrick Wolf
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Shounak Roy
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Koyoshi Shindo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Kyrollos Yanny
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- Yichen Wu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Hatice Ceylan Koydemir
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
263
Pietrini A, Nettelblad C. Using convex optimization of autocorrelation with constrained support and windowing for improved phase retrieval accuracy. Opt Express 2018; 26:24422-24443. [PMID: 30469561] [DOI: 10.1364/oe.26.024422] [Received: 04/20/2018] [Accepted: 08/08/2018]
Abstract
In imaging modalities recording diffraction data, such as the imaging of viruses at X-ray free-electron laser facilities, the original image can be reconstructed assuming known phases. When phases are unknown, oversampling and a constraint on the support region of the original object can be used to solve a non-convex optimization problem with iterative alternating-projection methods. Such schemes are ill-suited to finding the optimum solution for sparse data, since the recorded pattern does not correspond exactly to the original wave function, and different iteration starting points can give rise to different solutions. We construct a convex optimization problem, where the only local optimum is also the global optimum. This is achieved using a modified support constraint and a maximum-likelihood treatment of the recorded data as a sample from the underlying wave function. This relaxed problem is solved to provide a new set of most probable "healed" signal intensities, without sparseness and missing data. For these new intensities, it should be possible to satisfy the support constraint and the intensity constraint exactly, without conflicts between them. By making both constraints satisfiable, traditional phase retrieval with superior results is made possible. On simulated data, we demonstrate the benefits of our approach visually, and quantify the improvement in terms of the crystallographic R factor for the recovered scalar amplitudes relative to true simulations (from 0.405 to 0.097) as well as the mean-squared error in the reconstructed image (from 0.233 to 0.139). We also compare our approach, with regard to theory and simulation results, to other approaches for healing as well as noise-tolerant phase retrieval. These tests indicate that the COACS pre-processing allows for best-in-class results.
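The maximum-likelihood treatment of recorded counts mentioned above uses the Poisson log-likelihood as the data term. A minimal sketch (dropping the constant log k! term; not the authors' implementation):

```python
import numpy as np

def poisson_loglik(counts, intensity):
    # Log-likelihood of Poisson-distributed photon counts given the
    # underlying signal intensities; the constant log(k!) term is dropped.
    intensity = np.asarray(intensity, dtype=float)
    return float(np.sum(counts * np.log(intensity) - intensity))

counts = np.array([1.0, 4.0, 2.0])
```

Because this log-likelihood is concave in the intensities, maximizing it under linear (support-type) constraints yields the convex "healing" problem with a single global optimum.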
264
Wang H, Lyu M, Situ G. eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction. Opt Express 2018; 26:22603-22614. [PMID: 30184918] [DOI: 10.1364/oe.26.022603] [Received: 06/21/2018] [Accepted: 08/05/2018]
Abstract
It is well known that in-line digital holography (DH) makes use of the full pixel count in forming the holographic image. However, it usually requires phase-shifting or phase retrieval techniques to remove the zero-order and twin-image terms, resulting in a two-step reconstruction process, i.e., phase recovery and focusing. Here, we propose a one-step, end-to-end learning-based method for in-line holographic reconstruction, named eHoloNet, which can reconstruct the object wavefront directly from a single-shot in-line digital hologram. In addition, the proposed learning-based DH technique is strongly robust to changes in the optical path difference between the reference beam and the object light, and it does not require the reference beam to be a plane or spherical wave.
265
Zhang W, Cao L, Brady DJ, Zhang H, Cang J, Zhang H, Jin G. Twin-image-free holography: a compressive sensing approach. Phys Rev Lett 2018; 121:093902. [PMID: 30230890] [DOI: 10.1103/physrevlett.121.093902] [Received: 04/17/2018]
Abstract
Holographic reconstruction is troubled by the phase-conjugate wavefront arising from the Hermitian symmetry of the complex field. The resulting twin image obfuscates the reconstruction when solving the inverse problem. Here we quantitatively reveal how, and by how much, the twin image affects the reconstruction and propose a compressive sensing (CS) approach that reconstructs a hologram completely free from the twin image. Using the canonical basis, the incoherence condition of CS is naturally satisfied by the Fourier transformation associated with wave propagation. With the propagation kernel function related to the distance, the object wave diffracts into a sharp pattern while the phase-conjugate wave diffracts into a diffuse pattern. An iterative algorithm using a total variation sparsity constraint can then filter out the diffuse conjugate signal and overcome the inherent physical symmetry of holographic reconstruction. The feasibility is verified by simulation and experimental results, as well as a comparative study against an existing phase retrieval method.
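The distance-dependent propagation kernel central to this argument can be sketched with the standard angular-spectrum transfer function; the sampling parameters below are illustrative, not from the paper:

```python
import numpy as np

def angular_spectrum(field, d, wavelength, dx):
    # Propagate a sampled complex field by distance d using the
    # angular-spectrum transfer function; evanescent waves are cut off.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    H = np.exp(2j * np.pi * d * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(2)
field0 = rng.random((64, 64)) + 1j * rng.random((64, 64))
field1 = angular_spectrum(field0, 1e-4, 633e-9, 1e-6)   # forward 100 um
field2 = angular_spectrum(field1, -1e-4, 633e-9, 1e-6)  # propagate back
```

Back-propagating by -d focuses the object wave while defocusing its conjugate, which is exactly the asymmetry the TV-sparsity constraint exploits to suppress the twin image.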
Affiliation(s)
- Wenhui Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
- Department of Electrical and Computer Engineering, Box 90291, Duke University, Durham, North Carolina 27708, USA
- Liangcai Cao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
- David J Brady
- Department of Electrical and Computer Engineering, Box 90291, Duke University, Durham, North Carolina 27708, USA
- Hua Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
- Department of Electrical and Computer Engineering, Box 90291, Duke University, Durham, North Carolina 27708, USA
- Ji Cang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
- Hao Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
- Guofan Jin
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
|
266
|
Zhou Y, Wu J, Suo J, Han X, Zheng G, Dai Q. Single-shot lensless imaging via simultaneous multi-angle LED illumination. OPTICS EXPRESS 2018; 26:21418-21432. [PMID: 30130850 DOI: 10.1364/oe.26.021418] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2018] [Accepted: 07/17/2018] [Indexed: 06/08/2023]
Abstract
Lensless imaging is a technique that records diffraction patterns without using lenses and recovers the complex field of the object via phase retrieval. A robust lensless phase retrieval process usually requires multiple measurements with defocus variation, transverse translation, or angle-varied illumination. However, making such diverse measurements is time-consuming and limits the application of lensless setups to dynamic samples. In this paper, we propose a single-shot lensless imaging scheme via simultaneous multi-angle LED illumination. Diffraction patterns under multi-angle illumination are recorded by different areas of the sensor within a single shot. An optimization algorithm is applied to the single-shot measurement to retrieve the aliased information for reconstruction. We first use numerical simulations to evaluate the proposed scheme quantitatively by comparison with the multi-acquisition case. Then a proof-of-concept lensless setup is built to validate the method by imaging a resolution chart and biological samples, achieving ∼4.92 μm half-pitch resolution and a ∼1.202 mm² field of view (FOV). We also discuss different design tradeoffs and present a 4-frame acquisition scheme (with ∼3.48 μm half-pitch resolution and a ∼2.35 × 2.55 mm² FOV) to show the flexibility of performance enhancement by capturing more measurements.
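The forward model behind angle-multiplexed illumination can be sketched as follows (an editorial toy, not the paper's implementation): a tilted LED acts as a tilted plane wave on the object, which shifts the far-field diffraction pattern, and several such patterns land on different sensor areas in one exposure. The angles, wavelength, and pitch below are hypothetical.

```python
import numpy as np

def tilted_illumination(obj, angle_x, angle_y, wavelength, dx):
    """Multiply the object by a tilted plane wave exp(i*k*(sin(ax)*x + sin(ay)*y))."""
    n = obj.shape[0]
    coords = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    return obj * np.exp(1j * k * (np.sin(angle_x) * X + np.sin(angle_y) * Y))

def far_field_intensity(field):
    """Fraunhofer (far-field) diffraction pattern: intensity of the Fourier transform."""
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

rng = np.random.default_rng(1)
obj = rng.random((32, 32)) * np.exp(1j * rng.random((32, 32)))  # toy complex object
wl, dx = 632.8e-9, 2e-6
angles = [(-0.05, 0.0), (0.0, 0.0), (0.05, 0.0)]  # three hypothetical LED angles (rad)
patterns = [far_field_intensity(tilted_illumination(obj, ax, ay, wl, dx))
            for ax, ay in angles]
single_shot = np.hstack(patterns)  # toy stand-in for "different sensor areas" in one shot
```

For a flat object the untilted pattern peaks at the spectrum center, and the tilt moves that peak, which is exactly the angle diversity the reconstruction algorithm untangles.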
|
267
|
Zhang X, Li C, Meng Q, Liu S, Zhang Y, Wang J. Infrared Image Super Resolution by Combining Compressive Sensing and Deep Learning. SENSORS (BASEL, SWITZERLAND) 2018; 18:E2587. [PMID: 30087286 PMCID: PMC6111996 DOI: 10.3390/s18082587] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Revised: 07/26/2018] [Accepted: 08/03/2018] [Indexed: 12/01/2022]
Abstract
Super-resolution methods alleviate the high cost and difficulty of applying high-resolution infrared image sensors. In this paper we present a novel single-image super-resolution method for infrared images that combines compressive sensing theory and deep learning. In compressive sensing, low-resolution images can be regarded as compressed samples of the high-resolution ones, and sparsity allows higher-resolution images to be reconstructed from them. However, because the level of sparsity differs across images, the output contains noise and loses high-frequency information. A deep convolutional neural network provides a way to reduce this noise and restore some of the missing high-frequency information. By cascading the two methods, we produce better super-resolution results for infrared images than SRCNN and ScSR, with PSNR and SSIM values used to quantify the performance. Applying our method to open datasets and actual infrared imaging experiments, we also find that better visual results are preserved.
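The compressive sensing half of such a pipeline rests on sparse recovery from few measurements. As a hedged, generic illustration (ISTA on a synthetic 1-D problem, not the paper's method or data), this is how a sparse signal is recovered from an underdetermined linear measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5                  # signal length, measurements, sparsity level
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # "low-resolution" measurement

def ista(A, y, lam=0.01, steps=500):
    """Iterative shrinkage-thresholding: min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
```

With m = 64 measurements of a 5-sparse length-128 signal, the recovery is close to the ground truth; in the paper's setting a learned network then cleans up the residual noise and high-frequency loss.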
Affiliation(s)
- Xudong Zhang
  - University of Chinese Academy of Sciences, Beijing 101408, China
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
- Chunlai Li
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
- Qingpeng Meng
  - University of Chinese Academy of Sciences, Beijing 101408, China
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
- Shijie Liu
  - University of Chinese Academy of Sciences, Beijing 101408, China
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
- Yue Zhang
  - University of Chinese Academy of Sciences, Beijing 101408, China
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
- Jianyu Wang
  - University of Chinese Academy of Sciences, Beijing 101408, China
  - Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai 200083, China
|
268
|
Compression of Phase-Only Holograms with JPEG Standard and Deep Learning. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8081258] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Reducing the enormous amount of data involved in processing, storing, and transmitting a hologram in digital format is a critical issue. In photograph compression, the JPEG standard is commonly supported by almost every system and device, so it would be favorable if the JPEG standard were also applicable to hologram compression, with the advantage of universal compatibility. However, the image reconstructed from a JPEG-compressed hologram suffers severe quality degradation, since some high-frequency features in the hologram are lost during compression. In this work, we employ a deep convolutional neural network to reduce the artifacts in a JPEG-compressed hologram. Simulation and experimental results reveal that our proposed "JPEG + deep learning" hologram compression scheme can achieve satisfactory reconstruction results for a computer-generated phase-only hologram after compression.
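The high-frequency loss that the network compensates comes from JPEG's coarse quantization of high-frequency DCT coefficients. A minimal sketch of that mechanism on one 8×8 block (a generic JPEG-style illustration with a made-up quantization table, not the paper's settings):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows = basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    M[0] /= np.sqrt(2)
    return M

D = dct_matrix(8)
rng = np.random.default_rng(0)
block = rng.random((8, 8)) * 255                 # one 8x8 image block
coeffs = D @ block @ D.T                         # 2D DCT of the block
# hypothetical quantization table: step grows with spatial frequency,
# so high-frequency coefficients are rounded far more coarsely
q = 1 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])
coeffs_q = np.round(coeffs / q) * q              # JPEG-style quantize/dequantize
block_rec = D.T @ coeffs_q @ D                   # decoded block
```

Since the DCT is orthonormal, the pixel-domain error equals the coefficient-domain error, and that error is bounded by half the (frequency-dependent) quantization step, which is exactly where a learned artifact-reduction network recovers detail.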
|
269
|
Lin X, Rivenson Y, Yardimci NT, Veli M, Luo Y, Jarrahi M, Ozcan A. All-optical machine learning using diffractive deep neural networks. Science 2018; 361:1004-1008. [PMID: 30049787 DOI: 10.1126/science.aat8084] [Citation(s) in RCA: 377] [Impact Index Per Article: 62.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2018] [Accepted: 07/12/2018] [Indexed: 12/18/2022]
Abstract
Deep learning has been transforming our ability to execute advanced inference tasks using computers. Here we introduce a physical mechanism to perform machine learning by demonstrating an all-optical diffractive deep neural network (D2NN) architecture that can implement various functions following the deep learning-based design of passive diffractive layers that work collectively. We created 3D-printed D2NNs that implement classification of images of handwritten digits and fashion products, as well as the function of an imaging lens at a terahertz spectrum. Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can execute; will find applications in all-optical image analysis, feature detection, and object classification; and will also enable new camera designs and optical components that perform distinctive tasks using D2NNs.
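The D2NN forward pass can be caricatured numerically (an editorial sketch with a toy Fresnel kernel and random, untrained masks; the real layers are learned and 3D-printed): light propagates between passive layers, each of which applies only a phase delay, and the detector records intensity.

```python
import numpy as np

def propagate(field, kernel):
    """Free-space step: multiply the spectrum by a unit-modulus transfer function."""
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def d2nn_forward(field, phase_masks, kernel):
    """Cascade of diffractive layers: propagate, then apply a passive phase mask."""
    for phase in phase_masks:
        field = propagate(field, kernel)
        field = field * np.exp(1j * phase)       # phase-only, passive modulation
    return np.abs(propagate(field, kernel)) ** 2  # detector records intensity

n = 32
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
wl_dz = 4.0                                      # hypothetical lumped Fresnel factor
kernel = np.exp(-1j * np.pi * wl_dz * (FX ** 2 + FY ** 2))
rng = np.random.default_rng(0)
masks = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(5)]  # 5 diffractive layers
out = d2nn_forward(np.ones((n, n), complex), masks, kernel)
```

Because every element is phase-only and the propagation kernel has unit modulus, the network is lossless: total detected energy equals the input energy, consistent with an all-optical, passive implementation.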
Affiliation(s)
- Xing Lin
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Nezih T Yardimci
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Muhammed Veli
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yi Luo
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
  - Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
  - Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
  - Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
|
270
|
Zhang G, Guan T, Shen Z, Wang X, Hu T, Wang D, He Y, Xie N. Fast phase retrieval in off-axis digital holographic microscopy through deep learning. OPTICS EXPRESS 2018; 26:19388-19405. [PMID: 30114112 DOI: 10.1364/oe.26.019388] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Accepted: 07/10/2018] [Indexed: 06/08/2023]
Abstract
Traditional digital holographic imaging algorithms need multiple iterations to obtain a focused reconstructed image, which is time-consuming. For phase retrieval, there is also the problem of phase compensation in addition to the focusing task. Here, a new method is proposed for fast digital focusing, in which a U-shaped convolutional neural network (U-Net) is used to recover the original phase of microscopic samples. Generated data sets are used to simulate different degrees of defocus, and we verify that the U-Net can restore the original phase to a great extent and realize phase compensation at the same time. We apply this method in the construction of a real-time off-axis digital holographic microscope and obtain a substantial improvement in imaging speed.
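For context, the conventional off-axis demodulation baseline that such a network learns to accelerate can be sketched in a few lines of numpy (an editorial toy: synthetic phase object, hypothetical carrier frequency and filter cutoff, no defocus or aberration compensation):

```python
import numpy as np

n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
phi = 2 * np.pi * 0.08 * np.exp(-((X - 32) ** 2 + (Y - 32) ** 2) / 200)  # sample phase
obj = np.exp(1j * phi)                           # pure-phase object wave
fc = 16                                          # carrier: 16 cycles across 64 pixels
ref = np.exp(1j * 2 * np.pi * fc * X / n)        # tilted plane reference wave
holo = np.abs(obj + ref) ** 2                    # recorded off-axis hologram (real)

# demodulate: multiplying by the reference shifts the +1 order to baseband,
# then a low-pass filter isolates it from the carrier and conjugate terms
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
mask = (FX ** 2 + FY ** 2) < 0.12 ** 2
rec = np.fft.ifft2(np.fft.fft2(holo * ref) * mask)
phase_rec = np.angle(rec)                        # approximately phi, up to filter error
```

The recovered phase tracks the synthetic sample phase closely here; the appeal of the learned approach in the cited paper is doing focusing and phase compensation in one pass on real, defocused data.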
|
271
|
Jiang S, Guo K, Liao J, Zheng G. Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow. BIOMEDICAL OPTICS EXPRESS 2018; 9:3306-3319. [PMID: 29984099 PMCID: PMC6033553 DOI: 10.1364/boe.9.003306] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Revised: 06/03/2018] [Accepted: 06/13/2018] [Indexed: 05/05/2023]
Abstract
Fourier ptychography is a recently developed imaging approach for large field-of-view and high-resolution microscopy. Here we model the Fourier ptychographic forward imaging process using a convolutional neural network (CNN) and recover the complex object information in a network training process. In this approach, the input of the network is the point spread function in the spatial domain or the coherent transfer function in the Fourier domain. The object is treated as 2D learnable weights of a convolutional or a multiplication layer. The output of the network is modeled as the loss function we aim to minimize. The batch size of the network corresponds to the number of captured low-resolution images in one forward/backward pass. We use a popular open-source machine learning library, TensorFlow, for setting up the network and conducting the optimization process. We analyze the performance of different learning rates, different solvers, and different batch sizes. It is shown that a large batch size with the Adam optimizer achieves the best performance in general. To accelerate the phase retrieval process, we also discuss a strategy to implement Fourier-magnitude projection using a multiplication neural network model. Since convolution and multiplication are the two most common operations in imaging modeling, the reported approach may provide a new perspective to examine many coherent and incoherent systems. As a demonstration, we discuss the extensions of the reported networks for modeling single-pixel imaging and structured illumination microscopy (SIM). 4-frame resolution doubling is demonstrated using a neural network for SIM. The link between imaging systems and neural network modeling may enable the use of machine-learning hardware such as neural engines and tensor processing units for accelerating the image reconstruction process. We have made our implementation code open-source for researchers.
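The Fourier-magnitude projection mentioned above is a standard building block of phase retrieval, and it is simple to state in numpy (a generic sketch, not the paper's TensorFlow implementation): keep the current Fourier phase, replace the Fourier magnitude with the measured one.

```python
import numpy as np

def fourier_magnitude_projection(x, measured_mag):
    """Replace the Fourier magnitude of x with the measured one, keeping the phase."""
    X = np.fft.fft2(x)
    return np.fft.ifft2(measured_mag * np.exp(1j * np.angle(X)))

rng = np.random.default_rng(0)
target = rng.random((16, 16))                    # toy object
measured_mag = np.abs(np.fft.fft2(target))       # magnitudes we pretend were measured
guess = rng.random((16, 16))                     # arbitrary starting estimate
projected = fourier_magnitude_projection(guess, measured_mag)
```

By construction, one projection already satisfies the magnitude constraint exactly; iterative schemes alternate it with object-domain constraints, which is what the multiplication-network formulation accelerates.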
Affiliation(s)
- Shaowei Jiang
  - Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Kaikai Guo
  - Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Jun Liao
  - Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Guoan Zheng
  - Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269, USA
|
272
|
Lyu M, Wang W, Wang H, Wang H, Li G, Chen N, Situ G. Deep-learning-based ghost imaging. Sci Rep 2017; 7:17865. [PMID: 29259269 PMCID: PMC5736587 DOI: 10.1038/s41598-017-18171-7] [Citation(s) in RCA: 106] [Impact Index Per Article: 15.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Accepted: 12/05/2017] [Indexed: 11/10/2022] Open
Abstract
In this manuscript, we propose a novel framework for computational ghost imaging: ghost imaging using deep learning (GIDL). With a set of images reconstructed using traditional GI and the corresponding ground-truth counterparts, a deep neural network was trained so that it learns the sensing model and improves the quality of image reconstruction. Moreover, detailed comparisons between images reconstructed using deep learning and using compressive sensing show that the proposed GIDL performs much better at extremely low sampling rates. Numerical simulations and optical experiments were carried out to demonstrate the proposed GIDL.
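The "traditional GI" reconstructions that feed the network are correlation images: the bucket (single-pixel) signal is correlated with the known illumination patterns. A minimal numpy sketch (an editorial toy with random patterns and a binary object, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 4000                       # image size, number of illumination patterns
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                 # binary transmissive object
patterns = rng.random((m, n, n))      # random speckle-like illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # bucket-detector signal per pattern

# traditional GI: covariance between the bucket signal and each pattern pixel
gi = (bucket[:, None, None] * patterns).mean(axis=0) \
     - bucket.mean() * patterns.mean(axis=0)
gi_norm = (gi - gi.min()) / (gi.max() - gi.min())  # normalized for display
```

The correlation image already resembles the object (object pixels correlate positively with the bucket signal); at low sampling rates it becomes noisy, which is the regime where the learned GIDL mapping helps most.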
Affiliation(s)
- Meng Lyu
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Wei Wang
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Hao Wang
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Haichao Wang
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Guowei Li
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Ni Chen
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Guohai Situ
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
|
273
|
Zhang Y, Shin Y, Sung K, Yang S, Chen H, Wang H, Teng D, Rivenson Y, Kulkarni RP, Ozcan A. 3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy. SCIENCE ADVANCES 2017; 3:e1700553. [PMID: 28819645 PMCID: PMC5553818 DOI: 10.1126/sciadv.1700553] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2017] [Accepted: 07/12/2017] [Indexed: 05/07/2023]
Abstract
High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3'-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition, and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of acquired data while preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm². The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Because it is low-cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings.
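The multi-height phase recovery step can be sketched as an alternating-projection loop (an editorial toy, not the authors' pipeline: a synthetic pure-phase object, hypothetical wavelength, pitch, and sensor heights): amplitudes are "measured" at several heights, and the iteration cycles between planes, enforcing each measured amplitude while keeping the evolving phase.

```python
import numpy as np

def asm(field, wl, dz, dx):
    """Angular-spectrum free-space propagation by dz (simplified kernel)."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    arg = np.maximum(1 - (wl * FX) ** 2 - (wl * FY) ** 2, 0)
    H = np.exp(1j * 2 * np.pi / wl * dz * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, wl, dx = 64, 530e-9, 2e-6
X, Y = np.meshgrid(np.arange(n), np.arange(n))
phi = 0.8 * np.exp(-((X - 32) ** 2 + (Y - 32) ** 2) / 80)
obj = np.exp(1j * phi)                          # pure-phase "tissue" object
heights = [200e-6, 250e-6, 300e-6]              # hypothetical sample-to-sensor distances
amps = [np.abs(asm(obj, wl, z, dx)) for z in heights]   # simulated measurements

# multi-height phase recovery: cycle through planes, enforcing each amplitude
est = amps[0].astype(complex)                   # start with zero phase at plane 0
mismatch = []
for sweep in range(30):
    err = 0.0
    for k in range(len(heights)):
        nxt = (k + 1) % len(heights)
        est = asm(est, wl, heights[nxt] - heights[k], dx)
        err += np.mean(np.abs(np.abs(est) - amps[nxt]))
        est = amps[nxt] * np.exp(1j * np.angle(est))    # keep phase, fix amplitude
    mismatch.append(err)
obj_est = asm(est, wl, -heights[0], dx)         # back-propagate to the object plane
```

The amplitude mismatch across the measurement planes shrinks over the sweeps, and the propagator itself is exactly invertible at these sampling parameters, which is what makes the digital refocusing through the cleared tissue possible.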
Affiliation(s)
- Yibo Zhang
  - Electrical Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yoonjung Shin
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Division of Dermatology, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Kevin Sung
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Division of Dermatology, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Sam Yang
  - Electrical Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Harrison Chen
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Hongda Wang
  - Electrical Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Da Teng
  - Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yair Rivenson
  - Electrical Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Rajan P. Kulkarni
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Division of Dermatology, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Aydogan Ozcan
  - Electrical Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
  - Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
|