1. Chalfoun J, Lund SP, Ling C, Peskin A, Pierce L, Halter M, Elliott J, Sarkar S. Establishing a reference focal plane using convolutional neural networks and beads for brightfield imaging. Sci Rep 2024;14:7768. PMID: 38565548; PMCID: PMC10987482; DOI: 10.1038/s41598-024-57123-w.
Abstract
Repeatability of measurements from image analytics is difficult to achieve, owing to the heterogeneity and complexity of cell samples, exact microscope stage positioning, and slide thickness. We present a method to define and use a reference focal plane that provides repeatable measurements with very high accuracy, by relying on control beads as reference material and a convolutional neural network trained on the control bead images. Previously we defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features. This paper both generalizes and improves on this previous work. First, we refine the definition of the REFP by fitting a cubic spline to describe the relationship between the distance from a bead's center and pixel intensity, and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another. Instead, we apply a convolutional regression neural network (ResNet 18), trained on cropped bead images, that generalizes to multiple microscopes. Our ResNet 18 network predicts the location of the REFP from a single image acquisition that can be taken across a wide range of focal planes and exposure times. We illustrate the different strategies and the hyperparameter optimization of the ResNet 18 used to achieve high prediction accuracy, with the uncertainty for every tested image falling within the microscope repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of this methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only 6 beads per image.
Affiliation(s)
- Joe Chalfoun, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Steven P Lund, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Chenyi Ling, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Adele Peskin, National Institute of Standards and Technology, Boulder, CO, USA
- Laura Pierce, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Michael Halter, National Institute of Standards and Technology, Gaithersburg, MD, USA
- John Elliott, National Institute of Standards and Technology, Gaithersburg, MD, USA
- Sumona Sarkar, National Institute of Standards and Technology, Gaithersburg, MD, USA

2. Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024;13:4. PMID: 38161203; PMCID: PMC10758000; DOI: 10.1038/s41377-023-01340-x.
Abstract
Phase recovery (PR) refers to calculating the phase of a light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
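As a concrete anchor for the conventional PR methods the review begins with, the classic Gerchberg-Saxton iteration alternates between the object and Fourier planes, enforcing each measured amplitude while keeping the evolving phase. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200):
    """Classic iterative phase retrieval between two intensity planes.

    amp_in, amp_out: measured amplitudes (sqrt of intensity) in the
    object plane and the far-field (Fourier) plane.
    Returns the recovered object-plane phase.
    """
    phase = np.zeros_like(amp_in)                 # initial phase guess
    for _ in range(n_iter):
        field = amp_in * np.exp(1j * phase)
        far = np.fft.fft2(field)
        far = amp_out * np.exp(1j * np.angle(far))   # enforce far-field amplitude
        field = np.fft.ifft2(far)
        phase = np.angle(field)                   # enforce object amplitude, keep phase
    return phase

# Demo: recover the phase of a synthetic object from two amplitude measurements.
rng = np.random.default_rng(0)
true_phase = rng.uniform(-1.0, 1.0, (32, 32))
amp_obj = np.ones((32, 32))
amp_far = np.abs(np.fft.fft2(amp_obj * np.exp(1j * true_phase)))
recovered = gerchberg_saxton(amp_obj, amp_far, n_iter=100)
```

The DL approaches surveyed in the review replace or augment exactly this kind of iterative projection loop.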
Affiliation(s)
- Kaiqiang Wang, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China; Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Li Song, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren, School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao, Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou, School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di, School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou, Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao, School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y Lam, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China

3. Advances in Digital Holographic Interferometry. J Imaging 2022;8:jimaging8070196. PMID: 35877640; PMCID: PMC9323567; DOI: 10.3390/jimaging8070196.
Abstract
Holographic interferometry is a well-established field of science and optical engineering. It has a half-century history of successful implementation as the solution to numerous technical tasks and problems. However, rapid progress in digital and computer holography has raised it to a new level of capability and opened brand-new fields of application. In this review paper, we consider several of these new techniques and applications.

4. Cuenat S, Andréoli L, André AN, Sandoz P, Laurent GJ, Couturier R, Jacquot M. Fast autofocusing using tiny transformer networks for digital holographic microscopy. Opt Express 2022;30:24730-24746. PMID: 36237020; DOI: 10.1364/oe.458948.
Abstract
The numerical wavefront backpropagation principle of digital holography confers unique extended-focus capabilities without mechanical displacements along the z-axis. However, determining the correct focusing distance is a non-trivial and time-consuming issue. A deep learning (DL) solution is proposed to cast the autofocusing as a regression problem and is tested over both experimental and simulated holograms. Single-wavelength digital holograms were recorded by a Digital Holographic Microscope (DHM) with a 10x microscope objective from a patterned target moving in 3D over an axial range of 92 µm. Tiny DL models are proposed and compared, such as a tiny Vision Transformer (TViT), a tiny VGG16 (TVGG) and a tiny Swin Transformer (TSwinT). The proposed tiny networks are compared with their original versions (ViT/B16, VGG16 and Swin-Transformer Tiny) and with the main neural networks used in digital holography, such as LeNet and AlexNet. The experiments show that the predicted focusing distance ZRPred is inferred with an average accuracy of 1.2 µm, compared with the DHM depth of field of 15 µm. Numerical simulations show that all tiny models give ZRPred with an error below 0.3 µm. Such a prospect would significantly improve the current capabilities of computer vision position sensing in applications such as 3D microscopy for life sciences or micro-robotics. Moreover, all models reach inference times below 25 ms per inference on CPU. Under occlusions, TViT is the most robust, owing to its Transformer architecture.

5. Jaferzadeh K, Fevens T. HoloPhaseNet: fully automated deep-learning-based hologram reconstruction using a conditional generative adversarial model. Biomed Opt Express 2022;13:4032-4046. PMID: 35991913; PMCID: PMC9352290; DOI: 10.1364/boe.452645.
Abstract
Quantitative phase imaging with off-axis digital holography in a microscopic configuration provides insight into cells' intracellular content and morphology. This imaging is conventionally achieved by numerical reconstruction of the recorded hologram, which requires the precise setting of the reconstruction parameters, including the reconstruction distance, a proper phase unwrapping algorithm, and the wave vector components. This paper shows that deep learning can perform the complex light propagation task independently of the reconstruction parameters. We also show that the superimposed twin-image elimination technique is not required to retrieve the quantitative phase image. The hologram at the single-cell level is fed into a trained image generator (part of a conditional generative adversarial network model), which produces the phase image. The model's generalization is also demonstrated by training it with holograms of size 512×512 pixels, and the resulting quantitative analysis is shown.

6. Zuo C, Qian J, Feng S, Yin W, Li Y, Fan P, Han J, Qian K, Chen Q. Deep learning in optical metrology: a review. Light Sci Appl 2022;11:39. PMID: 35197457; PMCID: PMC8866517; DOI: 10.1038/s41377-022-00714-x.
Abstract
With advances in scientific foundations and technological implementations, optical metrology has become a versatile problem-solving backbone in manufacturing, fundamental research, and engineering applications, such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, has emerged as a powerful tool for addressing problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for deep neural networks. It is currently gaining extensive attention for its use in the field of optical metrology. Unlike the traditional "physics-based" approach, deep-learning-enabled optical metrology is a "data-driven" approach, which has already provided numerous alternative solutions to many challenging problems in this field with better performance. In this review, we present an overview of the current status and the latest progress of deep-learning technologies in the field of optical metrology. We first briefly introduce both traditional image-processing algorithms in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by the current deep-learning approach in optical metrology are then discussed. Finally, directions for future research are outlined.
Grants
- 61722506, 61705105, 62075096 National Natural Science Foundation of China
- National Key R&D Program of China (2017YFF0106403)
- Leading Technology of Jiangsu Basic Research Plan (BK20192003)
- National Defense Science and Technology Foundation of China (2019-JCJQ-JJ-381)
- "333 Engineering" Research Project of Jiangsu Province (BRA2016407)
- Fundamental Research Funds for the Central Universities (30920032101, 30919011222)
- Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091801410411)
Affiliation(s)
- Chao Zuo, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiaming Qian, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Shijie Feng, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Wei Yin, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Yixuan Li, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Pengfei Fan, Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; School of Engineering and Materials Science, Queen Mary University of London, London, E1 4NS, UK
- Jing Han, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Kemao Qian, School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Qian Chen, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China

7. Yolalmaz A, Yüce E. Comprehensive deep learning model for 3D color holography. Sci Rep 2022;12:2487. PMID: 35169161; PMCID: PMC8847588; DOI: 10.1038/s41598-022-06190-y.
Abstract
Holography is a vital tool used in applications ranging from microscopy, solar energy, imaging, and display to information encryption. Generating a holographic image and reconstructing object/hologram information from a holographic image with current algorithms are time-consuming processes. Versatile, fast, and accurate methodologies are required to compute holograms that perform color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image. Here, we focus on the design of optical holograms for the generation of holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that multiplex color holographic image planes by tuning holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring phase and amplitude information from the intensity image. We show that reconstructed objects/holograms are in excellent agreement with the ground-truth images. The CHoloNet does not need iterative reconstruction of object/hologram information, whereas conventional object/hologram recovery methods rely on multiple holographic images at various observation planes together with iterative algorithms. We openly share the fast and efficient framework that we developed in order to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and generation of holographic images will speed up wide-area implementation of optical holography in microscopy, data encryption, and communication technologies.
Affiliation(s)
- Alim Yolalmaz, Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
- Emre Yüce, Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey

8. Pirone D, Sirico D, Miccio L, Bianco V, Mugnano M, Ferraro P, Memmolo P. Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning. Lab Chip 2022;22:793-804. PMID: 35076055; DOI: 10.1039/d1lc01087e.
Abstract
Tomographic flow cytometry by digital holography is an emerging imaging modality capable of collecting multiple views of moving and rotating cells with the aim of recovering their refractive index distribution in 3D. Although this modality gives access to high-resolution imaging with high throughput, the huge amount of time-lapse holographic images to be processed (hundreds of digital holograms per cell) constitutes the actual bottleneck. This prevents the system from being suitable for lab-on-a-chip platforms in real-world applications, where fast analysis of measured data is mandatory. Here we demonstrate significantly faster reconstruction of phase-contrast tomograms by introducing a multi-scale fully convolutional context aggregation network into the processing pipeline. Although it was originally developed in the context of semantic image analysis, we demonstrate for the first time that it can be successfully adapted to a holographic lab-on-chip platform for achieving 3D tomograms through a faster computational process. We trained the network with input-output image pairs to reproduce the end-to-end holographic reconstruction process, i.e. recovering quantitative phase maps (QPMs) of single cells from their digital holograms. The sequence of QPMs of the same rotating cell is then used to perform the tomographic reconstruction. The proposed approach significantly reduces the computational time for retrieving tomograms, making them available in a few seconds instead of tens of minutes, while essentially preserving the high-content information of tomographic data. Moreover, we have achieved a compact deep convolutional neural network parameterization that can fit into on-chip SRAM with a small memory footprint, demonstrating its possible use for onboard computation in lab-on-chip devices with limited processing hardware resources.
Affiliation(s)
- Daniele Pirone, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy; DIETI, Department of Electrical Engineering and Information Technologies, University of Naples "Federico II", via Claudio 21, 80125 Napoli, Italy
- Daniele Sirico, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Lisa Miccio, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Vittorio Bianco, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Martina Mugnano, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pietro Ferraro, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pasquale Memmolo, CNR-ISASI, Institute of Applied Sciences and Intelligent Systems "E. Caianiello", Via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy

9. Winnik J, Suski D, Zdańkowski P, Stanaszek L, Micó V, Trusiak M. Versatile optimization-based speed-up method for autofocusing in digital holographic microscopy. Opt Express 2021;29:33297-33311. PMID: 34809144; DOI: 10.1364/oe.438496.
Abstract
We propose a speed-up method for in-focus plane detection in digital holographic microscopy that can be applied to a broad class of autofocusing algorithms that involve repetitive propagation of an object wave to various axial locations to decide the in-focus position. Classical autofocusing algorithms apply a uniform search strategy, i.e., they probe multiple, uniformly distributed axial locations, which leads to heavy computational overhead. Our method substantially reduces the computational load, without sacrificing accuracy, by judiciously selecting the next location to investigate, which decreases the total number of probed propagation distances. This is achieved by applying the golden-section search with parabolic interpolation, which is the gold standard for tackling single-variable optimization problems. The proposed approach is successfully applied to three diverse autofocusing cases, providing up to 136-fold speed-up.
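The described search is Brent-style: golden-section steps combined with parabolic interpolation. The golden-section half alone already conveys why the number of probed propagation distances drops, since the bracket shrinks geometrically instead of being scanned uniformly. A pure-Python sketch, with a toy quadratic standing in for a real focus metric evaluated at propagation distance z (all names illustrative):

```python
def golden_section_search(f, a, b, tol=1e-6):
    """Golden-section search for the minimiser of a unimodal f on [a, b].

    Each step shrinks the bracket by a factor 1/phi, so the number of
    metric evaluations grows logarithmically with the search range
    rather than linearly as in a uniform axial scan.
    """
    invphi = (5 ** 0.5 - 1) / 2  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:              # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                    # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Toy stand-in for a sharpness cost evaluated at propagation distance z;
# a real pipeline would propagate the hologram to z and score the result.
cost = lambda z: (z - 42.0) ** 2
z_best = golden_section_search(cost, 0.0, 100.0)
```

Brent's method, as used in the paper, additionally tries a parabolic fit through the three best points at each step and falls back to golden-section steps when the fit is unreliable.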

10. Chen D, Wang Z, Chen K, Zeng Q, Wang L, Xu X, Liang J, Chen X. Classification of unlabeled cells using lensless digital holographic images and deep neural networks. Quant Imaging Med Surg 2021;11:4137-4148. PMID: 34476194; DOI: 10.21037/qims-21-16.
Abstract
Background: Image-based cell analytic methodologies offer a relatively simple and economical way to analyze and understand cell heterogeneities and developments. Owing to developments in high-resolution image sensors and high-performance computation processors, the emerging lensless digital holography technique enables a simple and cost-effective approach to obtain label-free cell images with a large field of view and microscopic spatial resolution.
Methods: The holograms of three types of cells, including MCF-10A, EC-109, and MDA-MB-231 cells, were recorded using a lensless digital holography system composed of a laser diode, a sample stage, an image sensor, and a laptop computer. The amplitude images were reconstructed using the angular spectrum method, and the sample-to-sensor distance was determined using an autofocusing criterion based on the sparsity of image edges and corner points. Four convolutional neural networks (CNNs) were used to classify the cell types based on the recovered holographic images.
Results: Classification of two cell types and of three cell types both achieved an accuracy higher than 91% with all the networks used. The ResNet and DenseNet models had similar classification accuracies of 95% or greater, outperforming the GoogLeNet and CNN-5 models.
Conclusions: These experiments demonstrated that the CNNs were effective at classifying two or three types of tumor cells. Lensless holography combined with machine learning holds great promise for stain-free cell imaging and classification, such as in cancer diagnosis and cancer biology research, where distinguishing normal cells from cancer cells and recognizing different cancer cell types will be greatly beneficial.
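The angular spectrum reconstruction step mentioned in the Methods can be sketched in NumPy: Fourier-transform the recorded field, multiply each spatial frequency by its free-space propagation phase, and transform back. Parameter values here are illustrative, not the paper's:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a complex optical field over distance z (angular spectrum method).

    field: 2D complex array sampled on a grid with pixel pitch dx.
    Evanescent components (spatial frequencies beyond 1/wavelength) are dropped.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: a uniform plane wave only picks up a global phase.
hologram = np.ones((64, 64), dtype=complex)
refocused = angular_spectrum_propagate(hologram, z=1e-3, wavelength=532e-9, dx=1e-6)
```

In a lensless pipeline, z would be swept (or chosen by an autofocus criterion such as the edge-sparsity measure the paper uses) to find the sample-to-sensor distance.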
Affiliation(s)
- Duofang Chen, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Zhaohui Wang, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Kai Chen, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Qi Zeng, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Lin Wang, School of Computer Science, Xi'an Polytechnic University, Xi'an, China
- Xinyi Xu, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Jimin Liang, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Xueli Chen, Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China

11. Peskin A, Lund SP, Pierce L, Kurbanov F, Chan LLY, Halter M, Elliott J, Sarkar S, Chalfoun J. Establishing a reference focal plane using beads for trypan-blue-based viability measurements. J Microsc 2021;283:243-258. PMID: 34115371; DOI: 10.1111/jmi.13037.
Abstract
Trypan blue dye-exclusion-based cell viability measurements are highly dependent upon image quality and consistency. To make measurements repeatable, one must be able to reliably capture images at a consistent focal plane, and with a signal-to-noise ratio within appropriate limits to support proper execution of image analysis routines. Imaging chambers and imaging systems used for trypan blue analysis can be inconsistent or can drift over time, leading to a need to assure proper image acquisition prior to automated image analysis. Although cell-based autofocus techniques can be applied, the heterogeneity and complexity of the cell samples can make it difficult to assure the effectiveness, repeatability, and accuracy of the routine for each measurement. Instead of autofocusing on cells in our images, we add control beads to the images and use them to repeatedly return to a reference focal plane. We use bead image features that have stable profiles across a wide range of focal values and exposure levels. We created a predictive model based on image quality features computed over reference datasets. Because the beads have little variation, we can determine the reference plane from bead image features computed over a single-shot image and can reproducibly return to that reference plane with each sample. The achieved accuracy (over 95%) is within the limits of the actuator repeatability. We demonstrate that a small number of beads (fewer than 3 beads per image) is needed to achieve this accuracy. We have also developed an open-source graphical user interface called the Bead Benchmarking-Focus And Intensity Tool (BB-FAIT) to implement these methods for a semi-automated cell viability analyser.
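A toy illustration of the single-shot idea, with entirely synthetic data and illustrative names: if a bead image feature varies monotonically with focal offset over a calibration z-stack, a fitted inverse mapping turns one feature measurement into a stage correction. The paper's actual model uses several features and a more careful fit; this sketch only shows the calibrate-then-predict structure.

```python
import numpy as np

# Synthetic calibration: a bead feature that varies smoothly with the
# focal offset (micrometres), as measured over a reference z-stack.
z_ref = np.linspace(-50.0, 50.0, 101)
feature_ref = 0.8 * z_ref + 3.0          # stand-in for a real image feature

# Fit the inverse mapping: feature value -> focal offset.
coeffs = np.polyfit(feature_ref, z_ref, deg=1)

# Single-shot use: one feature value from a new image yields the offset
# needed to return the stage to the reference focal plane.
measured_feature = 0.8 * 12.5 + 3.0      # image acquired 12.5 um off-plane
predicted_offset = np.polyval(coeffs, measured_feature)
```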
Affiliation(s)
- Adele Peskin, National Institute of Standards and Technology, Boulder, Colorado
- Steven P Lund, National Institute of Standards and Technology, Gaithersburg, Maryland
- Laura Pierce, National Institute of Standards and Technology, Gaithersburg, Maryland
- Firdavs Kurbanov, National Institute of Standards and Technology, Gaithersburg, Maryland
- Leo Li-Ying Chan, Department of Advanced Technology R&D, Nexcelom Bioscience LLC, Lawrence, Massachusetts
- Michael Halter, National Institute of Standards and Technology, Gaithersburg, Maryland
- John Elliott, National Institute of Standards and Technology, Gaithersburg, Maryland
- Sumona Sarkar, National Institute of Standards and Technology, Gaithersburg, Maryland
- Joe Chalfoun, National Institute of Standards and Technology, Gaithersburg, Maryland

12. Zhu Y, Hang Yeung C, Lam EY. Digital holographic imaging and classification of microplastics using deep transfer learning. Appl Opt 2021;60:A38-A47. PMID: 33690352; DOI: 10.1364/ao.403366.
Abstract
We devise an inline digital holographic imaging system equipped with a lightweight deep learning network, termed CompNet, and develop transfer learning for classification and analysis. It has a compression block consisting of a concatenated rectified linear unit (CReLU) activation to reduce the channels, and a class-balanced cross-entropy loss for training. The method is particularly suitable for small and imbalanced datasets, and we apply it to the detection and classification of microplastics. Our results show clear improvements in feature extraction, generalization, and classification accuracy, effectively overcoming the problem of overfitting. This method could be attractive for future in situ microplastic particle detection and classification applications.
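The CReLU activation in the compression block is simple to state: it concatenates the positive and negative parts of the input along the channel axis, doubling the channel count while discarding no sign information, which is what lets the preceding convolutions use fewer filters. A NumPy sketch, with an illustrative activation tensor:

```python
import numpy as np

def crelu(x, channel_axis=1):
    """Concatenated ReLU: [ReLU(x), ReLU(-x)] stacked along the channel axis."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)],
                          axis=channel_axis)

# A (batch, channel, H, W) activation: 2 channels in, 4 channels out.
x = np.array([[[[1.0, -2.0]],
               [[-3.0, 4.0]]]])             # shape (1, 2, 1, 2)
y = crelu(x)                                # shape (1, 4, 1, 2)
```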
13
Shao S, Mallery K, Hong J. Machine learning holography for measuring 3D particle distribution. Chem Eng Sci 2020. [DOI: 10.1016/j.ces.2020.115830] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Indexed: 11/24/2022]
14
Moon I, Jaferzadeh K, Kim Y, Javidi B. Noise-free quantitative phase imaging in Gabor holography with conditional generative adversarial network. Opt Express 2020; 28:26284-26301. [PMID: 32906903 DOI: 10.1364/oe.398528] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 05/27/2020] [Accepted: 08/05/2020] [Indexed: 06/11/2023]
Abstract
This paper shows that deep learning can eliminate the superimposed twin-image noise in phase images from a Gabor holographic setup. This is achieved with a conditional generative adversarial network (C-GAN) trained on input-output pairs of noisy phase images obtained from synthetic Gabor holography and the corresponding quantitative, noise-free contrast-phase images obtained by off-axis digital holography. To build the training data, Gabor holograms are generated from digital off-axis holograms by spatially shifting the real image and the twin image in the frequency domain and then adding them to the DC term in the spatial domain. Digital propagation of the Gabor hologram under the Fresnel approximation then yields a superimposed phase image for the C-GAN input. Two models were trained: one on human red blood cells and one on elliptical cancer cells. After training, several quantitative analyses were conducted on the biochemical properties and on the similarity between actual noise-free phase images and the model output. Notably, the model can recover other elliptical cell lines that were not observed during training, and it can compensate for some misalignments: even if the reconstruction distance is somewhat incorrect, the model still retrieves in-focus images.
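The digital propagation step under the Fresnel approximation can be sketched as a standard transfer-function propagator. This is a generic illustration, not the authors' code, and the wavelength and grid parameters in the test are made-up values:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Propagate a square 2-D complex field a distance dz using the Fresnel
    transfer function H = exp(-i*pi*lambda*dz*(fx^2 + fy^2)) in the
    frequency domain. dx is the pixel pitch in meters."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Since |H| = 1, the propagation is unitary: propagating by dz and then -dz recovers the input exactly, which is why a slightly wrong reconstruction distance only defocuses the image rather than destroying it.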
15
Ahmadzadeh E, Jaferzadeh K, Shin S, Moon I. Automated single cardiomyocyte characterization by nucleus extraction from dynamic holographic images using a fully convolutional neural network. Biomed Opt Express 2020; 11:1501-1516. [PMID: 32206425 PMCID: PMC7075611 DOI: 10.1364/boe.385218] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 12/05/2019] [Revised: 02/12/2020] [Accepted: 02/12/2020] [Indexed: 05/06/2023]
Abstract
The beating of human-induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs) can be efficiently characterized from time-lapse quantitative phase images (QPIs) obtained by digital holographic microscopy. In particular, the nucleus region of a cardiomyocyte (CM) precisely reflects its rhythmic beating pattern and is well suited for subsequent characterization. In this paper, we describe an automated method to characterize single CMs by extracting the nucleus from QPIs and then reconstructing and quantifying the beating pattern. Accurate nucleus extraction from QPIs is challenging because of variations in shape, size, and orientation and the lack of a distinctive geometry. To this end, we propose a fully convolutional neural network (FCN) architecture that extracts the CM nucleus by pixel-wise classification and then characterizes the beating pattern. Our experimental results show that the beating profiles of individually extracted CMs are less noisy and more informative than those computed from the whole image slide. Applying this method allows CM characterization at the single-cell level: several single CMs are extracted from the whole-slide QPIs, and multiple parameters of each isolated CM's beating profile are efficiently measured.
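Once a nucleus mask is available (however it was obtained), reconstructing the beating profile reduces to averaging the phase inside the mask frame by frame. A minimal sketch, with a synthetic sinusoidal stack standing in for real QPIs; the function names and the FFT-peak rate estimate are illustrative assumptions, not the paper's method:

```python
import numpy as np

def beating_profile(qpi_stack, nucleus_mask):
    """Mean phase value inside the boolean nucleus mask for each frame of a
    (frames, H, W) stack; the result is the 1-D beating signal."""
    return np.array([frame[nucleus_mask].mean() for frame in qpi_stack])

def beating_rate(signal, fps):
    """Dominant beating frequency (Hz) as the FFT peak of the mean-subtracted
    signal; assumes the beat is roughly periodic over the recording."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

Restricting the average to the nucleus mask is what suppresses background noise relative to averaging the whole slide, as the abstract reports.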
Affiliation(s)
- Ezat Ahmadzadeh: Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu 42988, South Korea; Department of Computer Engineering, Chosun University, Dong-gu, Gwangju 61452, South Korea
- Keyvan Jaferzadeh: Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu 42988, South Korea
- Seokjoo Shin: Department of Computer Engineering, Chosun University, Dong-gu, Gwangju 61452, South Korea
- Inkyu Moon: Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science & Technology, Dalseong-gun, Daegu 42988, South Korea
16
Shao S, Mallery K, Kumar SS, Hong J. Machine learning holography for 3D particle field imaging. Opt Express 2020; 28:2987-2999. [PMID: 32121975 DOI: 10.1364/oe.379480] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 10/01/2019] [Accepted: 01/01/2020] [Indexed: 06/10/2023]
Abstract
We propose a new learning-based approach for 3D particle field imaging using holography. Our approach uses a U-Net architecture incorporating residual connections, Swish activation, hologram preprocessing, and transfer learning to cope with the challenges of particle holograms, where accurate measurement of individual particles is crucial. Assessments on both synthetic and experimental holograms demonstrate a significant improvement in particle extraction rate, localization accuracy, and speed compared with prior methods over a wide range of particle concentrations, including highly dense concentrations where other methods are unsuitable. Our approach can potentially be extended to other computational imaging tasks with similar features.
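Two of the ingredients named above, Swish activation and residual connections, have simple functional forms. This NumPy sketch shows only those forms as a generic illustration; the actual U-Net layers of the paper are omitted:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x). Smooth and non-monotonic,
    unlike ReLU, which can help regression-style reconstruction tasks."""
    return x / (1.0 + np.exp(-beta * x))

def residual_block(x, transform):
    """Residual (skip) connection: output = input + transform(input), so the
    block only needs to learn a correction to the identity mapping."""
    return x + transform(x)
```

The identity path of the residual block is what lets gradients flow through deep reconstruction networks without vanishing.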