1. Castañeda R, Trujillo C, Doblas A. A human erythrocytes hologram dataset for learning-based model training. Data Brief 2024; 54:110424. [PMID: 38708305; PMCID: PMC11068518; DOI: 10.1016/j.dib.2024.110424]
Abstract
This manuscript presents a paired dataset of experimental holograms and their corresponding reconstructed phase maps of human red blood cells (RBCs). The holographic images were recorded using an off-axis telecentric digital holographic microscope (DHM). The imaging system consists of a 40×/0.65NA infinity-corrected microscope objective (MO) lens and a tube lens (TL) with a focal length of 200 mm, recording diffraction-limited holograms. A CMOS camera with 1920 × 1200 pixels and a pixel pitch of 5.86 µm was located at the back focal plane of the TL, capturing image-plane holograms. The off-axis, telecentric, diffraction-limited DHM system guarantees accurate quantitative phase maps. Initially comprising 300 holograms, the dataset was augmented to 36,864 instances, enabling the investigation (i.e., training and testing) of learning-based models that reconstruct aberration-free phase images from raw holograms. The dataset facilitates the training and testing of end-to-end models for quantitative phase imaging with DHM systems operating in the telecentric regime, as well as non-telecentric DHM systems in which the spherical wavefront has been compensated physically. As such, it holds promise for advancing investigations in digital holographic microscopy and computational imaging.
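The expansion from 300 holograms to 36,864 instances suggests patch-based augmentation. A rough sketch of such a pipeline (the patch size, stride, and transform set below are hypothetical, not the dataset's actual recipe): overlapping crops combined with the eight flip/rotation variants multiply each hologram into many training instances.

```python
import numpy as np

def augment_hologram(holo, patch=256, stride=128):
    """Expand one hologram into many training instances via overlapping
    crops plus the eight flip/rotation variants of each crop."""
    h, w = holo.shape
    out = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            p = holo[r:r + patch, c:c + patch]
            for k in range(4):                 # 4 rotations...
                q = np.rot90(p, k)
                out.append(q)
                out.append(np.fliplr(q))       # ...times 2 mirror states
    return np.stack(out)

rng = np.random.default_rng(0)
holo = rng.random((512, 512))    # stand-in for one recorded hologram
aug = augment_hologram(holo)
print(aug.shape)   # (72, 256, 256): 3x3 crop grid x 8 orientations
```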
Affiliation(s)
- Raul Castañeda
- Applied Optics Group, School of Applied Sciences and Engineering, EAFIT University, Medellin 050037, Colombia
- Carlos Trujillo
- Applied Optics Group, School of Applied Sciences and Engineering, EAFIT University, Medellin 050037, Colombia
- Ana Doblas
- Electrical and Computer Engineering Department, University of Massachusetts – Dartmouth, USA
2. Nolte DD. Coherent light scattering from cellular dynamics in living tissues. Rep Prog Phys 2024; 87:036601. [PMID: 38433567; DOI: 10.1088/1361-6633/ad2229]
Abstract
This review examines the biological physics of intracellular transport probed by the coherent optics of dynamic light scattering from optically thick living tissues. Cells and their constituents are in constant motion, composed of a broad range of speeds spanning many orders of magnitude that reflect the wide array of functions and mechanisms that maintain cellular health. From the organelle scale of tens of nanometers and upward in size, the motion inside living tissue is actively driven rather than thermal, propelled by the hydrolysis of bioenergetic molecules and the forces of molecular motors. Active transport can mimic the random walks of thermal Brownian motion, but mean-squared displacements are far from thermal equilibrium and can display anomalous diffusion through Lévy or fractional Brownian walks. Despite the average isotropic three-dimensional environment of cells and tissues, active cellular or intracellular transport of single light-scattering objects is often pseudo-one-dimensional, for instance as organelle displacement persists along cytoskeletal tracks or as membranes displace along the normal to cell surfaces, albeit isotropically oriented in three dimensions. Coherent light scattering is a natural tool to characterize such tissue dynamics because persistent directed transport induces Doppler shifts in the scattered light. The many frequency-shifted partial waves from the complex and dynamic media interfere to produce dynamic speckle that reveals tissue-scale processes through speckle contrast imaging and fluctuation spectroscopy. Low-coherence interferometry, dynamic optical coherence tomography, diffusing-wave spectroscopy, diffuse-correlation spectroscopy, differential dynamic microscopy and digital holography offer coherent detection methods that shed light on intracellular processes. 
In health-care applications, altered states of cellular health and disease display altered cellular motions that imprint on the statistical fluctuations of the scattered light. For instance, the efficacy of medical therapeutics can be monitored by measuring the changes they induce in the Doppler spectra of living ex vivo cancer biopsies.
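The Doppler-based monitoring described above can be illustrated with a toy fluctuation-spectroscopy calculation: a frequency-shifted scattered wave beating against a static reference produces an intensity beat note at the Doppler frequency, recoverable from the power spectrum. All values below are illustrative.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz); all values illustrative
t = np.arange(4096) / fs
f_doppler = 25.0                 # Doppler shift of a moving scatterer (Hz)

rng = np.random.default_rng(1)
reference = 1.0                                        # static reference field
scattered = 0.3 * np.exp(2j * np.pi * f_doppler * t)   # frequency-shifted wave
intensity = np.abs(reference + scattered) ** 2         # detected beat signal
intensity += 0.01 * rng.standard_normal(t.size)        # detector noise

# Power spectrum of the intensity fluctuations (DC removed)
spec = np.abs(np.fft.rfft(intensity - intensity.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[int(np.argmax(spec))]
print(peak)   # within one frequency bin of the 25 Hz Doppler shift
```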
Affiliation(s)
- David D Nolte
- Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, United States of America
3. Rogalski M, Arcab P, Stanaszek L, Micó V, Zuo C, Trusiak M. Physics-driven universal twin-image removal network for digital in-line holographic microscopy. Opt Express 2024; 32:742-761. [PMID: 38175095; DOI: 10.1364/oe.505440]
Abstract
Digital in-line holographic microscopy (DIHM) enables efficient and cost-effective computational quantitative phase imaging with a large field of view, making it valuable for studying cell motility, migration, and bio-microfluidics. However, the quality of DIHM reconstructions is compromised by twin-image noise, posing a significant challenge. Conventional methods for mitigating this noise involve complex hardware setups or time-consuming algorithms with often limited effectiveness. In this work, we propose UTIRnet, a deep learning solution for fast, robust, and universally applicable twin-image suppression, trained exclusively on numerically generated datasets. The availability of open-source UTIRnet codes facilitates its implementation in various DIHM systems without the need for extensive experimental training data. Notably, our network ensures the consistency of reconstruction results with input holograms, imparting a physics-based foundation and enhancing reliability compared to conventional deep learning approaches. Experimental verification included, among other samples, sensing the migration of live neural glial cell cultures, which is crucial for neurodegenerative disease research.
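The hologram-consistency constraint mentioned above rests on free-space propagation, commonly implemented with the angular spectrum method. A minimal sketch (not UTIRnet itself; all parameters are illustrative) also shows why consistency alone is insufficient: a naive back-propagated reconstruction reproduces the hologram exactly on re-propagation, yet it still carries the twin image, which is why a learned prior is needed on top of the physics.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field over distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, pitch, n = 0.5e-6, 2e-6, 256   # illustrative values (metres)
obj = np.ones((n, n), dtype=complex)
obj[96:160, 96:160] = 0.5                  # simple absorbing object
holo = np.abs(angular_spectrum(obj, wavelength, pitch, 5e-3)) ** 2

# Naive reconstruction: back-propagate the measured amplitude
recon = angular_spectrum(np.sqrt(holo).astype(complex), wavelength, pitch, -5e-3)
# Consistency check: re-propagating the naive reconstruction reproduces the
# hologram even though `recon` still contains twin-image noise.
resim = np.abs(angular_spectrum(recon, wavelength, pitch, 5e-3)) ** 2
err = np.mean((resim - holo) ** 2)
print(err < 1e-12)   # True
```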
4. Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. [PMID: 38161203; PMCID: PMC10758000; DOI: 10.1038/s41377-023-01340-x]
Abstract
Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
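Among the conventional PR methods the review introduces, the Gerchberg–Saxton algorithm is the classic example: alternate between the object and Fourier planes, enforcing the measured amplitude in each while the phase evolves. A minimal sketch on synthetic data:

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_four, iters=200, seed=0):
    """Gerchberg-Saxton: alternate between object and Fourier planes,
    enforcing the measured amplitude in each; only the phase evolves."""
    rng = np.random.default_rng(seed)
    g = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = amp_four * np.exp(1j * np.angle(G))   # Fourier-magnitude constraint
        g = np.fft.ifft2(G)
        g = amp_obj * np.exp(1j * np.angle(g))    # object-magnitude constraint
    return np.angle(g)

# Synthetic ground truth: unit amplitude with a smooth phase bump
n = 64
y, x = np.mgrid[:n, :n]
true_phase = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 50.0)
field = np.exp(1j * true_phase)
amp_obj = np.abs(field)
amp_four = np.abs(np.fft.fft2(field))

est = gerchberg_saxton(amp_obj, amp_four)
# The Fourier-magnitude residual is non-increasing over the iterations
resid = np.mean((np.abs(np.fft.fft2(amp_obj * np.exp(1j * est))) - amp_four) ** 2)
```

Recovered phase is defined only up to the usual global-offset and conjugate ambiguities, which is one reason learned priors are attractive.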
Affiliation(s)
- Kaiqiang Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China.
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China.
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Li Song
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao
- School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y Lam
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China

5. Xu G, Jin B, Yang S, Liu P. Field recovery from digital inline holographic images of composite propellant combustion based on denoising diffusion model. Opt Express 2023; 31:38216-38227. [PMID: 38017933; DOI: 10.1364/oe.499648]
Abstract
Digital inline holography has gained extensive application in the optical diagnosis of solid propellant combustion. However, the method confronts several challenges. First, the calculation time required for reconstruction and depth-of-field extension is excessively long. Second, the smoke, airflow, and flame produced during combustion cause significant interference and poor reconstruction quality, which reduces the accuracy of particle identification. To address these issues, we have developed a holographic image reconstruction technique for aluminum particle combustion based on attention mechanisms, U-Net, and diffusion models. This approach enables end-to-end reconstruction of aluminum particle combustion holograms while effectively circumventing the interference of smoke, airflow, and flame.
6. Yang B, Liu W, Chen X, Chen G, Zhu X. A novel multi-frame wavelet generative adversarial network for scattering reconstruction of structured illumination microscopy. Phys Med Biol 2023; 68:185016. [PMID: 37619594; DOI: 10.1088/1361-6560/acf3cb]
Abstract
Objective. Structured illumination microscopy (SIM) is widely used in various fields of life science research. In clinical practice, it offers low phototoxicity, fast imaging speed, and no need for special fluorescent markers. However, SIM is still affected by the scattering media of biological tissues, so the obtained images lack resolution, which limits its use in the life sciences. A novel multi-frame wavelet generative adversarial network (MWGAN) is proposed to improve the scattering reconstruction capability of SIM.
Approach. MWGAN is based on two components derived from the original image. A generative adversarial network built on the wavelet transform is trained to reconstruct complex details of the cell structure, and a multi-frame adversarial network exploits inter-frame information, using the complementary content of the preceding and following frames to improve reconstruction quality.
Results. To demonstrate the robustness of MWGAN, multiple low-quality SIM image datasets were tested. Compared with state-of-the-art methods, the proposed method achieves superior performance in both subjective and objective evaluations.
Conclusion. MWGAN is effective at improving the clarity of SIM images. Meanwhile, multi-frame reconstruction improves quality in complex regions and allows clearer, dynamic observation of cellular functions.
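MWGAN's generator is built around the wavelet transform. A single-level 2-D Haar decomposition, the simplest instance of the sub-band splitting such a network operates on, can be sketched as follows (a generic illustration, not MWGAN's actual layers):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform: approximation (LL) and detail
    (LH, HL, HH) sub-bands, each half the input size (sides must be even)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0   # horizontal-edge details
    hl = (a - b + c - d) / 2.0   # vertical-edge details
    hh = (a - b - c + d) / 2.0   # diagonal details
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

rng = np.random.default_rng(2)
img = rng.random((128, 128))
bands = haar_dwt2(img)
rec = haar_idwt2(*bands)
print(np.allclose(rec, img))   # True: the transform is invertible
```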
Affiliation(s)
- Bin Yang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China

7. Liu X, Yan X, Wang X. The U-Net-based phase-only CGH using the two-dimensional phase grating. Opt Express 2022; 30:41624-41643. [PMID: 36366635; DOI: 10.1364/oe.473205]
Abstract
In this paper, phase-only holograms with clear first diffraction orders are generated using a U-Net and a two-dimensional phase grating. First, we prove the modulation effect of the two-dimensional phase grating on the diffraction field, concluding that it shifts the diffraction pattern of the hologram to the centers of the odd-numbered diffraction orders. We then modify the hologram generation process and the U-Net training strategy accordingly, converting the optimization target of the U-Net from the zeroth diffraction order at the center of the diffraction field to the first diffraction order at its edge. We also use a method called "phase recombination" to improve the U-Net structure for a smaller memory footprint and faster generation. Finally, holograms with 4K resolution were generated in 0.05 s, and the average peak signal-to-noise ratio (PSNR) of the reconstructed images is about 37.2 dB on the DIV2K-valid-HR dataset.
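The 37.2 dB figure is the standard peak signal-to-noise ratio, which for images normalised to [0, 1] is PSNR = 10·log10(1/MSE). A minimal implementation:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio (dB) for images normalised to [0, peak]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(psnr(ref, noisy))   # roughly 40 dB for 1% additive noise
```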
8. Chen H, Huang L, Liu T, Ozcan A. Fourier Imager Network (FIN): A deep neural network for hologram reconstruction with superior external generalization. Light Sci Appl 2022; 11:254. [PMID: 35970839; PMCID: PMC9378708; DOI: 10.1038/s41377-022-00949-8]
Abstract
Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. The FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples, while also being much faster in its image inference speed, completing the hologram reconstruction task in ~0.04 s per 1 mm² of sample area. We experimentally validated the performance of FIN by training it using human lung tissue samples and blindly testing it on human prostate, salivary gland tissue and Pap smear samples, proving its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision fields.
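The spatial Fourier transform module at the heart of FIN applies learnable per-frequency filters, which gives every output pixel a global receptive field. A toy forward pass (randomly initialised weights standing in for learned ones; not the published architecture):

```python
import numpy as np

def fourier_module(x, filt):
    """One Fourier-domain module: FFT, per-frequency complex filter, inverse
    FFT. Every output pixel depends on every input pixel (global receptive
    field), unlike a small convolution kernel."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * filt))

rng = np.random.default_rng(4)
n = 32
x = rng.random((n, n))
filt = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = fourier_module(x, filt)

# Perturb a single input pixel: (almost) every output pixel changes
x2 = x.copy()
x2[0, 0] += 1.0
frac = np.mean(np.abs(fourier_module(x2, filt) - y) > 1e-9)
print(frac)   # close to 1.0
```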
Affiliation(s)
- Hanlong Chen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, 90095, USA
9. Ju YG, Choo HG, Park JH. Learning-based complex field recovery from digital hologram with various depth objects. Opt Express 2022; 30:26149-26168. [PMID: 36236811; DOI: 10.1364/oe.461782]
Abstract
In this paper, we investigate a learning-based technique to recover the complex field of an object from its digital hologram. Most previous learning-based approaches first propagate the captured hologram to the object plane and then suppress the DC and conjugate noise in the reconstruction. In contrast, the proposed technique uses a deep learning network to extract the object complex field directly in the hologram plane, making it robust to object depth variations and well suited to three-dimensional objects. Unlike previous approaches, which concentrate on transparent biological samples with near-uniform amplitude, the proposed technique is applied to more general objects with large amplitude variations. The technique is verified by numerical simulations and optical experiments, demonstrating its feasibility.
10. Jaferzadeh K, Fevens T. HoloPhaseNet: fully automated deep-learning-based hologram reconstruction using a conditional generative adversarial model. Biomed Opt Express 2022; 13:4032-4046. [PMID: 35991913; PMCID: PMC9352290; DOI: 10.1364/boe.452645]
Abstract
Quantitative phase imaging with off-axis digital holography in a microscopic configuration provides insight into the cells' intracellular content and morphology. This imaging is conventionally achieved by numerical reconstruction of the recorded hologram, which requires the precise setting of reconstruction parameters, including the reconstruction distance, a proper phase unwrapping algorithm, and the wave vector components. This paper shows that deep learning can perform the complex light propagation task independently of the reconstruction parameters. We also show that a superimposed twin-image elimination technique is not required to retrieve the quantitative phase image. The hologram at the single-cell level is fed into a trained image generator (part of a conditional generative adversarial network model), which produces the phase image. The model's generalization is also demonstrated by training it with holograms of 512×512 pixels, and the resulting quantitative analysis is shown.
11. Melanthota SK, Gopal D, Chakrabarti S, Kashyap AA, Radhakrishnan R, Mazumder N. Deep learning-based image processing in optical microscopy. Biophys Rev 2022; 14:463-481. [PMID: 35528030; PMCID: PMC9043085; DOI: 10.1007/s12551-022-00949-3]
Abstract
Optical microscopy has emerged as a key driver of fundamental research since it provides the ability to probe into imperceptible structures in the biomedical world. For the detailed investigation of samples, a high-resolution image with enhanced contrast and minimal damage is preferred. To achieve this, an automated image analysis method is preferable over manual analysis in terms of both speed of acquisition and reduced error accumulation. In this regard, deep learning (DL)-based image processing can be highly beneficial. This review summarises and critiques the use of DL in image processing for data collected using various optical microscopy techniques. In tandem with optical microscopy, DL has already found applications in various problems related to image classification and segmentation. It has also performed well in enhancing image resolution in smartphone-based microscopy, which in turn enables crucial medical assistance in remote places.
Affiliation(s)
- Sindhoora Kaniyala Melanthota
- Department of Biophysics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Dharshini Gopal
- Department of Bioinformatics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Shweta Chakrabarti
- Department of Bioinformatics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Anirudh Ameya Kashyap
- Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Raghu Radhakrishnan
- Department of Oral Pathology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal 576104, India
- Nirmal Mazumder
- Department of Biophysics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India

12. Castaneda R, Trujillo C, Doblas A. Video-Rate Quantitative Phase Imaging Using a Digital Holographic Microscope and a Generative Adversarial Network. Sensors 2021; 21:8021. [PMID: 34884025; PMCID: PMC8659916; DOI: 10.3390/s21238021]
Abstract
The conventional reconstruction method of off-axis digital holographic microscopy (DHM) relies on computational processing that involves spatial filtering of the sample spectrum and tilt compensation between the interfering waves to accurately reconstruct the phase of a biological sample. Additional computational procedures such as numerical focusing may be needed to reconstruct distortion-free quantitative phase images, depending on the optical configuration of the DHM system. Regardless of the implementation, any DHM computational processing leads to long processing times, hampering the use of DHM for video-rate renderings of dynamic biological processes. In this study, we report on a conditional generative adversarial network (cGAN) for robust and fast quantitative phase imaging in DHM. The reconstructed phase images provided by the GAN model present stable background levels, enhancing the visualization of the specimens for different experimental conditions in which the conventional approach often fails. The proposed learning-based method was trained and validated using human red blood cells recorded on an off-axis Mach–Zehnder DHM system. After proper training, the proposed GAN yields a computationally efficient method, reconstructing DHM images seven times faster than conventional computational approaches.
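The conventional off-axis pipeline that the cGAN replaces (spatial filtering of one cross-term order followed by tilt compensation) can be sketched on a synthetic hologram; the carrier frequency, filter radius, and test phase object below are all illustrative:

```python
import numpy as np

n = 256
y, x = np.mgrid[:n, :n]

# Synthetic off-axis hologram of a pure-phase "cell" (all values illustrative)
phase = 1.5 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 400.0)
obj = np.exp(1j * phase)
kx, ky = 60, 40                                    # carrier (cycles per frame)
ref = np.exp(2j * np.pi * (kx * x + ky * y) / n)   # tilted reference wave
holo = np.abs(obj + ref) ** 2

# 1) Spatial filtering: isolate the cross-term order carrying exp(+i*phase)
F = np.fft.fftshift(np.fft.fft2(holo))
cx, cy = n // 2 - kx, n // 2 - ky
mask = (x - cx) ** 2 + (y - cy) ** 2 < 25 ** 2
# 2) Tilt compensation: shift the filtered order back to the spectrum centre
F_cent = np.roll(F * mask, (ky, kx), axis=(0, 1))
rec_phase = np.angle(np.fft.ifft2(np.fft.ifftshift(F_cent)))

err = np.mean(np.abs(rec_phase - phase))
print(err)   # small mean phase error (radians)
```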
Affiliation(s)
- Raul Castaneda
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA
- Carlos Trujillo
- Applied Optics Group, Physical Sciences Department, Universidad EAFIT, Medellin 050037, Colombia
- Ana Doblas
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN 38152, USA

13. Ding H, Li F, Meng Z, Feng S, Ma J, Nie S, Yuan C. Auto-focusing and quantitative phase imaging using deep learning for the incoherent illumination microscopy system. Opt Express 2021; 29:26385-26403. [PMID: 34615075; DOI: 10.1364/oe.434014]
Abstract
It is well known that the quantitative phase information that is vital in biomedical studies cannot be directly obtained with bright-field microscopy under incoherent illumination. In addition, it is difficult to keep a living sample in focus over long-term observation. Therefore, autofocusing and quantitative phase imaging have to be solved simultaneously in microscopy. Here, we propose a lightweight deep learning-based framework, built on residual structures and constrained by a novel loss function, to realize both autofocusing and quantitative phase imaging. It outputs the corresponding in-focus amplitude and phase information at high speed (10 fps) from a single-shot out-of-focus bright-field image. The training data were captured with a purpose-built system under hybrid incoherent and coherent illumination. The experimental results verify that focused and quantitative phase images of non-biological and biological samples can be reconstructed using the framework. It provides a versatile quantitative technique for continuous monitoring of living cells in long-term, label-free imaging using a traditional incoherent illumination microscope.
14. Kim E, Park S, Hwang S, Moon I, Javidi B. Deep Learning-based Phenotypic Assessment of Red Cell Storage Lesions for Safe Transfusions. IEEE J Biomed Health Inform 2021; 26:1318-1328. [PMID: 34388103; DOI: 10.1109/jbhi.2021.3104650]
Abstract
This study presents a novel approach to automatically perform instant phenotypic assessment of red blood cell (RBC) storage lesions in phase images obtained by digital holographic microscopy. The proposed model combines a generative adversarial network (GAN) with a marker-controlled watershed segmentation scheme. The GAN model performs RBC segmentation and classification to develop ageing markers, and the watershed segmentation completely separates overlapping RBCs. Our approach achieved good segmentation and classification accuracy, with a Dice coefficient of 0.94 at a high throughput of about 152 cells per second. These results were compared with other deep neural network architectures. Moreover, our image-based deep learning models recognized the morphological changes that occur in RBCs during storage. Our deep learning-based classification results were in good agreement with previous findings on the changes in RBC markers (dominant shapes) affected by storage duration. We believe that our image-based deep learning models can be useful for automated assessment of RBC quality, storage lesions for safe transfusions, and diagnosis of RBC-related diseases.
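The reported 0.94 is the Dice similarity coefficient: twice the overlap of two masks divided by the sum of their areas. A minimal implementation on synthetic circular masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two overlapping circular "cell" masks (radii and offsets illustrative)
y, x = np.mgrid[:64, :64]
pred = (x - 30) ** 2 + (y - 32) ** 2 < 15 ** 2
truth = (x - 34) ** 2 + (y - 32) ** 2 < 15 ** 2
print(dice(pred, truth))   # high overlap, below the perfect score of 1.0
```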