1. Goswami S, Krishnan G, Javidi B. Robustness of single random phase encoding lensless imaging with camera noise. Optics Express 2024;32:4916-4930. PMID: 38439231. DOI: 10.1364/oe.510950.
Abstract
In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We first used numerical simulation to obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera. We then used various statistical measures to quantify how the information shared between the noise-free and noisy PSFs degrades as the camera noise becomes stronger. For comparison with lens-based imaging, we ran identical simulations with the diffuser in the lensless SRPE system replaced by lenses. Our results show that SRPE lensless imaging systems retain more information between corresponding noisy and noiseless PSFs under high camera noise than lens-based imaging systems. We also examined how physical parameters of the diffuser, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for using lensless SRPE systems to improve imaging in the presence of image sensor noise.
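As a rough illustration of the camera-noise model this abstract describes, the sketch below (not the authors' code; the full-well capacity, read-noise level, and bit depth are assumptions) adds shot, read, and quantization noise to a synthetic PSF and reports one simple shared-information measure, the correlation with the noise-free PSF.

```python
# Minimal sketch, assuming illustrative sensor parameters.
import numpy as np

rng = np.random.default_rng(0)

def add_camera_noise(psf, full_well=10000.0, read_sigma=5.0, bits=10):
    """Return a noisy copy of a normalized (0..1) intensity PSF."""
    photons = psf / psf.max() * full_well                    # scale intensity to expected photon counts
    shot = rng.poisson(photons).astype(float)                # shot (Poisson) noise
    read = shot + rng.normal(0.0, read_sigma, psf.shape)     # additive Gaussian read noise
    levels = 2 ** bits - 1
    quantized = np.round(np.clip(read, 0, full_well) / full_well * levels)  # ADC quantization
    return quantized / levels

psf_clean = np.exp(-((np.indices((256, 256)) - 128) ** 2).sum(0) / (2 * 4.0 ** 2))  # toy noise-free PSF
psf_noisy = add_camera_noise(psf_clean)
corr = np.corrcoef(psf_clean.ravel(), psf_noisy.ravel())[0, 1]  # one simple shared-information measure
print(f"correlation between noise-free and noisy PSF: {corr:.3f}")
```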
2. Huang Y, Krishnan G, Goswami S, Javidi B. Underwater optical signal detection system using diffuser-based lensless imaging. Optics Express 2024;32:1489-1500. PMID: 38297699. DOI: 10.1364/oe.512438.
Abstract
We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and a one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During acquisition, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion. The 1D diffuser-based lensless camera array captures the transmitted information, and the captured pseudorandom patterns are then classified by the 1DInImCNN to output the desired signal. We compared the proposed underwater lensless optical signal detection system with an equivalent lens-based underwater optical signal detection system in terms of detection performance and computational cost, and the results show that the former outperforms the latter. Moreover, we applied dimensionality reduction to the lensless patterns and studied their theoretical computational costs and detection performance. The results show that the detection performance of the lensless system does not suffer appreciably. This makes lensless systems a strong candidate for low-cost compressive underwater optical imaging and signal detection.
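The dimensionality-reduction step mentioned above can be as simple as block-averaging the captured encoded pattern before classification; the following sketch (illustrative sizes, not the authors' 1DInImCNN pipeline) shows the idea.

```python
# Minimal sketch, assuming a 1024 x 1024 encoded pattern and an 8x reduction factor.
import numpy as np

def block_average(pattern: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a 2D encoded pattern by averaging non-overlapping factor x factor blocks."""
    h, w = pattern.shape
    h2, w2 = h - h % factor, w - w % factor                  # crop to a multiple of the factor
    blocks = pattern[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
raw = rng.random((1024, 1024))                               # stand-in for one camera's captured pattern
small = block_average(raw, 8)
print(raw.size, "->", small.size)                            # 1048576 -> 16384 samples fed to the classifier
```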
3. Sardana J, Devinder S, Zhu W, Agrawal A, Joseph J. Dielectric Metasurface Enabled Compact, Single-Shot Digital Holography for Quantitative Phase Imaging. Nano Letters 2023. PMID: 38037916. DOI: 10.1021/acs.nanolett.3c03515.
Abstract
Quantitative phase imaging (QPI) enables nondestructive, real-time, label-free imaging of transparent specimens and can reveal information about their fundamental properties, such as cell size and morphology, mass density, particle dynamics, and cellular fluctuations. Development of high-performance, low-cost quantitative phase imaging systems is thus required in many fields, including on-site biomedical imaging and industrial inspection. Here, we propose an ultracompact, highly stable interferometer based on a single-layer dielectric metasurface for common-path off-axis digital holography and experimentally demonstrate quantitative phase imaging. The interferometric imaging system, leveraging an ultrathin multifunctional metasurface, captures image-plane holograms in a single shot and provides quantitative phase information on the test samples for extraction of their physical properties. With the benefits of planar engineering and high integrability, the proposed metasurface-based method establishes a stable, miniaturized QPI system for reliable and cost-effective point-of-care devices, live cell imaging, 3D topography, and edge detection for optical computing.
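For context, the sketch below shows the standard off-axis demodulation step that common-path off-axis QPI systems rely on: isolating the +1 order of an image-plane hologram in the Fourier domain and taking its argument. The carrier location, filter radius, and synthetic hologram are assumptions; this is not the authors' metasurface-specific pipeline.

```python
# Minimal sketch, assuming a synthetic off-axis hologram with a known fringe carrier.
import numpy as np

def phase_from_offaxis_hologram(holo, carrier, radius):
    """Recover wrapped phase from an off-axis image-plane hologram (2D array)."""
    spec = np.fft.fftshift(np.fft.fft2(holo))
    ky, kx = np.indices(holo.shape)
    cy, cx = np.array(holo.shape) // 2 + np.asarray(carrier)
    mask = (ky - cy) ** 2 + (kx - cx) ** 2 <= radius ** 2               # select the +1 order
    plus1 = np.roll(spec * mask, (-carrier[0], -carrier[1]), axis=(0, 1))  # re-center the carrier
    field = np.fft.ifft2(np.fft.ifftshift(plus1))
    return np.angle(field)                                               # wrapped phase; unwrap as needed

# synthetic test: a smooth 2 rad phase object on a tilted (carrier) fringe pattern
yy, xx = np.indices((256, 256))
obj_phase = 2.0 * np.exp(-((yy - 128) ** 2 + (xx - 128) ** 2) / (2 * 30.0 ** 2))
carrier = (0, 40)                                                        # carrier offset in FFT bins (assumed)
holo = 1 + np.cos(2 * np.pi * carrier[1] * xx / 256 + obj_phase)
rec = phase_from_offaxis_hologram(holo, carrier, radius=15)
print(f"recovered phase peak-to-valley: {rec.max() - rec.min():.2f} rad")  # close to the 2.0 rad object phase
```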
Affiliation(s)
- Jyoti Sardana: Department of Physics, Indian Institute of Technology Delhi, New Delhi 110016, India
- Shital Devinder: Centre for Sensors, Instrumentation and Cyber Physical System Engineering, Indian Institute of Technology Delhi, New Delhi 110016, India
- Wenqi Zhu: Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, United States
- Amit Agrawal: Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, United States
- Joby Joseph: Department of Physics; Centre for Sensors, Instrumentation and Cyber Physical System Engineering; and Optics and Photonics Center, Indian Institute of Technology Delhi, New Delhi 110016, India
4. Hu X, Abbasi R, Wachsmann-Hogiu S. Microfluidics on lensless, semiconductor optical image sensors: challenges and opportunities for democratization of biosensing at the micro- and nano-scale. Nanophotonics 2023;12:3977-4008. PMID: 39635640. PMCID: PMC11501743. DOI: 10.1515/nanoph-2023-0301.
Abstract
Optical image sensors are 2D arrays of pixels that integrate semiconductor photodiodes and field-effect transistors for efficient photon conversion and processing of the generated electrons. With technological advancements and the subsequent democratization of these sensors, opportunities for integration with microfluidic devices are currently being explored. The 2D pixel arrays of such optical image sensors can reach dimensions larger than one centimeter with a sub-micrometer pixel size, enabling high-spatial-resolution lensless imaging with a large field of view, a feat that cannot be achieved with lens-based optical microscopy. Moreover, with advancements in fabrication processes, the field of microfluidics has evolved to produce microfluidic devices with an overall size below one centimeter and individual components of sub-micrometer size, such that they can now be implemented directly on optical image sensors. The convergence of these fields is discussed in this article, where we review fundamental principles, opportunities, challenges, and the outlook for integration, with a focus on the contact-mode imaging configuration. The most recent developments and applications of microfluidic lensless contact-based imaging in the field of biosensors, in particular those with potential for point-of-need applications, are also discussed.
Affiliation(s)
- Xinyue Hu: Department of Bioengineering, McGill University, Montreal, QC H3A 0C3, Canada
- Reza Abbasi: Department of Bioengineering, McGill University, Montreal, QC H3A 0C3, Canada
5. Goswami S, Wani P, Gupta G, Javidi B. Assessment of lateral resolution of single random phase encoded lensless imaging systems. Optics Express 2023;31:11213-11226. PMID: 37155762. DOI: 10.1364/oe.480591.
Abstract
In this paper, we use the angular spectrum propagation method and numerical simulations of a single random phase encoding (SRPE) based lensless imaging system with the goal of quantifying the spatial resolution of the system and assessing its dependence on the physical parameters of the system. Our compact SRPE imaging system consists of a laser diode that illuminates a sample placed on a microscope glass slide, a diffuser that spatially modulates the optical field transmitted through the input object, and an image sensor that captures the intensity of the modulated field. We considered two point-source apertures as the input object and analyzed the propagated optical field captured by the image sensor. The output intensity pattern acquired at each lateral separation of the input point sources was correlated against the pattern captured for fully overlapping point sources. The lateral resolution of the system was then obtained as the lateral separation at which this correlation falls below a threshold of 35%, a value chosen in accordance with the Abbe diffraction limit of an equivalent lens-based system. A direct comparison between the SRPE lensless imaging system and an equivalent lens-based imaging system with similar system parameters shows that, despite being lensless, the SRPE system does not suffer in terms of lateral resolution compared with lens-based imaging systems. We also investigated how this resolution is affected as the parameters of the lensless imaging system are varied. The results show that the SRPE lensless imaging system is robust to the object-to-diffuser and diffuser-to-sensor distances, the pixel size of the image sensor, and the number of pixels of the image sensor. To the best of our knowledge, this is the first work to investigate a lensless imaging system's lateral resolution, its robustness to multiple physical parameters of the system, and its comparison to lens-based imaging systems.
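The correlation-based resolution test described above can be sketched as follows; the grid size, wavelength, pitch, distances, and ideal thin random-phase diffuser are assumptions rather than the authors' simulation settings.

```python
# Minimal sketch, assuming illustrative geometry and an ideal thin random-phase screen.
import numpy as np

N, pitch, wavelength = 512, 2e-6, 532e-9          # grid size, sample pitch (m), wavelength (m)
z1, z2 = 5e-3, 10e-3                              # object-to-diffuser and diffuser-to-sensor distances (m)
fx = np.fft.fftfreq(N, pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength ** 2 - FX ** 2 - FY ** 2))

def asm_propagate(field, z):
    """Angular spectrum propagation of a sampled complex field by distance z."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

rng = np.random.default_rng(1)
diffuser = np.exp(1j * 2 * np.pi * rng.random((N, N)))       # ideal thin random-phase screen

def sensor_pattern(separation_px):
    src = np.zeros((N, N), complex)
    src[N // 2, N // 2 - separation_px // 2] = 1.0            # two point sources on the object plane
    src[N // 2, N // 2 + (separation_px + 1) // 2] = 1.0
    return np.abs(asm_propagate(asm_propagate(src, z1) * diffuser, z2)) ** 2

ref = sensor_pattern(0).ravel()                               # overlapping point sources as reference
for sep in (1, 2, 4, 8, 16):
    corr = np.corrcoef(ref, sensor_pattern(sep).ravel())[0, 1]
    print(f"separation {sep:2d} px: correlation {corr:.2f}")  # resolution ~ first separation with corr < 0.35
```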
6. Galdón L, Garcia-Sucerquia J, Saavedra G, Martínez-Corral M, Sánchez-Ortiga E. Resolution limit in opto-digital systems revisited. Optics Express 2023;31:2000-2012. PMID: 36785223. DOI: 10.1364/oe.479458.
Abstract
The resolution limit achievable with an optical system is a fundamental piece of information when characterizing its performance, particularly in microscopy imaging. Usually this information is given in the form of a distance, often expressed in microns, or in the form of a cutoff spatial frequency, often expressed in line pairs per mm. In modern imaging systems, where the final image is collected by a pixelated digital camera, the resolution limit is determined by the performance of both the optical system and the digital sensor. Usually, one of these factors is considered to dominate the other when estimating the spatial resolution, so the global performance of the imaging system is taken to be ruled either by the classical Abbe resolution limit, based on physical diffraction, or by the Nyquist resolution limit, based on the digital sensor features. This estimation fails significantly to predict the global performance of opto-digital imaging systems, such as 3D microscopes, in which neither factor is negligible. In that case, which is in fact the most common one, neither the Abbe formula nor the Nyquist formula by itself provides a reliable prediction of the resolution limit. This is a serious drawback, since system designers often use those formulae as design input parameters. To overcome this limitation, we propose a simple mathematical expression, obtained by articulating the Abbe and Nyquist formulas, that readily predicts the spatial resolution limit of opto-digital imaging systems. The derived expression is tested experimentally and is shown to be valid over a broad range of opto-digital combinations.
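For reference, the two limits the paper articulates are the Abbe diffraction limit and the object-referred Nyquist sampling limit; the snippet below evaluates both for an assumed microscope configuration (the paper's combined expression is not reproduced here).

```python
# Minimal sketch, assuming typical microscope parameters.
wavelength = 0.52      # micrometers
NA = 0.75              # numerical aperture of the objective
pixel_pitch = 3.45     # micrometers, on the sensor
magnification = 20.0

d_abbe = wavelength / (2 * NA)                       # optical (diffraction) limit
d_nyquist = 2 * pixel_pitch / magnification          # sampling limit referred to object space

print(f"Abbe limit:    {d_abbe:.3f} um")
print(f"Nyquist limit: {d_nyquist:.3f} um")
# Taking max(d_abbe, d_nyquist) is only a naive worst-case estimate; the paper derives a
# finer expression for the regime in which neither term is negligible.
print(f"naive combined estimate: {max(d_abbe, d_nyquist):.3f} um")
```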
7. Zhang Y, Wu Z, Lin P, Pan Y, Wu Y, Zhang L, Huangfu J. Hand gestures recognition in videos taken with a lensless camera. Optics Express 2022;30:39520-39533. PMID: 36298902. DOI: 10.1364/oe.470324.
Abstract
A lensless camera is an imaging system that uses a mask in place of a lens, making it thinner, lighter, and less expensive than a lensed camera. However, additional complex computation and time are required for image reconstruction. This work proposes a deep learning model named Raw3dNet that recognizes hand gestures directly on raw videos captured by a lensless camera, without the need for image restoration. In addition to conserving computational resources, the reconstruction-free method provides privacy protection. Raw3dNet is a novel end-to-end deep neural network model for the recognition of hand gestures in lensless imaging systems. It is created specifically for raw video captured by a lensless camera and properly extracts and combines temporal and spatial features. The network is composed of two stages: (1) a spatial feature extractor (SFE), which enhances the spatial features of each frame prior to temporal convolution, and (2) a 3D-ResNet, which implements spatial and temporal convolution of the video stream. The proposed model achieves 98.59% accuracy on the Cambridge Hand Gesture dataset in the lensless optical experiment, which is comparable to the lensed-camera result. Additionally, the feasibility of physical object recognition is assessed. Further, we show that recognition can be achieved with respectable accuracy using only a tiny portion of the original raw data, indicating the potential for reducing data traffic in cloud computing scenarios.
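A minimal PyTorch sketch of the two-stage idea (a per-frame spatial feature extractor followed by 3D spatio-temporal convolutions on raw lensless video) is shown below; the layer sizes and shapes are assumptions, and this stand-in is not Raw3dNet itself.

```python
# Minimal sketch, assuming single-channel raw video clips of 16 frames at 64 x 64.
import torch
import torch.nn as nn

class TwoStageRawVideoNet(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        self.sfe = nn.Sequential(                      # stage 1: per-frame spatial features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.temporal = nn.Sequential(                 # stage 2: spatio-temporal convolution
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, video):                          # video: (batch, 1, frames, H, W)
        b, c, t, h, w = video.shape
        frames = video.transpose(1, 2).reshape(b * t, c, h, w)
        feats = self.sfe(frames)                       # (b*t, 32, h/4, w/4)
        feats = feats.reshape(b, t, 32, h // 4, w // 4).transpose(1, 2)  # back to (b, 32, t, h/4, w/4)
        return self.head(self.temporal(feats).flatten(1))

logits = TwoStageRawVideoNet()(torch.randn(2, 1, 16, 64, 64))   # two clips of 16 raw frames
print(logits.shape)                                             # torch.Size([2, 9])
```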
8. Douglass PM, O'Connor T, Javidi B. Automated sickle cell disease identification in human red blood cells using a lensless single random phase encoding biosensor and convolutional neural networks. Optics Express 2022;30:35965-35977. PMID: 36258535. DOI: 10.1364/oe.469199.
Abstract
We present a compact, field-portable, lensless, single random phase encoding biosensor for automated classification between healthy and sickle cell disease human red blood cells. Microscope slides containing 3 µl wet mounts of whole blood samples from healthy and sickle cell disease afflicted human donors are input into a lensless single random phase encoding (SRPE) system for disease identification. A partially coherent laser source (laser diode) illuminates the cells under inspection, the object complex amplitude propagates to and is pseudorandomly encoded by a diffuser, and the intensity of the diffracted complex waveform is captured by a CMOS image sensor. The recorded opto-biological signatures are transformed using local binary pattern map generation during preprocessing and then input into a pretrained convolutional neural network for classification between healthy and disease states. We further provide an analysis comparing the performance of several neural network architectures to optimize our classification strategy. Additionally, we assess the performance and computational savings of classifying on subsets of the opto-biological signatures with substantially reduced dimensionality, including one-dimensional cropping of the recorded signatures. To the best of our knowledge, this is the first report of a lensless SRPE biosensor for human disease identification. As such, the presented approach and results can be significant for low-cost disease identification both in the field and for healthcare systems in developing countries that suffer from constrained resources.
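The preprocessing-plus-classification pipeline described above can be sketched as follows; the LBP radius and neighbor count, the ResNet-18 backbone, and the torchvision weights identifier are assumptions, not the authors' exact settings.

```python
# Minimal sketch, assuming a 512 x 512 captured signature and an ImageNet-pretrained backbone.
import numpy as np
import torch
from skimage.feature import local_binary_pattern
from torchvision.models import resnet18

signature = np.random.default_rng(0).random((512, 512))        # stand-in for a captured SRPE signature
P, R = 8, 1                                                     # LBP neighbors and radius (assumed)
lbp = local_binary_pattern(signature, P, R, method="uniform")   # LBP map, values in [0, P + 1]
lbp = (lbp / lbp.max()).astype(np.float32)

x = torch.from_numpy(lbp)[None, None].repeat(1, 3, 1, 1)        # replicate to 3 channels for the backbone
model = resnet18(weights="IMAGENET1K_V1")                       # pretrained feature extractor (assumed choice)
model.fc = torch.nn.Linear(model.fc.in_features, 2)             # healthy vs. sickle cell disease
with torch.no_grad():
    print(model(x).shape)                                       # torch.Size([1, 2])
```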
9. Automatic Cancer Cell Taxonomy Using an Ensemble of Deep Neural Networks. Cancers (Basel) 2022;14:2224. PMID: 35565352. PMCID: PMC9100154. DOI: 10.3390/cancers14092224.
Abstract
Microscopic image-based analysis has been performed intensively for pathological studies and the diagnosis of diseases. However, mis-authentication of cell lines due to misjudgments by pathologists has been recognized as a serious problem. To address this problem, we propose a deep-learning-based approach for the automatic taxonomy of cancer cell types. A total of 889 bright-field microscopic images of four cancer cell lines were acquired using a benchtop microscope. Individual cells were further segmented and augmented to increase the image dataset. Afterward, deep transfer learning was adopted to accelerate the classification of cancer types. Experiments revealed that the deep-learning-based methods outperformed traditional machine-learning-based methods. Moreover, the Wilcoxon signed-rank test showed that deep ensemble approaches outperformed individual deep-learning-based models (p < 0.001), achieving a classification accuracy of up to 97.735%. Additional investigation with the Wilcoxon signed-rank test was conducted to consider various network design choices, such as the type of optimizer, the type of learning rate scheduler, the degree of fine-tuning, and the use of data augmentation. Finally, it was found that using data augmentation and updating all the weights of a network during fine-tuning improve the overall performance of individual convolutional neural network models.
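The ensemble-versus-single-model comparison described above rests on a paired Wilcoxon signed-rank test; a minimal sketch with made-up per-fold accuracies (and a simple soft-voting ensemble) is shown below.

```python
# Minimal sketch, assuming illustrative paired accuracies; not the study's actual results.
import numpy as np
from scipy.stats import wilcoxon

single_model = np.array([0.952, 0.948, 0.957, 0.950, 0.945, 0.955, 0.949, 0.951])  # per-fold accuracies
ensemble     = np.array([0.971, 0.968, 0.975, 0.972, 0.966, 0.974, 0.969, 0.973])

stat, p = wilcoxon(ensemble, single_model)           # paired, non-parametric test
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")   # a small p indicates a systematic ensemble gain

def soft_vote(prob_list):
    """Average class probabilities of several models (soft voting) and pick the top class."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

probs = [np.random.default_rng(i).dirichlet(np.ones(4), size=5) for i in range(3)]  # 3 models, 5 cells, 4 classes
print(soft_vote(probs))
```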
10. Pan X, Chen X, Nakamura T, Yamaguchi M. Incoherent reconstruction-free object recognition with mask-based lensless optics and the Transformer. Optics Express 2021;29:37962-37978. PMID: 34808858. DOI: 10.1364/oe.443181.
Abstract
A mask-based lensless camera adopts a thin mask to optically encode the scene and records the encoded pattern on an image sensor. The lensless camera can be thinner, lighter, and cheaper than a lensed camera, but additional computation is required to reconstruct an image from the encoded pattern. Considering that a significant application of the lensless camera could be inference, we propose to perform object recognition directly on the encoded pattern. Avoiding image reconstruction not only saves computational resources but also averts errors and artifacts introduced by reconstruction. We theoretically analyze the multiplexing property of mask-based lensless optics, which maps local information in the scene to overlapping global information in the encoded pattern. To better extract global features, we propose a simplified Transformer-based architecture. This is the first study of a Transformer-based architecture for encoded-pattern recognition in mask-based lensless optics. In the optical experiment, the proposed system achieves 91.47% accuracy on Fashion MNIST and 96.64% ROC AUC on the cats-vs-dogs dataset. The feasibility of physical object recognition is also evaluated.
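A small sketch of the patch-embedding-plus-Transformer-encoder idea is given below; the patch size, embedding dimension, and depth are assumptions, not the authors' simplified architecture.

```python
# Minimal sketch, assuming a 128 x 128 single-channel encoded pattern and 10 classes.
import torch
import torch.nn as nn

class PatternTransformer(nn.Module):
    def __init__(self, img=128, patch=16, dim=128, heads=4, layers=4, num_classes=10):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)    # patchify + linear embedding
        n_tokens = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))             # learned positional encoding
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)     # global self-attention over patches
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                       # x: (batch, 1, img, img) encoded pattern
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos       # (batch, n_tokens, dim)
        return self.head(self.encoder(tokens).mean(dim=1))                 # mean-pool tokens, then classify

logits = PatternTransformer()(torch.randn(2, 1, 128, 128))
print(logits.shape)                             # torch.Size([2, 10])
```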
11. Javidi B, Carnicer A, Anand A, Barbastathis G, Chen W, Ferraro P, Goodman JW, Horisaki R, Khare K, Kujawinska M, Leitgeb RA, Marquet P, Nomura T, Ozcan A, Park Y, Pedrini G, Picart P, Rosen J, Saavedra G, Shaked NT, Stern A, Tajahuerce E, Tian L, Wetzstein G, Yamaguchi M. Roadmap on digital holography [Invited]. Optics Express 2021;29:35078-35118. PMID: 34808951. DOI: 10.1364/oe.435915.
Abstract
This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section presents its author's vision of the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.
12. Pan X, Nakamura T, Chen X, Yamaguchi M. Lensless inference camera: incoherent object recognition through a thin mask with LBP map generation. Optics Express 2021;29:9758-9771. PMID: 33820129. DOI: 10.1364/oe.416613.
Abstract
We propose a preliminary lensless inference camera (LLI camera) specialized for object recognition. The LLI camera performs computationally efficient data preprocessing on the pattern optically encoded by the mask, rather than performing computationally expensive image reconstruction before inference; it therefore avoids expensive computation and achieves real-time inference. This work proposes a new data preprocessing approach, named local binary pattern map generation, dedicated to the optically encoded pattern behind the mask. This preprocessing greatly improves the encoded pattern's robustness to local disturbances in the scene, making practical application of the LLI camera possible. The performance of the LLI camera is analyzed through optical experiments on handwritten digit recognition and gender estimation under conditions with changing illumination and a moving target.
13. Javidi B, Hua H, Bimber O, Huang YP. Focus issue introduction: 3D image acquisition and display: technology, perception, and applications. Optics Express 2021;29:342-345. PMID: 33362118. DOI: 10.1364/oe.417575.
Abstract
This feature issue of Optics Express is organized in conjunction with the 2020 OSA conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held virtually in Vancouver from 22 to 26 June 2020 as part of the Imaging and Sensing Congress 2020. The feature issue presents 29 articles based on the topics and scope of the 3D conference. This review provides a summary of these articles.