1. Zhang P, Liu H, Ge Z, Wang C, Lam EY. Neuromorphic Imaging With Joint Image Deblurring and Event Denoising. IEEE Trans Image Process 2024; 33:2318-2333. PMID: 38470586. DOI: 10.1109/tip.2024.3374074.
Abstract
Neuromorphic imaging reacts to per-pixel brightness changes of a dynamic scene with high temporal precision and responds with asynchronous streaming events; it also often supports a simultaneous output of an intensity image. However, the raw events typically contain a large amount of noise due to the high sensitivity of the sensor, while capturing fast-moving objects at low frame rates results in blurry images. These deficiencies significantly degrade human observation and machine processing. Fortunately, the two information sources are inherently complementary: events with microsecond-level temporal resolution, triggered by the edges of objects recorded in a latent sharp image, can supply rich motion details missing from the blurry one. In this work, we bring the two types of data together and introduce a simple yet effective unifying algorithm to jointly reconstruct blur-free images and noise-robust events in an iterative coarse-to-fine fashion. Specifically, an event-regularized prior offers precise high-frequency structures and dynamic features for blind deblurring, while image gradients serve as faithful supervision in regulating neuromorphic noise removal. Comprehensively evaluated on real and synthetic samples, this synergy delivers superior reconstruction quality for both images with severe motion blur and raw event streams with heavy noise, and also exhibits greater robustness to challenging realistic scenarios such as varying levels of illumination, contrast, and motion magnitude. Meanwhile, it can be driven by far fewer events and holds a competitive edge in computational overhead, making it preferable when available computing resources are limited. Our solution improves both types of sensing data and paves the way for highly accurate neuromorphic reasoning and analysis.
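The deblurring-denoising synergy above rests on the standard event-camera sensing model: a pixel fires an event whenever its log-intensity drifts from a reference level by more than a contrast threshold. A minimal illustrative sketch of that model (not the authors' algorithm; the 1-D frame layout and all names here are our own):

```python
def generate_events(log_frames, times, threshold=0.2):
    """Ideal event-sensor model: emit (time, pixel, polarity) whenever the
    log-intensity at a pixel moves `threshold` away from its reference
    level; the reference then steps toward the new value."""
    events = []
    ref = list(log_frames[0])  # per-pixel reference log-intensity
    for t, frame in zip(times[1:], log_frames[1:]):
        for x, val in enumerate(frame):
            while val - ref[x] >= threshold:   # brightness increased
                ref[x] += threshold
                events.append((t, x, +1))
            while ref[x] - val >= threshold:   # brightness decreased
                ref[x] -= threshold
                events.append((t, x, -1))
    return events
```

A real sensor timestamps events asynchronously with microsecond resolution; discretizing at frame instants is a simplification that keeps the sketch short.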
2. Zhang Y, Liu X, Lam EY. Single-shot inline holography using a physics-aware diffusion model. Opt Express 2024; 32:10444-10460. PMID: 38571256. DOI: 10.1364/oe.517233.
Abstract
Among holographic imaging configurations, inline holography excels in its compact design and portability, making it the preferred choice for on-site or field applications with unique imaging requirements. However, effective holographic reconstruction from a single-shot measurement remains a challenge. While several approaches have been proposed, our novel unsupervised algorithm, the physics-aware diffusion model for digital holographic reconstruction (PadDH), offers distinct advantages. By seamlessly integrating physical information with a pre-trained diffusion model, PadDH overcomes the need for a holographic training dataset and significantly reduces the number of parameters involved. Through comprehensive experiments on both synthetic and experimental data, we validate the capabilities of PadDH in reducing twin-image contamination and generating high-quality reconstructions. Our work represents a significant advancement in unsupervised holographic imaging by harnessing the full potential of the pre-trained diffusion prior.
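The "physical information" in a pipeline of this kind is the free-space propagation operator of inline holography. A generic sketch of that forward model using the angular spectrum method (the function name and sampling choices are our own illustrative assumptions, not details from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex field by distance dz with the angular spectrum
    method, the standard forward model behind inline holography."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)           # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent waves
    H = np.exp(1j * kz * dz)               # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A hologram is then modeled as the intensity of the propagated object field, so a learned prior only has to account for what the physics cannot explain.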
3. Li Y, Zhu Y, Huang J, Ho YW, Fang JKH, Lam EY. High-throughput microplastic assessment using polarization holographic imaging. Sci Rep 2024; 14:2355. PMID: 38287056. PMCID: PMC10824714. DOI: 10.1038/s41598-024-52762-5.
Abstract
Microplastic (MP) pollution has emerged as a global environmental concern due to its ubiquity and harmful impacts on ecosystems and human health. MP assessment has therefore become increasingly necessary and common in environmental and experimental samples. Microscopy and spectroscopy are widely employed for the physical and chemical characterization of MPs. However, these analytical methods often require time-consuming pretreatments of samples or expensive instrumentation. In this work, we develop a portable and cost-effective polarization holographic imaging system that prominently incorporates deep learning techniques, enabling efficient, high-throughput detection and dynamic analysis of MPs in aqueous environments. The integration enhances the identification and classification of MPs, eliminating the need for extensive sample preparation. The system simultaneously captures holographic interference patterns and polarization states, allowing for multimodal information acquisition to facilitate rapid MP detection. The characteristics of light waves are registered, and birefringence features are leveraged to classify the material composition and structures of MPs. Furthermore, the system automates real-time counting and morphological measurements of various materials, including MP sheets and additional natural substances. This innovative approach significantly improves the dynamic monitoring of MPs and provides valuable information for their effective filtration and management.
Affiliations
- Yuxing Li: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Yanmin Zhu: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jianqing Huang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Key Lab of Education Ministry for Power Machinery and Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, 200240, China
- Yuen-Wa Ho: Department of Food Science and Nutrition, The Hong Kong Polytechnic University, Hung Hom, Hong Kong SAR, China
- James Kar-Hei Fang: Department of Food Science and Nutrition, The Hong Kong Polytechnic University, Hung Hom, Hong Kong SAR, China; State Key Laboratory of Marine Pollution, City University of Hong Kong, Kowloon Tong, Hong Kong SAR, China
- Edmund Y. Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
4. Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. PMID: 38161203. PMCID: PMC10758000. DOI: 10.1038/s41377-023-01340-x.
Abstract
Phase recovery (PR) refers to calculating the phase of a light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and for correcting the aberrations of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
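Among the conventional PR methods such a review covers, the classic error-reduction (Gerchberg-Saxton/Fienup) iteration is the usual baseline: alternate between enforcing the measured Fourier magnitude and object-domain constraints. A minimal sketch, assuming a real, non-negative object with known support (all names are our own):

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Alternating-projection phase recovery: keep the measured Fourier
    magnitude, then re-impose non-negativity and the object support."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support      # random initial guess
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # Fourier-magnitude constraint
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, 0, None) * support          # object-domain constraints
    return x
```

Stagnation and sensitivity to the initial guess are the known weaknesses of this baseline, which is precisely where the DL-assisted stages surveyed in the review come in.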
Affiliations
- Kaiqiang Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China; Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Li Song: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y. Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
5. Zhang S, Meng N, Lam EY. LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images. IEEE Trans Image Process 2023; PP:1-1. PMID: 37490378. DOI: 10.1109/tip.2023.3297412.
Abstract
Light field (LF) images containing information from multiple views have numerous applications, which can be severely affected by low-light imaging. Recent learning-based methods for low-light enhancement have some disadvantages, such as a lack of noise suppression, complex training processes, and poor performance in extremely low-light conditions. To tackle these deficiencies while fully utilizing the multi-view information, we propose an efficient Low-light Restoration Transformer (LRT) for LF images, with multiple heads to perform intermediate tasks within a single network, including denoising, luminance adjustment, refinement, and detail enhancement, achieving progressive restoration from small scale to full scale. Moreover, we design an angular transformer block with an efficient view-token scheme to model the global angular dependencies, and a multi-scale spatial transformer block to encode the multi-scale local and global information within each view. To address the issue of insufficient training data, we formulate a synthesis pipeline that simulates the major noise sources with the estimated noise parameters of the LF camera. Experimental results demonstrate that our method achieves state-of-the-art performance on low-light LF restoration with high efficiency.
6. Guo X, Li Y, Qian J, Che Y, Zuo C, Chen Q, Lam EY, Wang H, Feng S. Unifying temporal phase unwrapping framework using deep learning. Opt Express 2023; 31:16659-16675. PMID: 37157741. DOI: 10.1364/oe.488597.
Abstract
Temporal phase unwrapping (TPU) is significant for recovering an unambiguous phase of discontinuous surfaces or spatially isolated objects in fringe projection profilometry. Generally, TPU algorithms can be classified into three groups: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretic approach. All of them require extra fringe patterns of different spatial frequencies to retrieve the absolute phase. Because of image noise, many auxiliary patterns are needed for high-accuracy phase unwrapping; consequently, image noise greatly limits the efficiency and the measurement speed. Further, these three groups of TPU algorithms have their own theories and are usually applied in different ways. In this work, for the first time to our knowledge, we show that a generalized framework using deep learning can be developed to perform the TPU task for the different groups of TPU algorithms. Experimental results show that, benefiting from the assistance of deep learning, the proposed framework can mitigate the impact of noise effectively and enhance the phase unwrapping reliability significantly without increasing the number of auxiliary patterns for the different TPU approaches. We believe the proposed method demonstrates great potential for developing powerful and reliable phase retrieval techniques.
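For intuition, the multi-frequency (hierarchical) approach in the taxonomy above reduces, in its two-frequency form, to using the unambiguous low-frequency phase to pick the fringe order of the wrapped high-frequency phase. A sketch under ideal noise-free assumptions (names are ours):

```python
import math

def temporal_unwrap(phi_high, phi_low, freq_ratio):
    """Two-frequency temporal phase unwrapping: phi_low is the unambiguous
    unit-frequency phase, phi_high the wrapped high-frequency phase, and
    freq_ratio the ratio of the two fringe frequencies. The fringe order k
    is the nearest integer making the two measurements consistent."""
    k = round((freq_ratio * phi_low - phi_high) / (2 * math.pi))
    return phi_high + 2 * math.pi * k
```

Noise in phi_low large enough to shift k by one produces a 2-pi jump error, which is exactly why extra auxiliary patterns, or the learned framework proposed here, are needed in practice.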
7. Zhang Z, Lee KCM, Siu DMD, Lo MCK, Lai QTK, Lam EY, Tsia KK. Morphological profiling by high-throughput single-cell biophysical fractometry. Commun Biol 2023; 6:449. PMID: 37095203. PMCID: PMC10126163. DOI: 10.1038/s42003-023-04839-6.
Abstract
Complex and irregular cell architecture is known to statistically exhibit fractal geometry, i.e., a pattern that resembles a smaller part of itself. Although fractal variations in cells are closely associated with disease-related phenotypes that are otherwise obscured in standard cell-based assays, fractal analysis with single-cell precision remains largely unexplored. To close this gap, we develop an image-based approach that quantifies a multitude of single-cell biophysical fractal-related properties at subcellular resolution. Together with its high-throughput single-cell imaging performance (~10,000 cells/sec), this technique, termed single-cell biophysical fractometry, offers sufficient statistical power for delineating cellular heterogeneity, in the context of lung-cancer cell subtype classification, drug response assays, and cell-cycle progression tracking. Further correlative fractal analysis shows that single-cell biophysical fractometry can enrich the standard morphological profiling depth and spearhead systematic fractal analysis of how cell morphology encodes cellular health and pathological conditions.
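A representative fractal-related property of the kind this technique quantifies is the box-counting dimension: cover the pixel set with boxes of shrinking size s and fit the slope of log N(s) against log(1/s). A minimal 2-D sketch (our illustration only; the paper's subcellular biophysical features are richer):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a 2D pixel set: count
    occupied boxes N(s) at each scale s, then least-squares fit the
    slope of log N versus log(1/s)."""
    logs = []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(u for u, _ in logs) / n
    my = sum(v for _, v in logs) / n
    slope = sum((u - mx) * (v - my) for u, v in logs) / \
            sum((u - mx) ** 2 for u, _ in logs)
    return slope
```

Sanity checks: a straight line of pixels yields a dimension of about 1, a filled square about 2; irregular cell boundaries fall in between.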
Affiliations
- Ziqi Zhang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Kelvin C. M. Lee: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Dickson M. D. Siu: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Michelle C. K. Lo: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Queenie T. K. Lai: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Edmund Y. Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
- Kevin K. Tsia: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong; Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
8. Song L, Lam EY. Phase retrieval with a dual recursive scheme. Opt Express 2023; 31:10386-10400. PMID: 37157586. DOI: 10.1364/oe.484649.
Abstract
Since optical sensors cannot detect the phase information of a light wave, recovering the missing phase from intensity measurements, called phase retrieval (PR), is a natural and important problem in many imaging applications. In this paper, we propose a learning-based recursive dual alternating direction method of multipliers, called RD-ADMM, for phase retrieval. This method tackles the PR problem by solving the primal and dual problems separately. We design a dual structure to take advantage of the information embedded in the dual problem that can help with solving the PR problem, and we show that it is feasible to use the same operator for both the primal and dual problems for regularization. To demonstrate the efficiency of this scheme, we propose a learning-based coded holographic coherent diffractive imaging system to generate the reference pattern automatically according to the intensity information of the latent complex-valued wavefront. Experiments on different kinds of images with high noise levels indicate that our method is effective and robust, and can provide higher-quality results than other commonly used PR methods for this setup.
9. Mai TTN, Lam EY, Lee C. Deep Unrolled Low-Rank Tensor Completion for High Dynamic Range Imaging. IEEE Trans Image Process 2022; 31:5774-5787. PMID: 36048976. DOI: 10.1109/tip.2022.3201708.
Abstract
The major challenge in high dynamic range (HDR) imaging for dynamic scenes is suppressing ghosting artifacts caused by large object motions or poor exposures. Whereas recent deep learning-based approaches have shown significant synthesis performance, interpreting and analyzing their behavior is difficult, and their performance is affected by the diversity of the training data. In contrast, traditional model-based approaches yield inferior synthesis performance to learning-based algorithms despite their theoretical thoroughness. In this paper, we propose an algorithm unrolling approach to ghost-free HDR image synthesis that unrolls an iterative low-rank tensor completion algorithm into deep neural networks, taking advantage of the merits of both learning- and model-based approaches while overcoming their weaknesses. First, we formulate ghost-free HDR image synthesis as a low-rank tensor completion problem by assuming the low-rank structure of the tensor constructed from low dynamic range (LDR) images and linear dependency among the LDR images. We also define two regularization functions to compensate for modeling inaccuracy by extracting hidden model information. Then, we solve the problem efficiently using an iterative optimization algorithm by reformulating it into a series of subproblems. Finally, we unroll the iterative algorithm into a series of blocks corresponding to each iteration, in which the optimization variables are updated by rigorous closed-form solutions and the regularizers are updated by learned deep neural networks. Experimental results on different datasets show that the proposed algorithm provides better HDR image synthesis performance with superior robustness compared with state-of-the-art algorithms, while using significantly fewer training samples.
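The workhorse update inside iterative low-rank completion algorithms of this kind, and hence a natural candidate for each unrolled block, is the proximal operator of the nuclear norm, i.e., singular value thresholding. A minimal matrix-version sketch (our illustration; the paper operates on tensors with learned regularizers):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink every singular value of M by
    tau and drop the rest, pushing M toward low rank. This is the
    proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return (U * s) @ Vt
```

Applied to the tensor stacked from LDR exposures, this shrinkage is what enforces the assumed linear dependency among them at each iteration.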
10. Song L, Lam EY. Iterative phase retrieval with a sensor mask. Opt Express 2022; 30:25788-25802. PMID: 36237101. DOI: 10.1364/oe.461367.
Abstract
As an important inverse imaging problem in diffraction optics, Fourier phase retrieval aims at estimating the latent image of the target object from only the magnitude of its Fourier measurement. Although in real applications alternating methods are widely used for Fourier phase retrieval, given the constraints in the object and Fourier domains, they need many initial guesses and iterations to achieve reasonable results. In this paper, we show that a proper sensor mask directly attached to the Fourier magnitude can improve the efficiency of iterative phase retrieval algorithms, such as the alternating direction method of multipliers (ADMM). Furthermore, we use a learning-based method to determine the sensor mask according to the Fourier measurement, and unrolled ADMM is used for phase retrieval. Numerical results show that our method outperforms other existing methods for the Fourier phase retrieval problem.
11.
Abstract
Inverse imaging covers a wide range of imaging applications, including super-resolution, deblurring, and compressive sensing. We propose a novel scheme to solve such problems by combining duality and the alternating direction method of multipliers (ADMM). In addition to a conventional ADMM process, we introduce a second one that solves the dual problem to find an estimated nontrivial lower bound of the objective function, and the related iteration results are used in turn to guide the primal iterations. We call this D-ADMM, and show that it converges to the global minimum when the regularization function is convex and the optimization problem has at least one optimizer. Furthermore, we show how the scheme gives rise to two specific algorithms, called D-ADMM-L2 and D-ADMM-TV, through different regularization functions. We compare D-ADMM-TV with other methods on image super-resolution and demonstrate comparable or occasionally slightly better quality results. This paves the way for incorporating advanced operators and strategies designed for basic ADMM into the D-ADMM method as well, to further improve its performance.
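For reference, the conventional ADMM process that such a scheme builds on looks as follows for the toy problem min_x 0.5*||x-b||^2 + lam*||x||_1, whose known closed-form solution is elementwise soft-thresholding of b. This is a generic sketch of plain ADMM, not the proposed dual-guided variant (all names are ours):

```python
def soft_threshold(v, t):
    """Proximal operator of t*|v|."""
    return max(v - t, 0.0) + min(v + t, 0.0)

def admm_l1_denoise(b, lam, rho=1.0, n_iter=100):
    """Plain ADMM for min_x 0.5*||x - b||^2 + lam*||x||_1, split as
    f(x) + g(z) with the constraint x = z."""
    x = list(b); z = list(b); u = [0.0] * len(b)   # u: scaled dual variable
    for _ in range(n_iter):
        x = [(bi + rho * (zi - ui)) / (1 + rho)    # quadratic (f) step
             for bi, zi, ui in zip(b, z, u)]
        z = [soft_threshold(xi + ui, lam / rho)    # shrinkage (g) step
             for xi, ui in zip(x, u)]
        u = [ui + xi - zi for xi, zi, ui in zip(x, z, u)]  # dual update
    return z
```

In the abstract's terms, a second iteration of this form is run on the dual problem, and its lower-bound estimates guide these primal updates.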
12. Zhou Z, Tam VWL, Lam EY. A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework. Micromachines 2022; 13:333. PMID: 35208457. PMCID: PMC8877205. DOI: 10.3390/mi13020333.
Abstract
Continuous sign language recognition (CSLR) using different types of sensors to precisely recognize sign language in real time is a challenging but important research direction in sensor technology. Many previous methods are vision-based, with computationally intensive algorithms that process a large number of image/video frames possibly contaminated with noise, which can result in a large translation delay. On the other hand, gesture-based CSLR relying on hand movement data captured by wearable devices may require fewer computation resources and less translation time; it is thus more efficient for providing instant translation during real-world communication. However, the limited amount of information provided by wearable sensors often affects the overall performance of such systems. To tackle this issue, we propose a bidirectional long short-term memory (BLSTM)-based multi-feature framework for conducting gesture-based CSLR precisely with two smart watches. In this framework, multiple sets of input features are extracted from the collected gesture data to provide a diverse spectrum of valuable information to the underlying BLSTM model for CSLR. To demonstrate the effectiveness of the proposed framework, we test it on an extremely challenging and radically new dataset of Hong Kong sign language (HKSL), in which hand movement data are collected from 6 individual signers for 50 different sentences. The experimental results reveal that the proposed framework attains a much lower word error rate compared with other existing machine learning or deep learning approaches for gesture-based CSLR. Based on this framework, we further propose a portable sign language collection and translation platform, which can simplify the procedure of collecting gesture-based sign language datasets and recognize sign language through smart watch data in real time, in order to break the communication barrier for sign language users.
13. Zhang Y, Zhu Y, Lam EY. Holographic 3D particle reconstruction using a one-stage network. Appl Opt 2022; 61:B111-B120. PMID: 35201132. DOI: 10.1364/ao.444856.
Abstract
Volumetric reconstruction of a three-dimensional (3D) particle field with high resolution and low latency is an ambitious and valuable task. As a compact and high-throughput imaging system, digital holography (DH) encodes the 3D information of a particle volume into a two-dimensional (2D) interference pattern. In this work, we propose a one-stage network (OSNet) for 3D particle volumetric reconstruction. Specifically, by a single feed-forward process, OSNet can retrieve the 3D coordinates of the particles directly from the holograms without high-fidelity image reconstruction at each depth slice. Evaluation results from both synthetic and experimental data confirm the feasibility and robustness of our method under different particle concentrations and noise levels in terms of detection rate and position accuracy, with improved processing speed. The additional applications of 3D particle tracking are also investigated, facilitating the analysis of the dynamic displacements and motions for micro-objects or cells. It can be further extended to various types of computational imaging problems sharing similar traits.
14. Zhang L, Lam EY, Ke J. Temporal compressive imaging reconstruction based on a 3D-CNN network. Opt Express 2022; 30:3577-3591. PMID: 35209612. DOI: 10.1364/oe.448490.
Abstract
In temporal compressive imaging (TCI), high-speed object frames are reconstructed from measurements collected by a low-speed detector array to improve the system imaging speed. Compared with iterative algorithms, deep learning approaches utilize a trained network to reconstruct high-quality images in a short time. In this work, we study a 3D convolutional neural network for TCI reconstruction to make full use of the temporal and spatial correlation among consecutive object frames. Both simulated and experimental results demonstrate that our network achieves better reconstruction quality with fewer layers.
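The measurement model the network inverts is worth stating: one low-speed detector exposure is the sum of several high-speed frames, each modulated by its own coding mask. A minimal sketch of this forward model (names and the nested-list layout are our own illustrative choices):

```python
def tci_measure(frames, masks):
    """Temporal compressive imaging forward model: the single 2D
    measurement y is the mask-weighted sum of the high-speed frames,
    y = sum_t mask_t * frame_t (elementwise)."""
    h, w = len(frames[0]), len(frames[0][0])
    y = [[0.0] * w for _ in range(h)]
    for frame, mask in zip(frames, masks):
        for i in range(h):
            for j in range(w):
                y[i][j] += mask[i][j] * frame[i][j]
    return y
```

Reconstruction is the ill-posed inverse of this sum, which is why the temporal correlation that a 3D CNN captures across consecutive frames is so valuable.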
15. Ge Z, Zhang P, Gao Y, So HKH, Lam EY. Lens-free motion analysis via neuromorphic laser speckle imaging. Opt Express 2022; 30:2206-2218. PMID: 35209366. DOI: 10.1364/oe.444948.
Abstract
Laser speckle imaging (LSI) is a powerful tool for motion analysis owing to the high sensitivity of laser speckles. Traditional LSI techniques rely on identifying changes across sequential intensity speckle patterns, where each pixel performs synchronous measurements. However, a large amount of redundant data from static speckles carrying no motion information is also recorded, resulting in considerable resource consumption for data processing and storage. Moreover, motion cues are inevitably lost during the "blind" time interval between successive frames. To tackle these challenges, we propose neuromorphic laser speckle imaging (NLSI) as an efficient alternative approach for motion analysis. Our method preserves the motion information while excluding the redundant data by exploring the use of a neuromorphic event sensor, which acquires only the relevant information of the moving parts and responds asynchronously with a much higher sampling rate. This neuromorphic data acquisition mechanism captures fast-moving objects on the order of microseconds. In the proposed NLSI method, the moving object is illuminated using a coherent light source, and the reflected high-frequency laser speckle patterns are captured with a bare neuromorphic event sensor. We present a data processing strategy to analyze motion from event-based laser speckles, and the experimental results demonstrate the feasibility of our method at different motion speeds.
16. Zeng T, Zhu Y, Lam EY. Deep learning for digital holography: a review. Opt Express 2021; 29:40572-40593. PMID: 34809394. DOI: 10.1364/oe.443367.
Abstract
Recent years have witnessed unprecedented progress in deep learning applications in digital holography (DH). Nevertheless, there remains huge potential in how deep learning can further improve performance and enable new functionalities for DH. Here, we survey recent developments in various DH applications powered by deep learning algorithms. This article starts with a brief introduction to digital holographic imaging, then summarizes the most relevant deep learning techniques for DH, with discussions of their benefits and challenges. We then present case studies covering a wide range of problems and applications in order to highlight research achievements to date. We provide an outlook on several promising directions to widen the use of deep learning in various DH applications.
17. Ge Z, Gao Y, So HKH, Lam EY. Event-based laser speckle correlation for micro motion estimation: erratum. Opt Lett 2021; 46:5083. PMID: 34653120. DOI: 10.1364/ol.442448.
Abstract
We present an erratum to our Letter [Opt. Lett. 46, 3885 (2021); DOI: 10.1364/OL.430419]. This erratum corrects an inadvertent error in Eq. (4). The corrections have no influence on the results and conclusions of the original Letter.
19. Ge Z, Gao Y, So HKH, Lam EY. Event-based laser speckle correlation for micro motion estimation. Opt Lett 2021; 46:3885-3888. PMID: 34388766. DOI: 10.1364/ol.430419.
Abstract
Micro motion estimation has important applications in various fields such as microfluidic particle detection and biomedical cell imaging. Conventional methods analyze the motion from intensity images captured using frame-based imaging sensors such as the complementary metal-oxide semiconductor (CMOS) and the charge-coupled device (CCD). Recently, event-based sensors have evolved with the special capability to record asynchronous light changes with high dynamic range, high temporal resolution, low latency, and no motion blur. In this Letter, we explore the potential of using the event sensor to estimate the micro motion based on the laser speckle correlation technique.
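The core operation of the Letter, estimating a displacement from the correlation of two speckle patterns, can be sketched as follows. This is a minimal FFT-based integer-shift estimator illustrating the speckle-correlation principle, not the authors' event-based pipeline; the function name and test values are illustrative.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (dy, dx) displacement between two speckle
    patterns via FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past N/2 around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic check: a random "speckle" field displaced by (3, -2)
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(3, -2), axis=(0, 1))
assert estimate_shift(ref, mov) == (3, -2)
```

The correlation peak sits at the displacement because shifting a pattern only multiplies its spectrum by a linear phase, which the conjugate product isolates.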
|
20
|
Meng N, Li K, Liu J, Lam EY. Light Field View Synthesis via Aperture Disparity and Warping Confidence Map. IEEE Trans Image Process 2021; 30:3908-3921. [PMID: 33750690 DOI: 10.1109/tip.2021.3066293] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This paper presents a learning-based approach to synthesize the view from an arbitrary camera position given a sparse set of images. A key challenge for this novel view synthesis arises from the reconstruction process, when the views from different input images may not be consistent due to obstruction in the light path. We overcome this by jointly modeling the epipolar property and occlusion in designing a convolutional neural network. We start by defining and computing the aperture disparity map, which approximates the parallax and measures the pixel-wise shift between two views. While this relates to free-space rendering and can fail near the object boundaries, we further develop a warping confidence map to address pixel occlusion in these challenging regions. The proposed method is evaluated on diverse real-world and synthetic light field scenes, and it shows better performance over several state-of-the-art techniques.
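To illustrate how an aperture disparity map drives view synthesis, here is a minimal nearest-neighbor backward warp: each pixel is shifted by its disparity times the aperture offset. This is a hand-rolled sketch of the geometric idea only, not the paper's CNN with warping confidence maps; all names are illustrative.

```python
import numpy as np

def warp_view(ref, disparity, du):
    """Warp a reference view to a target aperture position using an
    aperture disparity map (pixel shift per unit aperture offset),
    with nearest-neighbor sampling along x and edge clamping."""
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
    return ref[ys, src_x]

# Constant disparity of 2 px per aperture step shifts the whole view
ref = np.arange(25, dtype=float).reshape(5, 5)
out = warp_view(ref, np.full((5, 5), 2.0), du=1)
assert out[0].tolist() == [2.0, 3.0, 4.0, 4.0, 4.0]
```

Occlusions are exactly where such a warp fails, which is what the paper's warping confidence map is introduced to handle.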
|
21
|
Meng N, So HKH, Sun X, Lam EY. High-Dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction. IEEE Trans Pattern Anal Mach Intell 2021; 43:873-886. [PMID: 31581075 DOI: 10.1109/tpami.2019.2945027] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We consider the problem of high-dimensional light field reconstruction and develop a learning-based framework for spatial and angular super-resolution. Many current approaches either require disparity clues or restore the spatial and angular details separately. Such methods have difficulties with non-Lambertian surfaces or occlusions. In contrast, we formulate light field super-resolution (LFSR) as tensor restoration and develop a learning framework based on a two-stage restoration with 4-dimensional (4D) convolution. This allows our model to learn the features capturing the geometry information encoded in multiple adjacent views. Such geometric features vary near the occlusion regions and indicate the foreground object border. To train a feasible network, we propose a novel normalization operation based on a group of views in the feature maps, design a stage-wise loss function, and develop the multi-range training strategy to further improve the performance. Evaluations are conducted on a number of light field datasets including real-world scenes, synthetic data, and microscope light fields. The proposed method achieves superior performance and shorter execution time compared with other state-of-the-art schemes.
|
22
|
Ke J, Zhang L, Zhou Q, Lam EY. Broad dual-band temporal compressive imaging with optical calibration. Opt Express 2021; 29:5710-5729. [PMID: 33726105 DOI: 10.1364/oe.415271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Accepted: 02/02/2021] [Indexed: 06/12/2023]
Abstract
For applications such as remote sensing and bio-imaging, images from multiple bands can provide much richer information compared to a single band. However, most multispectral imaging systems have difficulty in acquiring images of high-speed moving objects. In this paper, we use a DMD-based temporal compressive imaging (TCI) system to obtain high-speed images of moving objects over a broad dual-band spectral range, in the visible and the near-infrared (NIR) bands simultaneously. To deal with the degraded reconstruction caused by the optics, four nonuniform calibration strategies are studied, which can also be applied to other compressive imaging systems. Moving objects covered by paint or viewed through a diffuser are reconstructed to demonstrate the superior performance of the calibrated broad dual-band TCI system.
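The TCI acquisition described above follows a simple forward model: a burst of high-speed frames is modulated by per-frame masks and integrated into one coded readout. The sketch below shows only this forward model with hypothetical names; the actual system additionally involves the dual-band optics and the calibration strategies the paper studies.

```python
import numpy as np

def tci_measure(frames, masks):
    """Temporal compressive imaging forward model: T high-speed frames
    are modulated by per-frame binary masks (e.g., displayed on a DMD)
    and integrated into a single coded measurement at the sensor."""
    return np.sum(frames * masks, axis=0)

rng = np.random.default_rng(1)
T, H, W = 10, 8, 8                               # 10:1 temporal compression
frames = rng.random((T, H, W))                   # latent high-speed frames
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)
y = tci_measure(frames, masks)                   # one readout encodes T frames
assert y.shape == (H, W)
```

Reconstruction then inverts this underdetermined model with priors; with all-ones masks the measurement degenerates to a plain T-frame exposure.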
|
23
|
Zhu Y, Hang Yeung C, Lam EY. Digital holographic imaging and classification of microplastics using deep transfer learning. Appl Opt 2021; 60:A38-A47. [PMID: 33690352 DOI: 10.1364/ao.403366] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Accepted: 09/16/2020] [Indexed: 06/12/2023]
Abstract
We devise an inline digital holographic imaging system equipped with a lightweight deep learning network, termed CompNet, and develop transfer learning for classification and analysis. It has a compression block consisting of a concatenated rectified linear unit (CReLU) activation to reduce the channels, and a class-balanced cross-entropy loss for training. The method is particularly suitable for small and imbalanced datasets, and we apply it to the detection and classification of microplastics. Our results show good improvements in feature extraction, generalization, and classification accuracy, effectively overcoming the problem of overfitting. This method could be attractive for future in situ microplastic particle detection and classification applications.
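The CReLU activation mentioned above has a compact definition, sketched here in numpy. This shows only the activation itself; the paper's compression block, which pairs CReLU with channel reduction, and the class-balanced loss are not reproduced.

```python
import numpy as np

def crelu(x, axis=-1):
    """Concatenated ReLU: stacks ReLU(x) and ReLU(-x) along the channel
    axis, so both signs of the pre-activation are preserved while each
    output channel stays non-negative."""
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)], axis=axis)

x = np.array([[-1.0, 2.0, 0.5]])
assert crelu(x).tolist() == [[0.0, 2.0, 0.5, 1.0, 0.0, 0.0]]
```

Note that CReLU doubles the channel count, which is why it is typically followed by a channel-reducing layer.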
|
24
|
Ge Z, Meng N, Song L, Lam EY. Dynamic laser speckle analysis using the event sensor. Appl Opt 2021; 60:172-178. [PMID: 33362087 DOI: 10.1364/ao.412601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 12/02/2020] [Indexed: 06/12/2023]
Abstract
Dynamic laser speckle analysis (DLSA) can obtain useful information about the scene dynamics. Traditional implementations use intensity-based imaging sensors such as a complementary metal oxide semiconductor and charge-coupled device to capture time-varying intensity frames. We use an event sensor that measures pixel-wise asynchronous brightness changes to record speckle pattern sequences. Our approach takes advantage of the low latency and high contrast sensitivity of the event sensor to implement DLSA with high temporal resolution. We also propose two evaluation metrics designed especially for event data. Comparison experiments are conducted in identical conditions to demonstrate the feasibility of our proposed approach.
|
25
|
Chan RKY, He H, Ren YX, Lai CSW, Lam EY, Wong KKY. Axially resolved volumetric two-photon microscopy with an extended field of view using depth localization under mirrored Airy beams. Opt Express 2020; 28:39563-39573. [PMID: 33379502 DOI: 10.1364/oe.412453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 12/08/2020] [Indexed: 06/12/2023]
Abstract
It is a great challenge in two-photon microscopy (2PM) to achieve a high volumetric imaging speed without sacrificing the spatial and temporal resolution in three dimensions (3D). The structure in 2PM images could be reconstructed with better spatial and temporal resolution by the proper choice of the data processing algorithm. Here, we propose a method to reconstruct a 3D volume from 2D projections imaged by mirrored Airy beams. We verified that our approach can achieve high accuracy in 3D localization over a large axial range and is applicable to continuous and dense samples. The effective field of view after reconstruction is expanded. It is a promising technique for rapid volumetric 2PM with axial localization at high resolution.
|
26
|
Harvey AR, Cossairt O, Ke J, Lam EY, Rangarajan P. Computational Optical Sensing and Imaging: feature issue introduction. Opt Express 2020; 28:18131-18134. [PMID: 32680013 DOI: 10.1364/oe.397510] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Indexed: 06/11/2023]
Abstract
This Feature Issue includes 19 articles that highlight advances in the field of Computational Optical Sensing and Imaging. Many of the articles were presented at the 2019 OSA Topical Meeting on Computational Optical Sensing and Imaging held in Munich, Germany, on June 24-27. Articles featured in the issue cover a broad array of topics ranging from imaging through scattering media, imaging around corners, and compressive imaging to machine learning for the recovery of images.
|
27
|
Ren Z, Lam EY, Zhao J. Acceleration of autofocusing with improved edge extraction using structure tensor and Schatten norm. Opt Express 2020; 28:14712-14728. [PMID: 32403507 DOI: 10.1364/oe.392544] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 04/21/2020] [Indexed: 06/11/2023]
Abstract
Determining the optimal focal plane amongst a stack of blurred images in a short response time is a non-trivial task in optical imaging such as microscopy and photography. An autofocusing algorithm, or in other words a focus metric, is key to effectively dealing with such a problem. In previous work, we proposed a structure tensor-based autofocusing algorithm for coherent imaging, i.e., digital holography. In this paper, we further extend this method to more imaging modalities. With an optimized computation scheme for the structure tensor, a significant acceleration of about fivefold in computation speed, without sacrificing autofocusing accuracy, is achieved by using the Schatten matrix norm instead of the vector norm. Besides, we also demonstrate its edge extraction capability by retrieving the intermediate tensor image. Synthesized and experimental data acquired in various imaging scenarios, such as incoherent microscopy and photography, are used to verify the efficacy of this method.
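A minimal sketch of a structure-tensor focus metric with a Schatten norm (p = 1 here) is shown below. The paper's exact tensor construction, weighting, and optimized computation scheme may differ; this only illustrates the principle that an in-focus slice maximizes the edge energy captured by the tensor's singular values.

```python
import numpy as np

def focus_metric(img, p=1):
    """Autofocus metric: Schatten p-norm of the image's aggregated
    2x2 structure tensor, built from finite-difference gradients.
    The in-focus slice of a stack maximizes this edge-energy measure."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    s = np.linalg.eigvalsh(J)   # eigenvalues = singular values (J is PSD)
    return np.sum(np.abs(s) ** p) ** (1.0 / p)

# A sharp checkerboard scores higher than a featureless (defocused) field
sharp = np.indices((16, 16)).sum(axis=0) % 2.0
flat = np.full((16, 16), 0.5)
assert focus_metric(sharp) > focus_metric(flat)
```

For p = 1 this reduces to the trace of the tensor (total gradient energy); other p values weight the two principal edge directions differently.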
|
28
|
Zeng T, So HKH, Lam EY. RedCap: residual encoder-decoder capsule network for holographic image reconstruction. Opt Express 2020; 28:4876-4887. [PMID: 32121718 DOI: 10.1364/oe.383350] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 01/27/2020] [Indexed: 06/10/2023]
Abstract
A capsule network, as an advanced technique in deep learning, is designed to overcome information loss in the pooling operation and internal data representation of a convolutional neural network (CNN). It has shown promising results in several applications, such as digit recognition and image segmentation. In this work, we investigate for the first time the use of a capsule network in digital holographic reconstruction. The proposed residual encoder-decoder capsule network, which we call RedCap, uses a novel windowed spatial dynamic routing algorithm and a residual capsule block, which extends the idea of a residual block. Compared with CNN-based neural networks, RedCap exhibits much better experimental results in digital holographic reconstruction, while having a dramatic 75% reduction in the number of parameters. This indicates that RedCap processes data more efficiently and requires much less memory for the learned model, making it applicable to challenging situations with limited computational resources, such as portable devices.
|
29
|
Meng N, Lam EY, Tsia KK, So HKH. Large-Scale Multi-Class Image-Based Cell Classification With Deep Learning. IEEE J Biomed Health Inform 2019; 23:2091-2098. [DOI: 10.1109/jbhi.2018.2878878] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
30
|
Shi R, Wong JSJ, Lam EY, Tsia KK, So HKH. A Real-Time Coprime Line Scan Super-Resolution System for Ultra-Fast Microscopy. IEEE Trans Biomed Circuits Syst 2019; 13:781-792. [PMID: 31059454 DOI: 10.1109/tbcas.2019.2914946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
A fundamental technical challenge for ultra-fast cell microscopy is the tradeoff between imaging throughput and resolution. In addition to throughput, real-time applications such as image-based cell sorting further require ultra-low imaging latency to facilitate rapid decision making at the single-cell level. Using a novel coprime line scan sampling scheme, a real-time low-latency hardware super-resolution system for ultra-fast time-stretch microscopy is presented. The proposed scheme utilizes an analog-to-digital converter with a carefully tuned sampling pattern (shifted sampling grid) to enable super-resolution image reconstruction using line scan input from an optical front-end. A fully pipelined FPGA-based system is built to efficiently handle the real-time high-resolution image reconstruction process with the input subpixel samples while achieving minimal output latency. The proposed super-resolution sampling and reconstruction scheme is parametrizable and is readily applicable to different line scan imaging systems. In our experiments, an imaging latency of 0.29 μs has been achieved based on a pixel-stream throughput of 4.123 giga pixels per second, which translates into an imaging throughput of approximately 120000 cells per second.
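The "coprime" in the sampling scheme above rests on a Chinese-remainder-theorem counting argument: with coprime strides p and q, the pair of sub-pixel phases uniquely indexes every position on the p·q-times finer grid. A toy check of that arithmetic (illustrative only, not the paper's actual sampling pattern):

```python
# With coprime strides p and q, the residue pair (n mod p, n mod q) is
# distinct for every fine-grid position n in [0, p*q) -- this is what
# lets two low-rate sampling grids jointly resolve a p*q finer grid.
p, q = 3, 5
pairs = {(n % p, n % q) for n in range(p * q)}
assert len(pairs) == p * q   # all 15 fine positions uniquely indexed
```

If p and q shared a factor, distinct fine positions would collide onto the same residue pair and the fine grid could not be recovered.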
|
31
|
Du R, Lee VH, Yuan H, Lam KO, Pang HH, Chen Y, Lam EY, Khong PL, Lee AW, Kwong DL, Vardhanabhuti V. Radiomics Model to Predict Early Progression of Nonmetastatic Nasopharyngeal Carcinoma after Intensity Modulation Radiation Therapy: A Multicenter Study. Radiol Artif Intell 2019; 1:e180075. [PMID: 33937796 DOI: 10.1148/ryai.2019180075] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 04/04/2019] [Accepted: 05/07/2019] [Indexed: 12/23/2022]
Abstract
Purpose To examine the prognostic value of a machine learning model trained with pretreatment MRI radiomic features in the assessment of patients with nonmetastatic nasopharyngeal carcinoma (NPC) who are at risk for 3-year disease progression after intensity-modulated radiation therapy and to explain the radiomics features in the model. Materials and Methods A total of 277 patients with nonmetastatic NPC admitted between March 2008 and December 2014 at two imaging centers were retrospectively reviewed. Patients were allocated to a discovery or validation cohort based on where they underwent MRI (discovery cohort, n = 217; validation cohort, n = 60). A total of 525 radiomics features extracted from contrast material-enhanced T1- or T2-weighted MRI studies and five clinical features were subjected to radiomic machine learning modeling to predict 3-year disease progression. Feature selection was performed by analyzing robustness to resampling, reproducibility between observers, and redundancy. Features for the final model were selected with Kaplan-Meier analysis and the log-rank test. A support vector machine was used as the classifier for the model. To interpret the pattern learned from the model, Shapley additive explanations (SHAP) was applied. Results The final model yielded an area under the receiver operating characteristic curve of 0.80 in both the discovery (95% bootstrap confidence interval: 0.80, 0.81) and independent validation (95% bootstrap confidence interval: 0.73, 0.89) cohorts. Analysis with SHAP revealed that tumor shape sphericity, first-order mean absolute deviation, T stage, and overall stage were important factors in 3-year disease progression. Conclusion These results add to the growing evidence of the role of radiomics in the assessment of NPC. By using explanatory techniques, such as SHAP, the complex interaction of features learned by the model may be understood. © RSNA, 2019. Supplemental material is available for this article.
Affiliation(s)
- Richard Du, Victor H Lee, Hui Yuan, Ka-On Lam, Herbert H Pang, Yu Chen, Edmund Y Lam, Pek-Lan Khong, Anne W Lee, Dora L Kwong, Varut Vardhanabhuti
- Departments of Diagnostic Radiology (R.D., H.Y., P.L.K., V.V.) and Clinical Oncology (V.H.L., K.O.L., A.W.L., D.L.K.) and the School of Public Health (H.H.P.), Li Ka Shing Faculty of Medicine, The University of Hong Kong, Room 406, Block K, Queen Mary Hospital, Pok Fu Lam Road, Hong Kong SAR; Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China (Y.C.); and Department of Electrical and Electronic Engineering, Faculty of Engineering, The University of Hong Kong, Hong Kong SAR (E.Y.L.)
|
32
|
Zhou Q, Ke J, Lam EY. Near-infrared temporal compressive imaging for video. Opt Lett 2019; 44:1702-1705. [PMID: 30933126 DOI: 10.1364/ol.44.001702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 02/25/2019] [Indexed: 06/09/2023]
Abstract
Without decreasing spatial resolution, temporal compressive imaging (TCI) can improve the temporal resolution of an imaging sensor and relax the requirement of the data readout speed in high-speed imaging. In this Letter, we describe a near-infrared TCI system that can reconstruct 500 fps videos from coded measurement frames sampled at 50 fps.
|
33
|
Watnik AT, Harvey AR, Lam EY, Rangarajan P. Computational optical sensing and imaging: introduction. Appl Opt 2019; 58:COS1-COS2. [PMID: 30874226 DOI: 10.1364/ao.58.00cos1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Indexed: 06/09/2023]
Abstract
The OSA Topical Meeting on Computational Optical Sensing and Imaging (COSI) was held June 25-June 28, 2018 in Orlando, Florida, USA, as part of the Imaging and Applied Optics Congress. In this feature issue, we present several papers that cover the techniques, topics, and advancements in the field presented at the COSI meeting highlighting the integration of opto-electric measurement and computational processing.
|
34
|
Abstract
We develop an image despeckling method that combines nonlocal self-similarity filters with machine learning, which makes use of convolutional neural network (CNN) denoisers. It consists of three major steps: block matching, CNN despeckling, and group shrinkage. Through the use of block matching, we can take advantage of the similarity across image patches as a regularizer to augment the performance of data-driven denoising using a pre-trained network. The outputs from the CNN denoiser and the group coordinates from block matching are further used to form 3D groups of similar patches, which are then filtered through a wavelet-domain shrinkage. The experimental results show that the proposed method achieves noticeable improvement compared with state-of-the-art speckle suppression techniques in both visual inspection and objective assessments.
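The block-matching step that supplies the self-similarity prior can be sketched as a sum-of-squared-differences search, as below. Parameter names and the brute-force search are illustrative; the actual method then feeds the grouped patches through the CNN denoiser and wavelet-domain group shrinkage.

```python
import numpy as np

def block_match(img, ref_yx, patch=4, search=8, top_k=3):
    """Return the top-k patch positions most similar (smallest SSD) to
    the reference patch within a local search window -- the grouping
    step that precedes collaborative filtering."""
    y0, x0 = ref_yx
    ref = img[y0:y0 + patch, x0:x0 + patch]
    h, w = img.shape
    scores = []
    for y in range(max(0, y0 - search), min(h - patch, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(w - patch, x0 + search) + 1):
            cand = img[y:y + patch, x:x + patch]
            scores.append((np.sum((cand - ref) ** 2), (y, x)))
    scores.sort(key=lambda s: s[0])
    return [yx for _, yx in scores[:top_k]]

rng = np.random.default_rng(2)
img = rng.random((32, 32))
matches = block_match(img, (10, 10))
assert matches[0] == (10, 10)   # the reference patch matches itself (SSD = 0)
```

Stacking the matched patches into a 3D group is what lets a wavelet-domain shrinkage exploit similarity across patches rather than only within one patch.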
|
35
|
Lam EY. Golden anniversary of Fourier optics: guest editorial. Appl Opt 2019; 58:ED1-ED2. [PMID: 30874229 DOI: 10.1364/ao.58.000ed1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Indexed: 06/09/2023]
Abstract
In 1968, Professor Joseph W. Goodman published the first edition of Introduction to Fourier Optics, which also laid the foundation of this field. For half a century, the book has been the definitive teaching and reference text, well known in particular for its clear and insightful writing. At OSA's Imaging and Applied Optics Congress 2018, a special event was organized to commemorate the fiftieth anniversary of the book, with a series of talks covering the teaching and scientific development of Fourier optics.
|
36
|
Chen N, Zuo C, Lam EY, Lee B. 3D Imaging Based on Depth Measurement Technologies. Sensors (Basel) 2018; 18:E3711. [PMID: 30384501 PMCID: PMC6263433 DOI: 10.3390/s18113711] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Revised: 10/26/2018] [Accepted: 10/26/2018] [Indexed: 01/21/2023]
Abstract
Three-dimensional (3D) imaging has attracted increasing interest because of its widespread applications, especially in information and life science. These techniques can be broadly divided into two types: ray-based and wavefront-based 3D imaging. Issues such as imaging quality and system complexity limit the applications significantly, and therefore many investigations have focused on 3D imaging from depth measurements. This paper presents an overview of 3D imaging from depth measurements and provides a summary of the connection between the ray-based and wavefront-based 3D imaging techniques.
Affiliation(s)
- Ni Chen: Department of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Korea
- Chao Zuo: Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Edmund Y Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong, China
- Byoungho Lee: Department of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Korea
|
37
|
Kang J, Feng P, Li B, Zhang C, Wei X, Lam EY, Tsia KK, Wong KKY. Video-rate centimeter-range optical coherence tomography based on dual optical frequency combs by electro-optic modulators. Opt Express 2018; 26:24928-24939. [PMID: 30469601 DOI: 10.1364/oe.26.024928] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Accepted: 08/27/2018] [Indexed: 06/09/2023]
Abstract
Imaging speed and range are two important parameters for optical coherence tomography (OCT). A conventional video-rate centimeter-range OCT requires an optical source with a repetition rate of hundreds of kHz and needs the support of broadband detectors and electronics (>1 GHz). In this paper, a video-rate centimeter-range OCT system is proposed and demonstrated based on dual optical frequency combs generated by electro-optic modulators. The repetition rate difference between the dual combs, i.e., the A-scan rate of the dual-comb OCT, can be adjusted from 0 to 6 MHz. By down-converting the interference signal from the optical domain to the radio-frequency domain through dual-comb beating, the down-converted bandwidth of the interference signal is less than 22.5 MHz, at least two orders of magnitude lower than in conventional OCT systems. A LabVIEW program is developed for video-rate operation, and the centimeter imaging depth is demonstrated using a stack of ten 1-mm-thick glass slides as the sample. The effective beating bandwidth between the two optical comb sources is 7 nm, corresponding to ~108 comb lines, and the axial resolution of the dual-comb OCT is 158 µm. Dual optical frequency combs provide a promising solution to relax the detection bandwidth requirement in fast long-range OCT systems.
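As a plausibility check on the reported numbers, the standard Gaussian-spectrum axial resolution formula applied to the 7 nm beating bandwidth lands near the reported 158 µm. A 1550 nm center wavelength is assumed here, since the abstract states only that the source is centered near 1.55 µm elsewhere in this list; real spectra deviate from Gaussian, which accounts for the small gap.

```python
import math

# Gaussian-spectrum OCT axial resolution: dz = (2 ln 2 / pi) * lambda0^2 / dlambda
lam0 = 1550e-9   # assumed center wavelength (not stated in the abstract)
dlam = 7e-9      # 7 nm effective beating bandwidth, from the abstract
dz = (2 * math.log(2) / math.pi) * lam0 ** 2 / dlam
print(f"{dz * 1e6:.0f} um")   # prints "151 um", close to the reported 158 um
```

The same formula explains why broad bandwidth (not just high A-scan rate) is essential for fine axial resolution in any swept-source OCT.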
|
38
|
Zhou A, Wang W, Chen N, Lam EY, Lee B, Situ G. Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction. Opt Express 2018; 26:23661-23674. [PMID: 30184864 DOI: 10.1364/oe.26.023661] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 08/15/2018] [Indexed: 05/25/2023]
Abstract
Fourier ptychographic microscopy (FPM) is a newly developed computational imaging technique that can provide gigapixel images with both high resolution (HR) and wide field of view (FOV). However, there are two possible sources of position misalignment, which degrade the reconstructed image. The first is the position misalignment of the LED array, which can largely be eliminated when building the experimental system. The more important one is the segment-dependent position misalignment. Note that this segment-dependent positional misalignment persists even after we correct the central coordinates of every small segment. In this paper, we carefully analyze this segment-dependent misalignment and find that the global shift matters more than the rotational misalignments. Accordingly, we propose a fast and robust method to correct these two factors of position misalignment in FPM, termed misalignment correction for FPM (mcFPM). Although different regions in the FOV have different sensitivities to position misalignment, the experimental results show that mcFPM robustly eliminates the misalignment in each region. Compared with the state-of-the-art methods, mcFPM is much faster.
39
Ou H, Wu Y, Lam EY, Wang BZ. New autofocus and reconstruction method based on a connected domain. Opt Lett 2018; 43:2201-2203. [PMID: 29714789 DOI: 10.1364/ol.43.002201] [Citation(s) in RCA: 2]
Abstract
In this Letter, we propose a new method for autofocusing and reconstruction without defocus noise in optical scanning holography. By using connected domains (CDs) to calculate the areas of the regions labeled by connected-component analysis, the focus distance can be found via the smallest area among the CDs. Meanwhile, sectional images without defocus noise can also be reconstructed based on the labeled domains. The effectiveness of this method has been verified with simulations and experiments.
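The connected-domain criterion above can be illustrated with a minimal sketch: label the connected regions of a thresholded reconstruction and score each candidate distance by the area of its smallest connected domain. The labeling routine, the toy binary "reconstructions", and the choice of 4-connectivity are our own illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of the connected-domain (CD) focus metric: the in-focus
# reconstruction yields a compact support, hence a smaller smallest-CD area
# than a defocused one whose energy spreads out.
from collections import deque

def connected_areas(binary):
    """Areas of 4-connected components in a 2D 0/1 list-of-lists."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                area, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

def focus_metric(binary):
    """Smallest connected-domain area; smaller indicates better focus."""
    areas = connected_areas(binary)
    return min(areas) if areas else float("inf")

# Hypothetical thresholded reconstructions at two distances.
in_focus  = [[0,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]]
defocused = [[0,1,1,0],[1,1,1,1],[1,1,1,1],[0,1,1,0]]
best = min([(focus_metric(in_focus), "in_focus"),
            (focus_metric(defocused), "defocused")])
print(best[1])  # → in_focus
```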
40
Kang J, Feng P, Wei X, Lam EY, Tsia KK, Wong KKY. 102-nm, 44.5-MHz inertial-free swept source by mode-locked fiber laser and time stretch technique for optical coherence tomography. Opt Express 2018; 26:4370-4381. [PMID: 29475287 DOI: 10.1364/oe.26.004370] [Citation(s) in RCA: 5]
Abstract
A swept source with both a high repetition rate and broad bandwidth is indispensable for optical coherence tomography (OCT) with a high imaging rate and high axial resolution. However, available swept sources are commonly either limited in speed (sub-MHz) by inertial or kinetic components, or limited in bandwidth (<100 nm) by the gain medium. Here we report an ultrafast, broadband (over 100 nm centered at 1.55 µm), all-fiber, inertia-free swept source built upon a high-power dispersion-managed fiber laser in conjunction with an optical time-stretch module that bypasses complex optical amplification schemes. The result is a portable and compact implementation of time-stretch OCT (TS-OCT) with an A-scan rate of 44.5 MHz, an axial resolution of 14 µm in air (10 µm in tissue), and flat sensitivity roll-off within a 4.3 mm imaging range. Together with demonstrations of two- and three-dimensional OCT imaging of a mudfish eye anterior segment, we perform comprehensive studies on the imaging depth, receiver bandwidth, and group-velocity-dispersion condition. This all-fiber inertia-free swept source could provide a promising source solution for SS-OCT systems to realize high-performance volumetric OCT imaging in real time.
41
Ou H, Wu Y, Lam EY, Wang BZ. Axial localization using time reversal multiple signal classification in optical scanning holography. Opt Express 2018; 26:3756-3771. [PMID: 29475355 DOI: 10.1364/oe.26.003756] [Citation(s) in RCA: 0]
Abstract
This paper presents a method to identify the axial location of targets in an optical scanning holography (OSH) system. By combining the time reversal (TR) technique with the multiple signal classification (MUSIC) method in OSH, the axial location can be detected with high resolution. Both simulation and experimental work have been carried out to verify the feasibility of the proposed approach.
42
Ren Z, Chen N, Lam EY. Automatic focusing for multisectional objects in digital holography using the structure tensor. Opt Lett 2017; 42:1720-1723. [PMID: 28454144 DOI: 10.1364/ol.42.001720] [Citation(s) in RCA: 4]
Abstract
Determining the axial position of the recorded object is a crucial step in image reconstruction for digital holography. When multiple discrete sections of a three-dimensional object overlap one another, the problem becomes more challenging. In this Letter, an autofocusing algorithm based on the structure tensor and its eigenvalues is proposed. The method can extract the focal distance of each section of a multisectional object, irrespective of whether the sections overlap. We validate the applicability of the proposed technique with synthesized and experimental data from two types of holographic systems.
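A generic structure-tensor focus metric can be sketched as follows. This is an assumption-laden reimplementation for illustration, not the authors' exact formulation: we form the averaged 2x2 structure tensor from image gradients and score sharpness by its larger eigenvalue, which grows at strong, well-resolved edges.

```python
# Toy structure-tensor sharpness score: average the gradient outer products
# over the image, then return the larger eigenvalue of the 2x2 tensor.
import numpy as np

def structure_tensor_focus(img):
    iy, ix = np.gradient(img.astype(float))        # gradients along rows, cols
    jxx, jxy, jyy = (ix * ix).mean(), (ix * iy).mean(), (iy * iy).mean()
    # Larger eigenvalue of [[jxx, jxy], [jxy, jyy]] in closed form.
    tr, det = jxx + jyy, jxx * jyy - jxy * jxy
    disc = np.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + disc

# Toy "reconstructions": a sharp step edge versus a smoothed one.
x = np.linspace(-1, 1, 64)
sharp = (x > 0).astype(float)[None, :].repeat(64, 0)
blurred = 0.5 * (1 + np.tanh(x / 0.3))[None, :].repeat(64, 0)
scores = {"sharp": structure_tensor_focus(sharp),
          "blurred": structure_tensor_focus(blurred)}
print(max(scores, key=scores.get))  # → sharp
```

Scanning such a score over candidate reconstruction distances and taking the peak per section is the general shape of an autofocus search.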
43
Chen N, Ren Z, Li D, Lam EY, Situ G. Analysis of the noise in backprojection light field acquisition and its optimization. Appl Opt 2017; 56:F20-F26. [PMID: 28463294 DOI: 10.1364/ao.56.000f20] [Citation(s) in RCA: 3]
Abstract
Light field reconstruction from images captured by focal-plane sweeping can achieve a lateral resolution comparable to that of a modern camera sensor, which is impossible for conventional micro-lenslet-based light field capture systems. However, severe defocus noise and low depth resolution limit its applications. In this paper, we analyze the defocus noise in the focal-plane-sweeping-based light field reconstruction technique and propose a method to reduce it. Both numerical and experimental results verify the proposed method.
44
Wu JL, Xu YQ, Xu JJ, Wei XM, Chan ACS, Tang AHL, Lau AKS, Chung BMF, Cheung Shum H, Lam EY, Wong KKY, Tsia KK. Ultrafast laser-scanning time-stretch imaging at visible wavelengths. Light Sci Appl 2017; 6:e16196. [PMID: 30167195 PMCID: PMC6061895 DOI: 10.1038/lsa.2016.196] [Citation(s) in RCA: 65]
Abstract
Optical time-stretch imaging enables the continuous capture of non-repetitive events in real time at a line-scan rate of tens of MHz, a distinct advantage for the ultrafast dynamics monitoring and high-throughput screening widely needed in biological microscopy. However, its potential is limited by the technical challenge of achieving significant pulse stretching (that is, high temporal dispersion) with low optical loss, the critical factors influencing imaging quality, in the visible spectrum demanded by many of these applications. We present a new pulse-stretching technique, termed free-space angular-chirp-enhanced delay (FACED), with three distinguishing features absent in the prevailing dispersive-fiber-based implementations: (1) it generates substantial, reconfigurable temporal dispersion in free space (>1 ns/nm) with low intrinsic loss (<6 dB) at visible wavelengths; (2) its wavelength-invariant pulse-stretching operation introduces a new paradigm in time-stretch imaging, which can now be implemented both with and without spectral encoding; and (3) pulse stretching in FACED inherently provides an ultrafast all-optical laser-beam scanning mechanism at a line-scan rate of tens of MHz. Using FACED, we demonstrate not only ultrafast laser-scanning time-stretch imaging with superior bright-field image quality compared with previous work but also, for the first time, MHz fluorescence and colorized time-stretch microscopy. These results show that the technique could enable a wider scope of applications in high-speed, high-throughput biological microscopy that were once out of reach.
Collapse
Affiliation(s)
- Jiang-Lai Wu, Yi-Qing Xu, Xiao-Ming Wei, Antony CS Chan, Anson HL Tang, Andy KS Lau, Edmund Y Lam, Kenneth KY Wong, Kevin K Tsia: Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong 999077, China
- Jing-Jiang Xu: Department of Bioengineering, University of Washington, Seattle, Washington 98195, USA
- Bob MF Chung, Ho Cheung Shum: Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong 999077, China
45
Lee C, Lam EY. Computationally Efficient Truncated Nuclear Norm Minimization for High Dynamic Range Imaging. IEEE Trans Image Process 2016; 25:4145-4157. [PMID: 27352392 DOI: 10.1109/tip.2016.2585047] [Citation(s) in RCA: 5]
Abstract
Matrix completion is a rank-minimization problem that recovers a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate it with the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank, since it exploits a priori information about the target rank. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and accounting for noise in the observations. The central contribution of this paper is to solve the problem efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps obtained from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness to noise compared with conventional approaches, while providing a substantial improvement in speed, making it applicable to a wide range of imaging applications.
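The key property of the truncated nuclear norm can be illustrated with the partial singular-value thresholding step that typically appears inside such ALM iterations. This is a generic sketch (function and parameter names are ours, not the paper's): the r largest singular values are kept intact because the truncated norm does not penalize them, while the tail is soft-thresholded.

```python
# Partial singular-value thresholding for the truncated nuclear norm
# (generic illustrative form): shrink all singular values except the top r.
import numpy as np

def truncated_svt(x, r, tau):
    """Keep the r largest singular values; soft-threshold the rest by tau."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - tau, 0.0)  # shrink only the tail
    return u @ np.diag(s_new) @ vt

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
y = truncated_svt(x, r=2, tau=1e3)   # huge tau annihilates the tail
print(np.linalg.matrix_rank(y))      # → 2
```

With a target rank known a priori, this operator steers iterates toward rank-r solutions faster than uniform shrinkage of all singular values.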
46
Ke J, Lam EY. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging. Opt Express 2016; 24:9869-9887. [PMID: 27137599 DOI: 10.1364/oe.24.009869] [Citation(s) in RCA: 1]
Abstract
Compressive measurements benefit low-light-level imaging (L3-imaging) through a significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that minimizes the image reconstruction error. First, we use the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-valued sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. Experimental results show that the latter delivers reconstruction performance similar to the former, while imposing a smaller dynamic-range requirement on the system sensors.
47
Chen N, Ren Z, Lam EY. High-resolution Fourier hologram synthesis from photographic images through computing the light field. Appl Opt 2016; 55:1751-1756. [PMID: 26974639 DOI: 10.1364/ao.55.001751] [Citation(s) in RCA: 1]
Abstract
We present a technique for synthesizing the Fourier hologram of a three-dimensional scene from its light field. The light field captures the volumetric information of an object, with the important advantage that it does not require coherent illumination, as conventional holography does. In this work, we show how to obtain a high-resolution digital hologram from the light field computed from a series of photographic images captured along the optical axis. The method is verified both by simulations and with an experimentally captured light field.
48
Abstract
In conventional microscopy, specimens lying within the depth of field are recorded clearly, whereas other parts are blurry. Although digital holographic microscopy allows post-processing of holograms to reconstruct multifocus images, its numerical reconstruction suffers from defocus noise, much like a traditional microscope. In this paper, we demonstrate a method that achieves extended focused imaging (EFI) and reconstructs a depth map (DM) of three-dimensional (3D) objects. We first use a depth-from-focus algorithm, based on entropy minimization, to create a per-pixel DM. We then show how to achieve EFI of the whole 3D scene computationally. Simulation and experimental results involving objects with multiple axial sections validate the proposed approach.
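The two steps can be sketched in a toy form under our own simplifications (window size, histogram binning, and the synthetic stack are illustrative assumptions, not the paper's settings): pick, per pixel, the reconstruction slice whose local window has minimum entropy, then assemble the EFI by sampling each pixel from its selected slice.

```python
# Toy entropy-minimization depth map (DM) and extended-focus image (EFI).
import numpy as np

def local_entropy(patch, bins=8):
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def depth_map_and_efi(stack, win=3):
    """stack: (D, H, W) reconstructions; returns per-pixel depth indices and EFI."""
    d, h, w = stack.shape
    r = win // 2
    dm = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            ys, ye = max(i - r, 0), min(i + r + 1, h)
            xs, xe = max(j - r, 0), min(j + r + 1, w)
            ent = [local_entropy(stack[k, ys:ye, xs:xe]) for k in range(d)]
            dm[i, j] = int(np.argmin(ent))            # minimum-entropy slice
    efi = np.take_along_axis(stack, dm[None], axis=0)[0]  # sample chosen slices
    return dm, efi

# Synthetic stack: slice 1 is "focused" (compact histogram, zero entropy);
# slice 0 spreads its values across many bins (defocus-like).
focused = np.full((8, 8), 0.5)
defocused = ((np.arange(8)[:, None] + np.arange(8)[None, :]) % 8) / 8.0 + 0.01
dm, efi = depth_map_and_efi(np.stack([defocused, focused]))
print(int(dm.mean()))  # → 1 (every pixel selects the low-entropy slice)
```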
49
Lam EY. Computational photography with plenoptic camera and light field capture: tutorial. J Opt Soc Am A Opt Image Sci Vis 2015; 32:2021-2032. [PMID: 26560916 DOI: 10.1364/josaa.32.002021] [Citation(s) in RCA: 7]
Abstract
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic film in most instances. The latest revolution is computational photography, which seeks to make image-reconstruction computation an integral part of the image-formation process; in this way, the overall imaging system can gain new capabilities or better performance. A leading effort in this area is the plenoptic camera, which aims to capture the light field of an object; proper reconstruction algorithms can then adjust the focus after image capture. In this tutorial paper, we first illustrate the concepts of the plenoptic function and the light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and the computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last but not least, we consider the trade-off in spatial resolution and highlight research work to increase the spatial resolution of the resulting images.
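The post-capture refocusing described in the tutorial can be sketched with the classic shift-and-add formulation (our own toy implementation, with integer-pixel shifts and a synthetic light field as simplifying assumptions): each sub-aperture view is shifted in proportion to its (u, v) position, with a factor `alpha` selecting the focal plane, and the shifted views are averaged.

```python
# Toy shift-and-add light field refocusing.
import numpy as np

def refocus(lightfield, alpha):
    """lightfield: (U, V, H, W) array of sub-aperture views."""
    u_n, v_n, h, w = lightfield.shape
    out = np.zeros((h, w))
    for u in range(u_n):
        for v in range(v_n):
            du = int(round(alpha * (u - u_n // 2)))  # shift proportional to
            dv = int(round(alpha * (v - v_n // 2)))  # aperture coordinate
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (u_n * v_n)

# Synthetic point source whose parallax matches alpha = 1: it moves one
# pixel per view, so refocusing at alpha = 1 realigns it to a sharp point.
U = V = 3; H = W = 9
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        img = np.zeros((H, W))
        img[4 - (u - 1), 4 - (v - 1)] = 1.0
        lf[u, v] = img

sharp = refocus(lf, alpha=1.0)   # focal plane matches the point's depth
blurry = refocus(lf, alpha=0.0)  # wrong focal plane: energy stays spread
print(sharp.max() > blurry.max())  # → True
```

Varying `alpha` sweeps the synthetic focal plane through the scene, which is exactly the "adjust the focus after the image capture" capability the tutorial discusses.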
50
Wu X, Liu S, Lv W, Lam EY. Sparse nonlinear inverse imaging for shot count reduction in inverse lithography. Opt Express 2015; 23:26919-26931. [PMID: 26480353 DOI: 10.1364/oe.23.026919] [Citation(s) in RCA: 0]
Abstract
Inverse lithography technique (ILT) is important for reducing the feature size in ArF optical lithography, owing to its strong ability to overcome the optical proximity effect. A critical issue for inverse lithography is the complex curvilinear patterns it produces, which are very costly to write because of the large number of shots needed with current variable-shaped-beam (VSB) writers. In this paper, we devise an inverse lithography method that reduces the shot count by incorporating model-based fracturing (MBF) into the optimization. The MBF is formulated as a sparse nonlinear inverse imaging problem that represents the mask as a linear combination of shots followed by a threshold function. The problem is approached with a Gauss-Newton algorithm adapted to promote sparsity of the solution, which corresponds to a reduction in shot count. Simulations of inverse lithography on several test cases demonstrate a reduced shot count for the resulting mask.