1. Fanous MJ, Casteleiro Costa P, Işıl Ç, Huang L, Ozcan A. Neural network-based processing and reconstruction of compromised biophotonic image data. Light Sci Appl 2024; 13:231. [PMID: 39237561] [DOI: 10.1038/s41377-024-01544-9]
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, and then compensating for the resulting defects with deep learning models trained on large amounts of ideal, superior, or alternative data. This strategic approach has grown in popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to bolster, in return, other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data.
Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Affiliation(s)
- Michael John Fanous
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Paloma Casteleiro Costa
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Çağatay Işıl
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Luzhe Huang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
  - Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
2.
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. [PMID: 38838549] [DOI: 10.1016/j.ceb.2024.102378]
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
3.
Xu J, Feng T, Wang A, Xu F, Pan A. Fourier ptychographic microscopy with adaptive resolution strategy. Opt Lett 2024; 49:3548-3551. [PMID: 38950206] [DOI: 10.1364/ol.525289]
Abstract
Fourier ptychographic microscopy (FPM) is a method capable of reconstructing a high-resolution, wide field-of-view (FOV) image, where dark-field images provide the high-frequency information required for the iterative process. Theoretically, using more dark-field images leads to results with higher resolution. However, the resolution required to clearly resolve samples at different microscales varies. For certain samples, the limiting resolution of the imaging system may exceed the one required to resolve their details. This suggests that simply increasing the number of dark-field images will not improve the recognition capability for such samples and may instead significantly increase the computational cost. To address this issue, this Letter proposes an adaptive resolution strategy that automatically assigns the resolution required for the sample. Based on a Tenengrad approach, this strategy determines the number of images required for reconstruction by evaluating a series of differential images among the reconstructions for a certain subregion, and then efficiently completes the full-FOV reconstruction according to the determined resolution. We conducted the full-FOV reconstruction utilizing feature-domain FPM for both the USAF resolution test chart and a human red blood cell sample. Employing the adaptive resolution strategy, the reconstruction resolution is preserved while saving approximately 76% and 89% of the computation time, respectively.
4.
Xu F, Wu Z, Tan C, Liao Y, Wang Z, Chen K, Pan A. Fourier Ptychographic Microscopy 10 Years on: A Review. Cells 2024; 13:324. [PMID: 38391937] [PMCID: PMC10887115] [DOI: 10.3390/cells13040324]
Abstract
Fourier ptychographic microscopy (FPM) emerged as a prominent imaging technique in 2013, attracting significant interest due to its remarkable features such as precise phase retrieval, expansive field of view (FOV), and superior resolution. Over the past decade, FPM has become an essential tool in microscopy, with applications in metrology, scientific research, biomedicine, and inspection. This achievement arises from its ability to effectively address the persistent challenge of achieving a trade-off between FOV and resolution in imaging systems. It has a wide range of applications, including label-free imaging, drug screening, and digital pathology. In this comprehensive review, we present a concise overview of the fundamental principles of FPM and compare it with similar imaging techniques. In addition, we present a study on achieving colorization of restored photographs and enhancing the speed of FPM. Subsequently, we showcase several FPM applications utilizing the previously described technologies, with a specific focus on digital pathology, drug screening, and three-dimensional imaging. We thoroughly examine the benefits and challenges associated with integrating deep learning and FPM. To summarize, we express our own viewpoints on the technological progress of FPM and explore prospective avenues for its future developments.
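The FOV-resolution trade-off that the review says FPM resolves follows from its forward model: each tilted LED shifts the object spectrum, so a fixed, pupil-sized circle samples a different patch of Fourier space per capture. A minimal numerical sketch of that forward model (array sizes, shifts, and names are illustrative choices of ours, not from the review):

```python
import numpy as np

def fpm_capture(obj: np.ndarray, pupil_radius: float, shift: tuple) -> np.ndarray:
    """One raw FPM frame: the pupil crops a shifted patch of the object spectrum."""
    n = obj.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    yy, xx = np.mgrid[:n, :n] - n // 2
    # Tilted illumination is equivalent to shifting the pupil in Fourier space.
    pupil = (yy - shift[0])**2 + (xx - shift[1])**2 <= pupil_radius**2
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    return np.abs(field)**2  # the camera records intensity only

rng = np.random.default_rng(0)
obj = rng.random((64, 64))
bright_field = fpm_capture(obj, 8, (0, 0))   # on-axis LED: patch includes DC
dark_field = fpm_capture(obj, 8, (0, 20))    # tilted LED: high frequencies only
```

Stitching these pupil-sized patches back together iteratively in Fourier space is what produces the large synthetic aperture; the dark-field frames are the ones carrying resolution beyond the pupil cutoff.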
Affiliation(s)
- Fannuo Xu
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Zipei Wu
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - School of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Chao Tan
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Yizheng Liao
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Zhiping Wang
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - School of Physical Science and Technology, Lanzhou University, Lanzhou 730000, China
- Keru Chen
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- An Pan
  - State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
5.
Thapa V, Galande AS, Ram GHP, John R. TIE-GANs: single-shot quantitative phase imaging using transport of intensity equation with integration of GANs. J Biomed Opt 2024; 29:016010. [PMID: 38293292] [PMCID: PMC10826717] [DOI: 10.1117/1.jbo.29.1.016010]
Abstract
Significance: Artificial intelligence (AI) has become a prominent technology in computational imaging over the past decade. The expeditious and label-free characteristics of quantitative phase imaging (QPI) render it a promising contender for AI investigation. Though interferometric methodologies exhibit potential efficacy, their implementation involves complex experimental platforms and computationally intensive reconstruction procedures. Hence, non-interferometric methods, such as the transport of intensity equation (TIE), are preferred over interferometric methods.
Aim: The TIE method, despite its effectiveness, is tedious, as it requires the acquisition of many images at varying defocus planes. The proposed methodology can generate a phase image from a single intensity image using generative adversarial networks (GANs). We present a method called TIE-GANs to overcome the multi-shot scheme of conventional TIE.
Approach: The present investigation employs TIE as the QPI methodology, which necessitates reduced experimental and computational effort; TIE is also used for dataset preparation. The proposed method captures images from different defocus planes for training. Our approach uses an image-to-image translation technique based on GANs to produce phase maps. The main contribution of this work is the introduction of GANs with TIE (TIE-GANs), which gives better phase reconstruction results with shorter computation times. This is the first time GANs have been proposed for TIE phase retrieval.
Results: The system was characterized with microbeads of 4 μm size, for which the structural similarity index (SSIM) was found to be 0.98. We demonstrated the application of the proposed method with oral cells, which yielded a maximum SSIM value of 0.95. The key metrics include mean squared error and peak signal-to-noise ratio values of 140 and 26.42 dB for oral cells and 100 and 28.10 dB for microbeads.
Conclusions: The proposed methodology can generate a phase image from a single intensity image. Our method is feasible for digital cytology because of its reported high SSIM values. Our approach can handle defocused images: it can take an intensity image from any defocus plane within the provided range and generate the corresponding phase map.
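For readers unfamiliar with the multi-shot baseline the GAN replaces, the conventional TIE inversion fits in a few lines in the simplest, uniform-intensity case, where the TIE reduces to a Poisson equation ∇²φ = -(k/I₀) ∂I/∂z that an FFT solves directly. A minimal sketch under that simplifying assumption (names and discretization are ours, not the paper's):

```python
import numpy as np

def tie_phase(dIdz: np.ndarray, I0: float, k: float, dx: float) -> np.ndarray:
    """Solve the uniform-intensity TIE, laplacian(phi) = -(k/I0) dI/dz, via FFT."""
    n = dIdz.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    laplacian = -(2 * np.pi)**2 * (fx**2 + fy**2)  # spectral Laplacian symbol
    laplacian[0, 0] = 1.0  # phase is defined only up to a constant; skip DC
    phi = np.fft.ifft2(np.fft.fft2(-(k / I0) * dIdz) / laplacian).real
    return phi - phi.mean()  # zero-mean convention
```

The axial derivative dI/dz is what conventionally requires two or more defocused captures (e.g., by finite difference), which is exactly the acquisition burden the single-shot GAN approach removes.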
Affiliation(s)
- Vikas Thapa
  - Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Ashwini Subhash Galande
  - Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Gurram Hanu Phani Ram
  - Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
- Renu John
  - Indian Institute of Technology Hyderabad, Medical Optics and Sensors Laboratory, Department of Biomedical Engineering, Hyderabad, Telangana, India
6.
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. [PMID: 38161203] [PMCID: PMC10758000] [DOI: 10.1038/s41377-023-01340-x]
Abstract
Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL provides support for PR at the following three stages, namely, pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
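Among the conventional PR methods the review introduces first, the Gerchberg-Saxton algorithm is the classic alternating-projection baseline against which DL solvers are often compared: it bounces between the object and Fourier planes, each pass replacing the current modulus with the measured one while keeping the current phase. A minimal sketch, illustrative rather than the review's code:

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_far, n_iter=200, seed=0):
    """Alternate measured-modulus constraints between object and Fourier planes."""
    rng = np.random.default_rng(seed)
    # Start from the measured object modulus with a random phase guess.
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = amp_far * np.exp(1j * np.angle(far))      # enforce far-field modulus
        field = np.fft.ifft2(far)
        field = amp_obj * np.exp(1j * np.angle(field))  # enforce object modulus
    return field  # complex field whose angle is the phase estimate
```

Fienup's classic analysis shows the Fourier-domain modulus error of this scheme is non-increasing, which makes it a convenient sanity baseline when evaluating learned solvers.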
Affiliation(s)
- Kaiqiang Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
  - School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
  - Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Li Song
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren
  - School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao
  - Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou
  - School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di
  - School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis
  - Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou
  - Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao
  - School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y Lam
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
7.
Hu X, Jia X, Zhang K, Lo TW, Fan Y, Liu D, Wen J, Yong H, Rahmani M, Zhang L, Lei D. Deep-learning-augmented microscopy for super-resolution imaging of nanoparticles. Opt Express 2024; 32:879-890. [PMID: 38175110] [DOI: 10.1364/oe.505060]
Abstract
Conventional optical microscopes generally provide blurry and indistinguishable images of subwavelength nanostructures. However, a wealth of intensity and phase information is hidden in the corresponding diffraction-limited optical patterns and can be used for the recognition of structural features, such as size, shape, and spatial arrangement. Here, we apply a deep-learning framework to improve the spatial resolution of optical imaging for metal nanostructures with regular shapes yet varied arrangement. A convolutional neural network (CNN) is constructed and pre-trained with the optical images of randomly distributed gold nanoparticles as input and the corresponding scanning electron microscopy images as ground truth. The CNN then learns to recover super-resolution images of both regularly arranged nanoparticle dimers and randomly clustered nanoparticle multimers from their blurry optical images. The profiles and orientations of these structures can also be reconstructed accurately. Moreover, the same network is extended to deblur the optical images of randomly cross-linked silver nanowires. Most sections of these intricate nanowire nets are recovered well, with a slight discrepancy near their intersections. This deep-learning-augmented framework opens new opportunities for computational super-resolution optical microscopy, with many potential applications in the fields of bioimaging and nanoscale fabrication and characterization. It could also be applied to significantly enhance the resolving capability of low-magnification scanning electron microscopy.
8.
Wu R, Luo Z, Liu M, Zhang H, Zhen J, Yan L, Luo J, Wu Y. Fast Fourier ptychographic quantitative phase microscopy for in vitro label-free imaging. Biomed Opt Express 2024; 15:95-113. [PMID: 38223174] [PMCID: PMC10783909] [DOI: 10.1364/boe.505267]
Abstract
Quantitative phase microscopy (QPM) is indispensable in biomedical research due to its advantages in unlabeled transparent sample thickness quantification and obtaining refractive index information. Fourier ptychographic microscopy (FPM) is among the most promising QPM methods, incorporating multi-angle illumination and iterative phase recovery for high-resolution quantitative phase imaging (QPI) of large cell populations over a wide field-of-view (FOV) in a single pass. However, FPM is limited by data redundancy and sequential acquisition strategies, resulting in low imaging efficiency, which in turn limits its real-time application in in vitro label-free imaging. Here, we report a fast QPM based on Fourier ptychography (FQP-FPM), which uses an optimized annular downsampling and parallel acquisition strategy to minimize the amount of data required at the front end and reduce the iteration time of the back-end algorithm (3.3% and 4.4% of conventional FPM, respectively). Theoretical and data redundancy analyses show that FQP-FPM can realize high-throughput quantitative phase reconstruction at thrice the resolution of the coherent diffraction limit by acquiring only ten raw images, providing a precondition for in vitro label-free real-time imaging. The FQP-FPM application was validated for various in vitro label-free live-cell imaging tasks. Cell morphology and subcellular phenomena in different periods were observed with a synthetic aperture of 0.75 NA at a 10× FOV, demonstrating its advantages and application potential for fast high-throughput QPI.
Affiliation(s)
- Ruofei Wu
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Zicong Luo
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Mingdi Liu
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Haiqi Zhang
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Junrui Zhen
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Lisong Yan
  - School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China
- Jiaxiong Luo
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
- Yanxiong Wu
  - School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
  - Ji Hua Laboratory, Foshan, Guangdong 528200, China
9.
Gao H, Pan A, Gao Y, Zhang Y, Wan Q, Mu T, Yao B. Redundant information model for Fourier ptychographic microscopy. Opt Express 2023; 31:42822-42837. [PMID: 38178392] [DOI: 10.1364/oe.505407]
Abstract
Fourier ptychographic microscopy (FPM) is a computational optical imaging technique that overcomes the traditional trade-off between resolution and field of view (FOV) by exploiting abundant redundant information in both spatial and frequency domains for high-quality image reconstruction. However, the redundant information in FPM remains ambiguous or abstract, which presents challenges to further enhance imaging capabilities and deepen our understanding of the FPM technique. Inspired by Shannon's information theory and extensive experimental experience in FPM, we defined the specimen complexity and reconstruction algorithm utilization rate and reported a model of redundant information for FPM to predict reconstruction results and guide the optimization of imaging parameters. The model has been validated through extensive simulations and experiments. In addition, it provides a useful tool to evaluate different algorithms, revealing a utilization rate of 24%±1% for the Gauss-Newton algorithm, LED Multiplexing, Wavelength Multiplexing, EPRY-FPM, and GS. In contrast, mPIE exhibits a lower utilization rate of 19%±1%.
10.
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. Laser Photonics Rev 2023; 17:2200029. [PMID: 38883699] [PMCID: PMC11178318] [DOI: 10.1002/lpor.202200029]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects, without the need for the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state-of-the-art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often been developing separately. The ultimate intent of this paper is to create a vision for the current and future developments of LFSR imaging based on its physical mechanisms and to create an opening for the series of articles in this field.
Affiliation(s)
- Vasily N. Astratov
  - Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
- Yair Ben Sahel
  - Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yonina C. Eldar
  - Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Luzhe Huang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
  - Bioengineering Department, University of California, Los Angeles, California 90095, USA
  - California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
  - Bioengineering Department, University of California, Los Angeles, California 90095, USA
  - California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
  - David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
- Nikolay Zheludev
  - Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
  - Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
- Junxiang Zhao
  - Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Zachary Burns
  - Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Zhaowei Liu
  - Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
  - Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Evgenii Narimanov
  - School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
- Neha Goswami
  - Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
- Gabriel Popescu
  - Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
- Emanuel Pfitzner
  - Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
- Philipp Kukura
  - Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
- Yi-Teng Hsiao
  - Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
- Chia-Lung Hsieh
  - Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
- Brian Abbey
  - Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
  - Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
- Alberto Diaspro
  - Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
  - DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Aymeric LeGratiet
  - Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
  - Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
- Paolo Bianchini
  - Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
  - DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Natan T. Shaked
  - Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
- Bertrand Simon
  - LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence, France
- Nicolas Verrier
  - IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
- Olivier Haeberlé
  - IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
- Sheng Wang
  - School of Physics and Technology, Wuhan University, China
  - Wuhan Institute of Quantum Technology, China
- Mengkun Liu
  - Department of Physics and Astronomy, Stony Brook University, USA
  - National Synchrotron Light Source II, Brookhaven National Laboratory, USA
- Yeran Bai
  - Boston University Photonics Center, Boston, MA 02215, USA
- Ji-Xin Cheng
  - Boston University Photonics Center, Boston, MA 02215, USA
- Behjat S. Kariman
  - Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
  - DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
- Katsumasa Fujita
  - Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
- Moshe Sinvani
  - Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan 52900, Israel
- Zeev Zalevsky
  - Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan 52900, Israel
- Xiangping Li
  - Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
- Guan-Jie Huang
  - Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
  - Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
- Shi-Wei Chu
  - Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
  - Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
- Omer Tzang
  - School of Chemistry, The Sackler Faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
- Dror Hershkovitz
  - School of Chemistry, The Sackler Faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
Collapse
|
11
|
Park J, Bai B, Ryu D, Liu T, Lee C, Luo Y, Lee MJ, Huang L, Shin J, Zhang Y, Ryu D, Li Y, Kim G, Min HS, Ozcan A, Park Y. Artificial intelligence-enabled quantitative phase imaging methods for life sciences. Nat Methods 2023; 20:1645-1660. [PMID: 37872244 DOI: 10.1038/s41592-023-02041-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 09/11/2023] [Indexed: 10/25/2023]
Abstract
Quantitative phase imaging, integrated with artificial intelligence, allows for the rapid and label-free investigation of the physiology and pathology of biological systems. This review presents the principles of various two-dimensional and three-dimensional label-free phase imaging techniques that exploit refractive index as an intrinsic optical imaging contrast. In particular, we discuss artificial intelligence-based analysis methodologies for biomedical studies including image enhancement, segmentation of cellular or subcellular structures, classification of types of biological samples and image translation to furnish subcellular and histochemical information from label-free phase images. We also discuss the advantages and challenges of artificial intelligence-enabled quantitative phase imaging analyses, summarize recent notable applications in the life sciences, and cover the potential of this field for basic and industrial research in the life sciences.
Affiliation(s)
- Juyeon Park
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Bijie Bai
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- DongHun Ryu
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Tairan Liu
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Chungha Lee
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Yi Luo
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Mahn Jae Lee
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Luzhe Huang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Jeongwon Shin
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Department of Biological Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Yijie Zhang
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Yuzhu Li
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Geon Kim
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- YongKeun Park
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Tomocube, Daejeon, Republic of Korea
|
12
|
Shang R, O’Brien MA, Wang F, Situ G, Luke GP. Approximating the uncertainty of deep learning reconstruction predictions in single-pixel imaging. COMMUNICATIONS ENGINEERING 2023; 2:53. [PMID: 38463559 PMCID: PMC10923550 DOI: 10.1038/s44172-023-00103-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Accepted: 07/23/2023] [Indexed: 03/12/2024]
Abstract
Single-pixel imaging (SPI) has the advantages of high-speed acquisition over a broad wavelength range and system compactness. Deep learning (DL) is a powerful tool that can achieve higher image quality than conventional reconstruction approaches. Here, we propose a Bayesian convolutional neural network (BCNN) to approximate the uncertainty of the DL predictions in SPI. Each pixel in the predicted image represents a probability distribution rather than an image intensity value, indicating the uncertainty of the prediction. We show that the BCNN uncertainty predictions are correlated to the reconstruction errors. When the BCNN is trained and used in practical applications where the ground truths are unknown, the level of the predicted uncertainty can help to determine whether system, data, or network adjustments are needed. Overall, the proposed BCNN can provide a reliable tool to indicate the confidence levels of DL predictions as well as the quality of the model and dataset for many applications of SPI.
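The per-pixel uncertainty idea described in this abstract can be sketched with Monte Carlo sampling over stochastic forward passes. Below, a toy dropout-style linear "reconstructor" stands in for the BCNN; the dimensions, dropout rate, and random weights are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_predictions(measurement, n_samples=50, dropout_p=0.2):
    """Toy stand-in for a Bayesian CNN in single-pixel imaging:
    each stochastic forward pass reconstructs a small 'image' from the
    measurement vector, and the spread across passes is the uncertainty."""
    weights = rng.standard_normal((measurement.size, 16))  # fixed 'trained' weights
    samples = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > dropout_p       # dropout-style stochasticity
        samples.append(measurement @ (weights * mask) / (1 - dropout_p))
    samples = np.stack(samples)
    # per-pixel predictive mean and uncertainty (std over stochastic passes)
    return samples.mean(axis=0), samples.std(axis=0)

measurement = rng.standard_normal(8)      # stand-in for single-pixel measurements
mean, std = mc_predictions(measurement)   # prediction plus per-pixel uncertainty
```

In the paper's setting, pixels with large `std` would flag unreliable regions of the reconstruction even when no ground truth is available.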
Affiliation(s)
- Ruibo Shang
  - Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
  - Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Fei Wang
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  - Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Guohai Situ
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  - Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
  - Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
- Geoffrey P. Luke
  - Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
|
13
|
Han L, Su H, Yin Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1068-1082. [PMID: 36409800 DOI: 10.1109/tmi.2022.3223677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of this specially designed microscope, phase contrast images contain artifacts such as halo and shade-off that hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. These approximated models do not exactly reflect the imaging principle of the phase contrast microscope, so image restoration by solving the corresponding deconvolution problem is imperfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, since mathematically deriving the inverse operator of the model is technically infeasible. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images allow high-quality cell segmentation to be achieved with simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
|
14
|
Wang H, Zhu J, Sung J, Hu G, Greene J, Li Y, Park S, Kim W, Lee M, Yang Y, Tian L. Fourier ptychographic topography. OPTICS EXPRESS 2023; 31:11007-11018. [PMID: 37155746 DOI: 10.1364/oe.481712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Topography measurement is essential for surface characterization, semiconductor metrology, and inspection applications. To date, performing high-throughput and accurate topography remains challenging due to the trade-off between field-of-view (FOV) and spatial resolution. Here we demonstrate a novel topography technique based on the reflection-mode Fourier ptychographic microscopy, termed Fourier ptychographic topography (FPT). We show that FPT provides both a wide FOV and high resolution, and achieves nanoscale height reconstruction accuracy. Our FPT prototype is based on a custom-built computational microscope consisting of programmable brightfield and darkfield LED arrays. The topography reconstruction is performed by a sequential Gauss-Newton-based Fourier ptychographic phase retrieval algorithm augmented with total variation regularization. We achieve a synthetic numerical aperture (NA) of 0.84 and a diffraction-limited resolution of 750 nm, increasing the native objective NA (0.28) by 3×, across a 1.2 × 1.2 mm2 FOV. We experimentally demonstrate the FPT on a variety of reflective samples with different patterned structures. The reconstructed resolution is validated on both amplitude and phase resolution test features. The accuracy of the reconstructed surface profile is benchmarked against high-resolution optical profilometry measurements. In addition, we show that the FPT provides robust surface profile reconstructions even on complex patterns with fine features that cannot be reliably measured by the standard optical profilometer. The spatial and temporal noise of our FPT system is characterized to be 0.529 nm and 0.027 nm, respectively.
|
15
|
You X, Liu J, Li Y, Jiang Y, Liu J. 3D microscopy in industrial measurements. J Microsc 2023; 289:137-156. [PMID: 36427335 DOI: 10.1111/jmi.13161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 11/19/2022] [Accepted: 11/21/2022] [Indexed: 11/27/2022]
Abstract
Quality control is essential to ensure the performance and yield of microdevices in industrial processing and manufacturing. In particular, 3D microscopy can be considered as a separate branch of microscopic instruments and plays a pivotal role in monitoring processing quality. For industrial measurements, 3D microscopy is mainly used for both the inspection of critical dimensions to ensure the design performance and detection of defects for improving the yield of microdevices. However, with the progress of advanced manufacturing technology and the increasing demand for high-performance microdevices, 3D microscopy has ushered in new challenges and development opportunities, such as breakthroughs in diffraction limit, 3D characterisation and calibrations of critical dimensions, high-precision detection and physical property determination of defects, and application of artificial intelligence. In this review, we provide a comprehensive survey about the state of the art and challenges in 3D microscopy for industrial measurements, and provide development ideas for future research. By describing techniques and methods with their advantages and limitations, we provide guidance to researchers and developers about the most suitable technique available for their intended industrial measurements.
Affiliation(s)
- Xiaoyu You
  - Advanced Microscopy and Instrumentation Research Centre, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Lab of Ultra-Precision Intelligent Instrumentation, Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Laboratory of Microsystems and Microstructures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Jing Liu
  - Advanced Microscopy and Instrumentation Research Centre, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Lab of Ultra-Precision Intelligent Instrumentation, Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Laboratory of Microsystems and Microstructures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Yifei Li
  - Advanced Microscopy and Instrumentation Research Centre, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Lab of Ultra-Precision Intelligent Instrumentation, Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Laboratory of Microsystems and Microstructures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Yong Jiang
  - Advanced Microscopy and Instrumentation Research Centre, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Lab of Ultra-Precision Intelligent Instrumentation, Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Laboratory of Microsystems and Microstructures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Jian Liu
  - Advanced Microscopy and Instrumentation Research Centre, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Lab of Ultra-Precision Intelligent Instrumentation, Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China
  - Key Laboratory of Microsystems and Microstructures Manufacturing, Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang, China
|
16
|
Cifci MA. A Deep Learning-Based Framework for Uncertainty Quantification in Medical Imaging Using the DropWeak Technique: An Empirical Study with Baresnet. Diagnostics (Basel) 2023; 13:800. [PMID: 36832288 PMCID: PMC9955446 DOI: 10.3390/diagnostics13040800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 02/13/2023] [Accepted: 02/15/2023] [Indexed: 02/22/2023] Open
Abstract
Lung cancer is a leading cause of cancer-related deaths globally, and early detection is crucial for improving patient survival rates. Deep learning (DL) has shown promise in the medical field, but its accuracy must be evaluated, particularly in the context of lung cancer classification. In this study, we conducted uncertainty analysis on various frequently used DL architectures, including Baresnet, to assess the uncertainties in the classification results. The study presents a novel automatic tumor classification system for lung cancer based on CT images, which achieves a classification accuracy of 97.19% together with an uncertainty quantification. The results demonstrate the potential of deep learning in lung cancer classification and highlight the importance of uncertainty quantification in improving the reliability of classification results. This study's novelty lies in the incorporation of uncertainty quantification in deep learning for lung cancer classification, which can lead to more reliable and accurate diagnoses in clinical settings.
Affiliation(s)
- Mehmet Akif Cifci
  - The Institute of Computer Technology, TU Wien, 1040 Vienna, Austria
  - Department of Computer Engineering, Bandirma Onyedi Eylul University, 10200 Balikesir, Turkey
  - Department of Informatics, Klaipeda University, 92294 Klaipeda, Lithuania
|
17
|
Wang T, Jiang S, Song P, Wang R, Yang L, Zhang T, Zheng G. Optical ptychography for biomedical imaging: recent progress and future directions [Invited]. BIOMEDICAL OPTICS EXPRESS 2023; 14:489-532. [PMID: 36874495 PMCID: PMC9979669 DOI: 10.1364/boe.480685] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 12/10/2022] [Accepted: 12/10/2022] [Indexed: 05/25/2023]
Abstract
Ptychography is an enabling microscopy technique for both fundamental and applied sciences. In the past decade, it has become an indispensable imaging tool in most X-ray synchrotrons and national laboratories worldwide. However, ptychography's limited resolution and throughput in the visible light regime have prevented its wide adoption in biomedical research. Recent developments in this technique have resolved these issues and offer turnkey solutions for high-throughput optical imaging with minimum hardware modifications. The demonstrated imaging throughput is now greater than that of a high-end whole slide scanner. In this review, we discuss the basic principle of ptychography and summarize the main milestones of its development. Different ptychographic implementations are categorized into four groups based on their lensless/lens-based configurations and coded-illumination/coded-detection operations. We also highlight the related biomedical applications, including digital pathology, drug screening, urinalysis, blood analysis, cytometric analysis, rare cell screening, cell culture monitoring, cell and tissue imaging in 2D and 3D, polarimetric analysis, among others. Ptychography for high-throughput optical imaging, currently in its early stages, will continue to improve in performance and expand in its applications. We conclude this review article by pointing out several directions for its future development.
Affiliation(s)
- Tianbo Wang
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Shaowei Jiang
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Pengming Song
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - These authors contributed equally to this work
- Ruihai Wang
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Liming Yang
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Terrance Zhang
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Guoan Zheng
  - Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
|
18
|
Matlock A, Zhu J, Tian L. Multiple-scattering simulator-trained neural network for intensity diffraction tomography. OPTICS EXPRESS 2023; 31:4094-4107. [PMID: 36785385 DOI: 10.1364/oe.477396] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 12/29/2022] [Indexed: 06/18/2023]
Abstract
Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network's performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network's robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network's generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
|
19
|
Zhang D, Liu T, Kang J. Density regression and uncertainty quantification with Bayesian deep noise neural networks. Stat (Int Stat Inst) 2023; 12:e604. [PMID: 38957733 PMCID: PMC11218593 DOI: 10.1002/sta4.604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 06/17/2023] [Indexed: 07/04/2024]
Abstract
Deep neural network (DNN) models have achieved state-of-the-art predictive accuracy in a wide range of applications. However, it remains a challenging task to accurately quantify the uncertainty in DNN predictions, especially those of continuous outcomes. To this end, we propose the Bayesian deep noise neural network (B-DeepNoise), which generalizes standard Bayesian DNNs by extending the random noise variable from the output layer to all hidden layers. Our model is capable of approximating highly complex predictive density functions and fully learning the possible random variation in the outcome variables. For posterior computation, we provide a closed-form Gibbs sampling algorithm that circumvents tuning-intensive Metropolis-Hastings methods. We establish a recursive representation of the predictive density and perform theoretical analysis on the predictive variance. Through extensive experiments, we demonstrate the superiority of B-DeepNoise over existing methods in terms of density estimation and uncertainty quantification accuracy. A neuroimaging application is included to show our model's usefulness in scientific studies.
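The generative mechanism, Gaussian noise injected after every hidden layer rather than only at the output, can be sketched in a few lines (a toy two-layer network with hand-picked noise scales; this illustrates repeated sampling from the implied predictive density, not the paper's closed-form Gibbs sampler):

```python
import numpy as np

rng = np.random.default_rng(1)

def deep_noise_forward(x, weights, sigmas):
    """One stochastic forward pass: noise is added after each layer,
    so the resulting output distribution can be far from Gaussian."""
    h = x
    for W, s in zip(weights, sigmas):
        h = np.tanh(h @ W) + rng.normal(0.0, s, size=W.shape[1])
    return h

# tiny network with per-layer noise scales (illustrative values)
weights = [rng.standard_normal((3, 5)), rng.standard_normal((5, 1))]
sigmas = [0.1, 0.3]
x = np.array([0.5, -1.0, 2.0])

# repeated forward passes approximate the predictive density at x
draws = np.array([deep_noise_forward(x, weights, sigmas) for _ in range(2000)])
pred_mean, pred_std = draws.mean(), draws.std()
```

Summaries such as `pred_std`, or quantiles of `draws`, then give the kind of uncertainty estimate the abstract describes.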
Affiliation(s)
- Daiwei Zhang
  - Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Tianci Liu
  - School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907, USA
- Jian Kang
  - Department of Biostatistics, University of Michigan, Ann Arbor, Michigan 48109, USA
|
20
|
Fanous MJ, Popescu G. GANscan: continuous scanning microscopy using deep learning deblurring. LIGHT, SCIENCE & APPLICATIONS 2022; 11:265. [PMID: 36071043 PMCID: PMC9452654 DOI: 10.1038/s41377-022-00952-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 07/31/2022] [Accepted: 08/07/2022] [Indexed: 05/05/2023]
Abstract
Most whole slide imaging (WSI) systems today rely on the "stop-and-stare" approach, where, at each field of view, the scanning stage is brought to a complete stop before the camera snaps a picture. This procedure ensures that each image is free of motion blur, which comes at the expense of long acquisition times. In order to speed up the acquisition process, especially for large scanning areas, such as pathology slides, we developed an acquisition method in which the data is acquired continuously while the stage is moving at high speeds. Using generative adversarial networks (GANs), we demonstrate this ultra-fast imaging approach, referred to as GANscan, which restores sharp images from motion-blurred videos. GANscan allows us to complete image acquisitions at 30× the throughput of stop-and-stare systems. This method is implemented on a Zeiss Axio Observer Z1 microscope, requires no specialized hardware, and accomplishes successful reconstructions at stage speeds of up to 5000 μm/s. We validate the proposed method by imaging H&E stained tissue sections. Our method not only retrieves crisp images from fast, continuous scans, but also adjusts for defocusing that occurs during scanning within ±5 μm. Using a consumer GPU, the inference runs at <20 ms per image.
Affiliation(s)
- Michael John Fanous
  - Quantitative Light Imaging Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
  - Department of Bioengineering, University of Illinois at Urbana-Champaign, 306 N. Wright Street, Urbana, IL 61801, USA
- Gabriel Popescu
  - Quantitative Light Imaging Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
  - Department of Bioengineering, University of Illinois at Urbana-Champaign, 306 N. Wright Street, Urbana, IL 61801, USA
  - Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N. Wright Street, Urbana, IL 61801, USA
|
21
|
Chen Y, Xu T, Sun H, Zhang J, Huang B, Zhang J, Li J. Integration of Fourier ptychography with machine learning: an alternative scheme. BIOMEDICAL OPTICS EXPRESS 2022; 13:4278-4297. [PMID: 36032578 PMCID: PMC9408244 DOI: 10.1364/boe.464001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 07/08/2022] [Accepted: 07/08/2022] [Indexed: 06/15/2023]
Abstract
As the core task of reconstruction in conventional ptychography (CP) and Fourier ptychographic microscopy (FPM), the meticulous design of the ptychographical iterative engine (PIE) largely determines the performance of reconstruction algorithms. Compared to traditional PIE algorithms, the paradigm of combining with machine learning to escape local optima has recently achieved significant progress. Nevertheless, existing designed engines still suffer from drawbacks such as excessive hyper-parameters, heavy tuning work, and lack of compatibility, which greatly limit their practical applications. In this work, we present a complete set of alternative schemes, comprising a new perspective, a uniform design template, and a fusion framework, to naturally integrate Fourier ptychography (FP) with machine learning concepts. The new perspective, Dynamic Physics, is taken as the preferred tool to analyze a path (algorithm) at the physical level; the uniform design template, T-FP, clarifies the physical significance and optimization part of a path; the fusion framework follows two workable guidelines that are specially designed to preserve convergence and allow later localized modification of a new path, and further establishes a link between FP iterations and the gradient update in machine learning. Our scheme is compatible with both traditional FP paths and machine learning concepts. By combining ideas from both fields, we offer two design examples, MaFP and AdamFP. Results for both simulations and experiments show that algorithms designed following our scheme achieve better, faster (converging at an early stage, after a few iterations), and more stable recovery with only minimal hyper-parameter tuning, demonstrating the effectiveness and superiority of our scheme.
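The link between FP iterations and gradient updates that the authors formalize can be illustrated with a bare-bones, single-aperture toy example: a standard amplitude-replacement sub-iteration written in PIE/gradient-descent form. The array sizes, the binary corner pupil, and the learning rate below are illustrative assumptions, not the paper's T-FP template:

```python
import numpy as np

rng = np.random.default_rng(2)

def fp_gradient_step(obj_spectrum, pupil, measured_amp, lr=1.0):
    """One FP sub-iteration in gradient-update form: impose the measured
    amplitude in image space, then push the residual back into the object
    spectrum, weighted by the conjugate pupil (the classic PIE-style step)."""
    exit_wave = np.fft.ifft2(obj_spectrum * pupil)
    corrected = measured_amp * np.exp(1j * np.angle(exit_wave))
    residual = np.fft.fft2(corrected - exit_wave)
    return obj_spectrum + lr * np.conj(pupil) * residual / (np.abs(pupil) ** 2).max()

n = 16
truth = np.fft.fft2(rng.random((n, n)))             # spectrum of a random 'sample'
pupil = np.zeros((n, n)); pupil[:4, :4] = 1.0       # binary low-NA aperture (toy)
measured_amp = np.abs(np.fft.ifft2(truth * pupil))  # one simulated intensity measurement

spec = np.fft.fft2(np.ones((n, n)))                 # flat initial guess
err0 = np.abs(np.abs(np.fft.ifft2(spec * pupil)) - measured_amp).mean()
for _ in range(20):
    spec = fp_gradient_step(spec, pupil, measured_amp)
err = np.abs(np.abs(np.fft.ifft2(spec * pupil)) - measured_amp).mean()
# the data-fidelity error decreases across iterations, as under gradient descent
```

Seen this way, `lr` plays the role of a learning rate, which is exactly the slot where momentum- or Adam-style updates (as in the MaFP/AdamFP examples) can be substituted.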
Affiliation(s)
- Yiwen Chen
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Tingfa Xu (contributed equally)
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Haixin Sun
  - School of Electronic and Information Engineering, Changchun University, Changchun 130022, China
- Jizhou Zhang
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Bo Huang
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Jinhua Zhang
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Jianan Li (contributed equally)
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
22. Sun X, Zhang S, Shi Y. Cryptanalysis of an optical cryptosystem with uncertainty quantification in a probabilistic model. Applied Optics 2022; 61:5567-5574. [PMID: 36255783 DOI: 10.1364/ao.457681]
Abstract
In this paper, a modified probabilistic deep learning method is proposed to attack double random phase encryption by modeling the conditional distribution of the plaintext. The well-trained probabilistic model yields both predictions of the plaintext and uncertainty quantification, the latter being introduced to optical cryptanalysis for the first time. The model's predictions are close to the real plaintexts, demonstrating its success. Uncertainty quantification reveals the reliability of each pixel in the predicted plaintext without requiring ground truth. Subsequent simulation experiments demonstrate that uncertainty quantification can effectively identify poor-quality predictions and thereby avoid the risk of relying on unreliable deep learning outputs.
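The per-pixel uncertainty idea can be sketched with a stand-in for the probabilistic model: repeated stochastic forward passes (as in Monte Carlo dropout) give a mean prediction and a spread, and a larger spread flags a less reliable pixel. The `stochastic_predict` function and its noise levels are hypothetical, not the paper's network.

```python
import random
import statistics

def stochastic_predict(x, n_samples=200):
    """Stand-in for a probabilistic decryption network: each forward pass is
    randomly perturbed, yielding a distribution of outputs per pixel. Pixels
    the 'model' finds hard (here, x >= 0.5) get a wider spread."""
    return [x + random.gauss(0.0, 0.05 if x < 0.5 else 0.3) for _ in range(n_samples)]

random.seed(0)
pixels = [0.2, 0.9]                      # two hypothetical plaintext pixels
preds, sigmas = [], []
for p in pixels:
    samples = stochastic_predict(p)
    preds.append(statistics.fmean(samples))   # point prediction
    sigmas.append(statistics.stdev(samples))  # per-pixel uncertainty

# The second pixel's higher spread flags it as less reliable,
# mirroring the paper's per-pixel uncertainty map.
assert sigmas[1] > sigmas[0]
```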
23. A Quantitative Model of International Trade Based on Deep Neural Network. Computational Intelligence and Neuroscience 2022; 2022:9811358. [PMID: 35685150 PMCID: PMC9173948 DOI: 10.1155/2022/9811358]
Abstract
This paper is an in-depth study of international trade quantification models based on deep neural networks. Building on an in-depth analysis of global trade characteristics, a summary of existing problems, and a comparative analysis of various prediction methods, this paper constructs ARIMA, BP neural network (BPNN), and deep neural network (DNN) models for a comprehensive comparison of international trade quantification. The results show that nonlinear models have advantages over linear models for quantifying global trade, and that deep models show better prediction performance than shallow ones. In addition, preprocessing of the time series is considered to improve prediction accuracy and shorten model training time. Empirical mode decomposition (EMD) is introduced to decompose the time series into intrinsic mode functions (IMFs) at different scales. The decomposed IMF series are then arranged into a matrix, and principal component analysis (PCA) is used to reduce the dimensionality and extract the features carrying the most information; these features are then input into the BPNN and DNN for combined prediction, yielding the combined models EMD-PCA-BPNN and EMD-PCA-DNN. Based on Melitz's heterogeneous-firm trade theory and its development by Chaney, a quantitative trade model incorporating production heterogeneity is constructed through a multisector extension. This paper adopts a general equilibrium analysis, which keeps the modeling process clearly structured. The completed model has high flexibility and scalability and can be applied to the quantitative analysis of various problems.
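The decompose-then-reduce preprocessing step can be sketched as follows, with synthetic sinusoids standing in for the IMFs that EMD would produce; the series, noise level, and 95% variance threshold are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for EMD output: 5 IMF series of length 100,
# arranged column-wise into a (100, 5) matrix as the abstract describes.
t = np.linspace(0, 1, 100)
imfs = np.column_stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(5)])
imfs += 0.01 * rng.standard_normal(imfs.shape)

# PCA via SVD: centre the matrix, decompose, keep enough principal
# components to explain 95% of the variance, and project onto them.
X = imfs - imfs.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
features = X @ Vt[:k].T   # reduced features, the input to the BPNN/DNN

assert features.shape[0] == 100 and 1 <= k <= 5
```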
24. Chen J, Zhang Q, Lu X, Zhong L, Tian J. Quantitative phase imaging based on model transfer learning. Optics Express 2022; 30:16115-16133. [PMID: 36221463 DOI: 10.1364/oe.453112]
Abstract
Convolutional neural networks have been widely used in optical information processing, and the generalization ability of a network depends greatly on the scale and diversity of its datasets; however, acquiring and annotating massive datasets has become a common obstacle to further progress. In this study, a model-transfer-based quantitative phase imaging (QPI) method is proposed, which fine-tunes the network parameters by loading a pre-trained base model and applying transfer learning, endowing the network with good generalization ability. Most importantly, a feature fusion method based on moment reconstruction is proposed for training-dataset generation; it can construct sufficiently rich, accurately annotated datasets that cover most situations, fundamentally solving the problem of dataset scale and representational ability. In addition, a feature distribution distance scoring (FDDS) rule is proposed to evaluate the rationality of the constructed datasets. The experimental results show that this method achieves fast and high-accuracy phase imaging for different types of samples, greatly relieving the pressure of data acquisition, annotation, and generalization in data-driven methods.
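The fine-tuning pattern described above, keeping a pre-trained base fixed and updating only the task-specific part, can be sketched with a deliberately tiny linear model; all weights, data, and the learning rate here are made up for illustration.

```python
# Toy two-layer linear model: the "base" weight comes pre-trained and is
# frozen; only the "head" is fine-tuned on new data, the transfer-learning
# pattern the QPI method relies on.
def forward(x, w_base, w_head):
    return w_head * (w_base * x)

w_base = 2.0                        # pre-trained, frozen
w_head = 0.1                        # randomly initialised, fine-tuned
data = [(1.0, 6.0), (2.0, 12.0)]    # new task: y = 6x, so the ideal head is 3.0

for _ in range(200):
    for x, y in data:
        err = forward(x, w_base, w_head) - y
        # gradient of 0.5*err^2 w.r.t. w_head only; w_base receives no update
        w_head -= 0.05 * err * (w_base * x)

assert abs(w_head - 3.0) < 1e-3     # head adapted; base untouched
```

In a real network the frozen part is many convolutional layers rather than one scalar, but the update structure is the same.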
25. Tian L. Deep learning augmented microscopy: a faster, wider view, higher resolution autofluorescence-harmonic microscopy. Light, Science & Applications 2022; 11:109. [PMID: 35462563 PMCID: PMC9035449 DOI: 10.1038/s41377-022-00801-z]
Abstract
Deep learning enables bypassing the tradeoffs between imaging speed, field of view, and spatial resolution in autofluorescence-harmonic microscopy.
Affiliation(s)
- Lei Tian
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
26. Tan Y, Hu X, Wang J. Complex amplitude field reconstruction in atmospheric turbulence based on deep learning. Optics Express 2022; 30:13070-13078. [PMID: 35472929 DOI: 10.1364/oe.450710]
Abstract
In this paper, we use deep neural networks (DNNs) to simultaneously reconstruct the amplitude and phase information of a complex light field transmitted through atmospheric turbulence. The results of amplitude and phase reconstruction by four different training methods are compared comprehensively. They indicate that the most accurate reconstruction of the complex amplitude field is obtained by feeding the amplitude and phase pattern pairs into the neural network as two channels during training.
27. Zhou S, Li J, Sun J, Zhou N, Chen Q, Zuo C. Accelerated Fourier ptychographic diffraction tomography with sparse annular LED illuminations. Journal of Biophotonics 2022; 15:e202100272. [PMID: 34846795 DOI: 10.1002/jbio.202100272]
Abstract
Fourier ptychographic diffraction tomography (FPDT) is a recently developed label-free computational microscopy technique that retrieves high-resolution, large-field three-dimensional (3D) tomograms by synthesizing a set of low-resolution intensity images obtained with a low numerical aperture (NA) objective. However, to ensure sufficient overlap of Ewald spheres in 3D Fourier space, conventional FPDT requires thousands of intensity measurements and consumes a significant amount of time for stable convergence of the iterative algorithm. Herein, we present accelerated Fourier ptychographic diffraction tomography (aFPDT), which combines sparse annular light-emitting diode (LED) illumination with multiplexed illumination to significantly decrease the amount of data and achieve computational acceleration of 3D refractive index (RI) tomography. Compared with the existing FPDT technique, aFPDT obtains equivalent high-resolution 3D RI results while reducing the data requirement by more than 40-fold. The validity of the proposed method is experimentally demonstrated on control samples and various biological cells, including polystyrene beads, unicellular algae and clustered HeLa cells, over a large field of view. With its capability for high-resolution, high-throughput 3D imaging from small amounts of data, aFPDT has the potential to further advance widespread applications in biomedicine.
Affiliation(s)
- Shun Zhou
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
  - Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, China
  - Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing, China
- Jiaji Li
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
  - Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, China
  - Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing, China
- Jiasong Sun
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
  - Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, China
  - Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing, China
- Ning Zhou
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
  - Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, China
  - Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing, China
- Qian Chen
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
- Chao Zuo
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, China
  - Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, China
  - Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing, China
28. Zhang J, Xu T, Li J, Zhang Y, Jiang S, Chen Y, Zhang J. Physics-based learning with channel attention for Fourier ptychographic microscopy. Journal of Biophotonics 2022; 15:e202100296. [PMID: 34730877 DOI: 10.1002/jbio.202100296]
Abstract
Fourier ptychographic microscopy (FPM) is a computational imaging technology for large field-of-view, high-resolution, quantitative phase imaging. In FPM, low-resolution intensity images captured with angle-varying illumination are synthesized in Fourier space with phase retrieval approaches. However, system errors such as pupil aberration and light-emitting diode (LED) intensity error seriously affect the reconstruction performance. In this article, we propose a physics-based neural network with channel attention for FPM reconstruction. With the channel attention module, introduced into physics-based neural networks for the first time, the spatial distribution of LED intensity can be adaptively corrected. In addition, the channel attention module is used to synthesize different Zernike modes and recover the pupil function. Detailed simulations and experiments validate the effectiveness and robustness of the proposed method. The results demonstrate that our method achieves better performance in high-resolution complex-field reconstruction, LED intensity correction and pupil function recovery than state-of-the-art methods. Combining physics-based neural networks with deep neural network structures such as channel attention modules significantly enhances their performance and will promote the practical application of FPM.
Affiliation(s)
- Jizhou Zhang
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing, China
- Tingfa Xu
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing, China
- Jianan Li
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Yuhan Zhang
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
  - Beijing Institute of Technology Chongqing Innovation Center, Chongqing, China
- Shenwang Jiang
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Yiwen Chen
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jinhua Zhang
  - Ministry of Education Key Laboratory of Photoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
29. Tahir W, Wang H, Tian L. Adaptive 3D descattering with a dynamic synthesis network. Light, Science & Applications 2022; 11:42. [PMID: 35210401 PMCID: PMC8873471 DOI: 10.1038/s41377-022-00730-x]
Abstract
Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a descattering network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual "expert" networks need to be trained for each condition. However, the expert's performance sharply degrades when the testing condition differs from the training. An alternative brute-force approach is to train a "generalist" network using data from diverse scattering conditions. It generally requires a larger network to encapsulate the diversity in the data and a sufficiently large training set to avoid overfitting. Here, we propose an adaptive learning framework, termed dynamic synthesis network (DSN), which dynamically adjusts the model weights and adapts to different scattering conditions. The adaptability is achieved by a novel "mixture of experts" architecture that enables dynamically synthesizing a network by blending multiple experts using a gating network. We demonstrate the DSN in holographic 3D particle imaging for a variety of scattering conditions. We show in simulation that our DSN provides generalization across a continuum of scattering conditions. In addition, we show that by training the DSN entirely on simulated data, the network can generalize to experiments and achieve robust 3D descattering. We expect the same concept can find many other applications, such as denoising and imaging in scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
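The "mixture of experts" synthesis can be sketched in miniature: a gating function produces condition-dependent blending weights, and the synthesized model is the blend of the experts' parameters. The scalar experts and the Gaussian-style gate below are illustrative, not the DSN architecture.

```python
import math

# Each "expert" is reduced to a single slope; the DSN's experts are full
# descattering networks whose weights are blended the same way.
experts = [1.0, 3.0]

def gate(condition):
    """Softmax gate over experts, driven by a scalar scattering condition.
    Expert 0 specialises near condition 0, expert 1 near condition 1."""
    logits = [-(condition - 0.0) ** 2, -(condition - 1.0) ** 2]
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def synthesized(x, condition):
    # Blend the experts' parameters, then apply the synthesized model.
    w = sum(g * e for g, e in zip(gate(condition), experts))
    return w * x

# Near condition 0 the blended model behaves like expert 0; near 1, like expert 1.
assert abs(synthesized(1.0, 0.0) - 1.0) < 0.6
assert abs(synthesized(1.0, 1.0) - 3.0) < 0.6
```

Intermediate conditions yield intermediate blends, which is what gives the synthesized model its continuum of behaviours.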
Affiliation(s)
- Waleed Tahir
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Hao Wang
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Lei Tian
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
30. Li Y, Tian L. Computer-free computational imaging: optical computing for seeing through random media. Light, Science & Applications 2022; 11:37. [PMID: 35165255 PMCID: PMC8844051 DOI: 10.1038/s41377-022-00725-8]
Abstract
Diffractive Deep Neural Network enables computer-free, all-optical "computational imaging" for seeing through unknown random diffusers at the speed of light.
Affiliation(s)
- Yunzhe Li
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Lei Tian
  - Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
  - Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
31. Wang C, Hu M, Takashima Y, Schulz TJ, Brady DJ. Snapshot ptychography on array cameras. Optics Express 2022; 30:2585-2598. [PMID: 35209395 DOI: 10.1364/oe.447499]
Abstract
We use convolutional neural networks to recover images optically down-sampled 6.7× using coherent aperture synthesis over a 16-camera array. Where conventional ptychography relies on scanning and oversampling, here we apply decompressive neural estimation to recover a full-resolution image from a single snapshot, although, as shown in simulation, multiple snapshots can be used to improve the signal-to-noise ratio (SNR). In-place training on experimental measurements eliminates the need to directly calibrate the measurement system. We also present simulations of diverse array-camera sampling strategies to explore how snapshot compressive systems might be optimized.
32. Guo Z, Levitan A, Barbastathis G, Comin R. Randomized probe imaging through deep k-learning. Optics Express 2022; 30:2247-2264. [PMID: 35209369 DOI: 10.1364/oe.445498]
Abstract
Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep-learning-based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets in photon-starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
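The key preprocessing idea, feeding the network a physics-based approximation rather than the raw measurement, can be sketched with a unitary FFT standing in for the propagation operator; the actual RPI forward model with a randomized probe is more involved, and the 16×16 object here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
obj = rng.random((16, 16))                     # toy object (amplitude only)
field = np.fft.fft2(obj, norm="ortho")         # "propagate" to the far field
meas = np.abs(field)                           # detector records magnitude only

# Back-propagate the measured magnitude (with zero phase) toward object
# space; this crude approximation, not the raw far-field intensity, is
# what the neural network is asked to refine, so it never has to learn
# the diffraction physics itself.
approx = np.fft.ifft2(meas, norm="ortho").real

assert approx.shape == obj.shape
```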
33. Li B, Tan S, Dong J, Lian X, Zhang Y, Ji X, Veeraraghavan A. Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning. Biomedical Optics Express 2022; 13:284-299. [PMID: 35154871 PMCID: PMC8803017 DOI: 10.1364/boe.444488]
Abstract
Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to "teach" a traditional wide-field microscope, one available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training on wide-field-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in their lateral resolution, z-sectioning and image contrast. Our experimental results demonstrate generalization to unseen data, stable reconstructions, and high spatial resolution even when imaging thick (∼40 micron), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
Affiliation(s)
- Bowen Li
  - Department of Automation & BNRist, Tsinghua University, Beijing, China
- Shiyu Tan
  - Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Jiuyang Dong
  - Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Xiaocong Lian
  - Department of Automation & BNRist, Tsinghua University, Beijing, China
- Yongbing Zhang
  - Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Xiangyang Ji
  - Department of Automation & BNRist, Tsinghua University, Beijing, China
- Ashok Veeraraghavan
  - Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
34. Jo Y, Cho H, Park WS, Kim G, Ryu D, Kim YS, Lee M, Park S, Lee MJ, Joo H, Jo H, Lee S, Lee S, Min HS, Heo WD, Park Y. Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning. Nature Cell Biology 2021; 23:1329-1337. [PMID: 34876684 DOI: 10.1038/s41556-021-00802-x]
Abstract
Simultaneous imaging of various facets of intact biological systems across multiple spatiotemporal scales is a long-standing goal in biology and medicine, for which progress is hindered by limits of conventional imaging modalities. Here we propose using the refractive index (RI), an intrinsic quantity governing light-matter interaction, as a means for such measurement. We show that major endogenous subcellular structures, which are conventionally accessed via exogenous fluorescence labelling, are encoded in three-dimensional (3D) RI tomograms. We decode this information in a data-driven manner, with a deep learning-based model that infers multiple 3D fluorescence tomograms from RI measurements of the corresponding subcellular targets, thereby achieving multiplexed microtomography. This approach, called RI2FL for refractive index to fluorescence, inherits the advantages of both high-specificity fluorescence imaging and label-free RI imaging. Importantly, full 3D modelling of absolute and unbiased RI improves generalization, such that the approach is applicable to a broad range of new samples without retraining to facilitate immediate applicability. The performance, reliability and scalability of this technology are extensively characterized, and its various applications within single-cell profiling at unprecedented scales (which can generate new experimentally testable hypotheses) are demonstrated.
Affiliation(s)
- YoungJu Jo
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Tomocube, Daejeon, Republic of Korea
  - Departments of Applied Physics and of Biology, Stanford University, Stanford, CA, USA
- Wei Sun Park
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Geon Kim
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- DongHun Ryu
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Young Seo Kim
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Graduate School of Medical Science and Engineering, KAIST, Daejeon, Republic of Korea
- Moosung Lee
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Sangwoo Park
  - Gwangju Center, Korea Basic Science Institute (KBSI), Gwangju, Republic of Korea
- Mahn Jae Lee
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Graduate School of Medical Science and Engineering, KAIST, Daejeon, Republic of Korea
- Seongsoo Lee
  - Gwangju Center, Korea Basic Science Institute (KBSI), Gwangju, Republic of Korea
- Sumin Lee
  - Tomocube, Daejeon, Republic of Korea
- Won Do Heo
  - Department of Biological Sciences, KAIST, Daejeon, Republic of Korea
  - KAIST Institute for the BioCentury, KAIST, Daejeon, Republic of Korea
- YongKeun Park
  - Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
  - Tomocube, Daejeon, Republic of Korea
35. Song B, Sunny S, Li S, Gurushanth K, Mendonca P, Mukhia N, Patrick S, Gurudath S, Raghavan S, Tsusennaro I, Leivon ST, Kolur T, Shetty V, Bushan VR, Ramesh R, Peterson T, Pillai V, Wilder-Smith P, Sigamani A, Suresh A, Kuriakose MA, Birur P, Liang R. Bayesian deep learning for reliable oral cancer image classification. Biomedical Optics Express 2021; 12:6422-6430. [PMID: 34745746 PMCID: PMC8547976 DOI: 10.1364/boe.432365]
Abstract
In medical imaging, deep learning-based solutions have achieved state-of-the-art performance. However, reliability concerns restrict the integration of deep learning into practical medical workflows, since conventional deep learning frameworks cannot quantitatively assess model uncertainty. In this work, we address this shortcoming with a Bayesian deep network capable of estimating uncertainty, used to assess the reliability of oral cancer image classification. We evaluate the model on a large intraoral cheek mucosa image dataset captured from a high-risk population using our customized device, and show that meaningful uncertainty information can be produced. In addition, our experiments show improved accuracy through uncertainty-informed referral: the accuracy on retained data reaches roughly 90% when either 10% of all cases are referred or all cases with uncertainty greater than 0.3 are referred, and performance improves further as more patients are referred. The experiments show the model is capable of identifying difficult cases that need further inspection.
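The uncertainty-informed referral rule can be sketched directly: predictions whose uncertainty exceeds a threshold are referred for expert review, and accuracy is computed over the retained (confident) cases only. The case records below are made up; only the 0.3 threshold comes from the abstract.

```python
# Each record is (prediction correct?, model uncertainty for that case).
cases = [
    (True, 0.05), (True, 0.10), (False, 0.45),
    (True, 0.20), (False, 0.35), (True, 0.12),
]

def retained_accuracy(cases, threshold=0.3):
    """Accuracy over cases kept after referring everything above threshold."""
    kept = [correct for correct, u in cases if u <= threshold]
    return sum(kept) / len(kept)

# In this toy set both errors carry high uncertainty, so referral removes
# them and the retained accuracy is perfect.
assert retained_accuracy(cases) == 1.0
```

Lowering the threshold refers more patients and (as the abstract reports for the real model) tends to raise retained accuracy further.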
Affiliation(s)
- Bofan Song
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, Arizona 85721, USA
- Shaobai Li
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, Arizona 85721, USA
- Nirza Mukhia
  - KLE Society Institute of Dental Sciences, Bangalore, India
- Shirley T Leivon
  - Christian Institute of Health Sciences and Research, Dimapur, India
- Trupti Kolur
  - Mazumdar Shaw Medical Foundation, Bangalore, India
- Vivek Shetty
  - Mazumdar Shaw Medical Foundation, Bangalore, India
- Rohan Ramesh
  - Christian Institute of Health Sciences and Research, Dimapur, India
- Tyler Peterson
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, Arizona 85721, USA
- Vijay Pillai
  - Mazumdar Shaw Medical Foundation, Bangalore, India
- Petra Wilder-Smith
  - Beckman Laser Institute and Medical Clinic, University of California, Irvine, California 92697, USA
- Amritha Suresh
  - Mazumdar Shaw Medical Centre, Bangalore, India
  - Mazumdar Shaw Medical Foundation, Bangalore, India
- Praveen Birur
  - KLE Society Institute of Dental Sciences, Bangalore, India
  - Mazumdar Shaw Medical Foundation, Bangalore, India
- Rongguang Liang
  - Wyant College of Optical Sciences, The University of Arizona, Tucson, Arizona 85721, USA
36
Wang Y, Jiang F, Ju G, Xu B, An Q, Zhang C, Wang S, Xu S. Deep learning wavefront sensing for fine phasing of segmented mirrors. OPTICS EXPRESS 2021; 29:25960-25978. [PMID: 34614912 DOI: 10.1364/oe.434024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 07/10/2021] [Indexed: 06/13/2023]
Abstract
A segmented primary mirror provides many crucial advantages for the construction of extra-large space telescopes. The imaging quality of this class of telescope is susceptible to phasing errors between primary mirror segments. Deep learning has been widely applied in the field of optical imaging and wavefront sensing, including the phasing of segmented mirrors. Compared to other image-based phasing techniques, such as phase retrieval and phase diversity, deep learning has the advantages of high efficiency and freedom from stagnation problems. However, at present deep learning methods are mainly applied to coarse phasing and used to estimate the piston error between segments. In this paper, a deep Bi-GRU neural network is introduced for fine phasing of segmented mirrors; it not only has a much simpler structure than a CNN or LSTM network, but can also effectively mitigate the gradient-vanishing problem in training caused by long-term dependencies. By incorporating phasing errors (piston and tip-tilt errors), some low-order aberrations, as well as other practical considerations, the Bi-GRU network can be used effectively for fine phasing of segmented mirrors. Simulations and real experiments are used to demonstrate the accuracy and effectiveness of the proposed methods.
37
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 04/02/2021] [Accepted: 04/15/2021] [Indexed: 01/02/2023]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
38
Park J, Brady DJ, Zheng G, Tian L, Gao L. Review of bio-optical imaging systems with a high space-bandwidth product. ADVANCED PHOTONICS 2021; 3:044001. [PMID: 35178513 PMCID: PMC8849623 DOI: 10.1117/1.ap.3.4.044001] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Optical imaging has served as a primary method to collect information about biosystems across scales, from the functionality of tissues to the morphological structures of cells and even biomolecular levels. However, to adequately characterize a complex biosystem, an imaging system with a number of resolvable points, referred to as the space-bandwidth product (SBP), in excess of one billion is typically needed. Since a gigapixel scale far exceeds the capacity of current optical imagers, compromises must be made, yielding either a low spatial resolution or a narrow field-of-view (FOV). The problem originates from the constituent refractive optics: the larger the aperture, the more challenging the correction of lens aberrations. Therefore, it is impractical for a conventional optical imaging system to achieve an SBP over hundreds of millions. To address this unmet need, a variety of high-SBP imagers have emerged over the past decade, enabling unprecedented resolution and FOV beyond the limits of conventional optics. We provide a comprehensive survey of high-SBP imaging techniques, exploring their underlying principles and applications in bioimaging.
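As a back-of-envelope illustration of the gap this abstract describes, the SBP of a conventional objective can be estimated as the FOV area divided by the Nyquist-sampled pixel area. The numbers below are typical values assumed for illustration, not figures from the paper:

```python
import math

def space_bandwidth_product(fov_area_mm2, resolution_um):
    """Approximate number of resolvable points: FOV area divided by the
    Nyquist pixel area (two samples per resolved period)."""
    nyquist_pixel_um = resolution_um / 2.0
    return fov_area_mm2 * 1e6 / nyquist_pixel_um ** 2  # mm^2 -> um^2

# A typical high-NA objective: ~1.1 mm FOV diameter, ~0.4 um resolution.
fov_area = math.pi * (1.1 / 2.0) ** 2
sbp = space_bandwidth_product(fov_area, 0.4)
print(f"SBP ~ {sbp:.2e}")  # tens of megapixels, well short of a gigapixel
```

The estimate lands in the tens of megapixels, consistent with the abstract's claim that conventional systems fall orders of magnitude short of a gigapixel SBP.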
Affiliation(s)
- Jongchan Park
- University of California, Department of Bioengineering, Los Angeles, California, United States
- David J. Brady
- University of Arizona, James C. Wyant College of Optical Sciences, Tucson, Arizona, United States
- Guoan Zheng
- University of Connecticut, Department of Biomedical Engineering, Storrs, Connecticut, United States
- University of Connecticut, Department of Electrical and Computer Engineering, Storrs, Connecticut, United States
- Lei Tian
- Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Liang Gao
- University of California, Department of Bioengineering, Los Angeles, California, United States
39
Nehme E, Ferdman B, Weiss LE, Naor T, Freedman D, Michaeli T, Shechtman Y. Learning Optimal Wavefront Shaping for Multi-Channel Imaging. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:2179-2192. [PMID: 34029185 DOI: 10.1109/tpami.2021.3076873] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Fast acquisition of depth information is crucial for accurate 3D tracking of moving objects. Snapshot depth sensing can be achieved by wavefront coding, in which the point-spread function (PSF) is engineered to vary distinctively with scene depth by altering the detection optics. In low-light applications, such as 3D localization microscopy, the prevailing approach is to condense signal photons into a single imaging channel with phase-only wavefront modulation to achieve a high pixel-wise signal to noise ratio. Here we show that this paradigm is generally suboptimal and can be significantly improved upon by employing multi-channel wavefront coding, even in low-light applications. We demonstrate our multi-channel optimization scheme on 3D localization microscopy in densely labelled live cells where detectability is limited by overlap of modulated PSFs. At extreme densities, we show that a split-signal system, with end-to-end learned phase masks, doubles the detection rate and reaches improved precision compared to the current state-of-the-art, single-channel design. We implement our method using a bifurcated optical system, experimentally validating our approach by snapshot volumetric imaging and 3D tracking of fluorescently labelled subcellular elements in dense environments.
40
Zhang Y, Andreas Noack M, Vagovic P, Fezzaa K, Garcia-Moreno F, Ritschel T, Villanueva-Perez P. PhaseGAN: a deep-learning phase-retrieval approach for unpaired datasets. OPTICS EXPRESS 2021; 29:19593-19604. [PMID: 34266067 DOI: 10.1364/oe.423222] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 05/27/2021] [Indexed: 06/13/2023]
Abstract
Phase retrieval approaches based on deep learning (DL) provide a framework to obtain phase information from an intensity hologram or diffraction pattern robustly and in real time. However, current DL architectures applied to the phase problem rely on (i) paired datasets, i.e., they are only applicable when a satisfactory solution of the phase problem has already been found, and (ii) most of them ignore the physics of the imaging process. Here, we present PhaseGAN, a new DL approach based on generative adversarial networks, which allows the use of unpaired datasets and includes the physics of image formation. The performance of our approach is enhanced by including the image-formation physics and a novel Fourier loss function, providing phase reconstructions where conventional phase retrieval algorithms fail, such as in ultrafast experiments. Thus, PhaseGAN offers the opportunity to address the phase problem in real time when no phase reconstructions are available, but good simulations or data from other experiments are.
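A frequency-domain loss of the general kind this abstract mentions can be sketched as follows. The exact PhaseGAN loss differs, so treat this as an assumed illustrative form:

```python
import numpy as np

def fourier_amplitude_l1(a, b):
    """Illustrative Fourier-domain loss: mean L1 distance between the
    amplitude spectra of two images (an assumed form, not the exact
    PhaseGAN loss)."""
    return float(np.mean(np.abs(np.abs(np.fft.fft2(a)) - np.abs(np.fft.fft2(b)))))

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
rolled = np.roll(img, 5, axis=0)  # circular shift leaves the amplitude spectrum unchanged

print(fourier_amplitude_l1(img, img))     # exactly 0
print(fourier_amplitude_l1(img, rolled))  # ~0: penalizes spectral content, not position
```

A loss of this shape compares what spatial frequencies two images contain rather than where features sit, which is one reason frequency-domain terms pair well with pixel-domain adversarial losses.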
41
Rawat S, Wang A. Accurate and practical feature extraction from noisy holograms. APPLIED OPTICS 2021; 60:4639-4646. [PMID: 34143020 DOI: 10.1364/ao.422479] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Accepted: 05/04/2021] [Indexed: 06/12/2023]
Abstract
Quantitative phase imaging using holographic microscopy is a powerful and non-invasive imaging method, ideal for studying cells and quantifying their features such as size, thickness, and dry mass. However, biological materials scatter little light, and the resulting low signal-to-noise ratio in holograms complicates any downstream feature extraction and hence applications. More specifically, unwrapping phase maps from noisy holograms often fails or requires extensive computational resources. We present a strategy for overcoming the noise limitation: rather than a traditional phase-unwrapping method, we extract the continuous phase values from holograms by using a phase-generation technique based on conditional generative adversarial networks employing a Pix2Pix architecture. We demonstrate that a network trained on random surfaces can accurately generate phase maps for test objects such as dumbbells, spheres, and biconcave discoids. Furthermore, we show that even a rapidly trained network can generate faithful phase maps when trained on related objects. We are able to accurately extract both morphological and quantitative features from the noisy phase maps of human leukemia (HL-60) cells, where traditional phase unwrapping algorithms fail. We conclude that deep learning can decouple noise from signal, expanding potential applications to real-world systems that may be noisy.
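The core difficulty the authors bypass is that interferometric measurements only yield phase modulo 2π. A minimal numpy illustration of wrapping and of the classical unwrapping step that the paper replaces with a learned mapping:

```python
import numpy as np

# A smooth optical-path ramp whose phase exceeds 2*pi.
true_phase = np.linspace(0.0, 4.0 * np.pi, 500)

# What a hologram effectively yields: phase wrapped into (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# Classical unwrapping integrates the wrapped differences; it works on
# clean data but fails once noise pushes a step past pi, which motivates
# learning the continuous phase directly.
unwrapped = np.unwrap(wrapped)
print(np.max(np.abs(unwrapped - true_phase)))  # ~0 on noise-free data
```

On noisy holograms the per-sample phase steps can spuriously exceed π, at which point `np.unwrap` propagates 2π errors downstream, exactly the failure mode the generative approach avoids.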
42
Shang R, Hoffer-Hawlik K, Wang F, Situ G, Luke GP. Two-step training deep learning framework for computational imaging without physics priors. OPTICS EXPRESS 2021; 29:15239-15254. [PMID: 33985227 PMCID: PMC8240457 DOI: 10.1364/oe.424165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/21/2021] [Accepted: 04/23/2021] [Indexed: 05/20/2023]
Abstract
Deep learning (DL) is a powerful tool in computational imaging for many applications. A common strategy is to use a preprocessor to reconstruct a preliminary image as the input to a neural network that produces an optimized image. Usually, the preprocessor incorporates knowledge of the physics priors in the imaging model. One outstanding challenge, however, is errors that arise from imperfections in the assumed model. Model mismatches degrade the quality of the preliminary image and therefore affect the DL predictions. Another main challenge is that many imaging inverse problems are ill-posed and the networks are over-parameterized; DL networks have the flexibility to extract features from the data that are not directly related to the imaging model, which can lead to suboptimal training and poorer image reconstruction. To address these challenges, a two-step training DL (TST-DL) framework is proposed for computational imaging without physics priors. First, a single fully connected layer (FCL) is trained to directly learn the inverse model, with the raw measurement data as the inputs and the images as the outputs. Then, this pre-trained FCL is fixed and concatenated with an untrained deep convolutional network with a U-Net architecture, and a second training step optimizes the output image. This approach has the advantage that it does not rely on an accurate representation of the imaging physics, since the first training step directly learns the inverse model. Furthermore, TST-DL mitigates network over-parameterization by separately training the FCL and the U-Net. We demonstrate this framework using a linear single-pixel camera imaging model, and the results are quantitatively compared with those from other frameworks. The TST-DL approach is shown to perform comparably to approaches that incorporate perfect knowledge of the imaging model, to be robust to noise and model ill-posedness, and to be more robust to model mismatch than approaches that incorporate imperfect knowledge of the imaging model. Furthermore, TST-DL yields better results than end-to-end training while suffering from less overfitting. Overall, the TST-DL framework is a flexible approach to image reconstruction without physics priors, applicable to diverse computational imaging systems.
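The first training step, learning a linear inverse model directly from measurement/image pairs, can be mimicked with an ordinary least-squares fit standing in for the trained FCL. The toy sizes and random measurement matrix below are purely illustrative, and the system is deliberately overdetermined so an exact linear inverse exists:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas, n_train = 64, 80, 500

A = rng.standard_normal((n_meas, n_pix))   # unknown single-pixel measurement matrix
X = rng.standard_normal((n_train, n_pix))  # training "images" (flattened)
Y = X @ A.T                                # raw measurements y = A x

# "First-step training": fit one linear layer W mapping measurements to
# images, learning the inverse model from data alone, no physics prior.
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

x_test = rng.standard_normal(n_pix)
x_hat = (A @ x_test) @ W
rel_err = np.linalg.norm(x_hat - x_test) / np.linalg.norm(x_test)
print(f"relative reconstruction error: {rel_err:.2e}")
```

In TST-DL the FCL plays this role for measurements that may be compressive and noisy, and the subsequent U-Net step cleans up what the linear stage cannot recover.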
Affiliation(s)
- Ruibo Shang
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
- Kevin Hoffer-Hawlik
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
- Fei Wang
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Guohai Situ
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
- Geoffrey P. Luke
- Thayer School of Engineering, Dartmouth College, 14 Engineering Dr., Hanover, NH 03755, USA
43
Godefroy G, Arnal B, Bossy E. Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties. PHOTOACOUSTICS 2021; 21:100218. [PMID: 33364161 PMCID: PMC7750172 DOI: 10.1016/j.pacs.2020.100218] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 10/15/2020] [Accepted: 10/17/2020] [Indexed: 05/04/2023]
Abstract
Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale model of a leaf skeleton. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground-truth images. Reconstructions produced by the neural network show greatly improved image quality compared to conventional approaches. In addition, this work aimed at quantifying the reliability of the neural network predictions. To achieve this, the dropout Monte-Carlo procedure is applied to estimate a pixel-wise degree of confidence for each predicted picture. Lastly, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset.
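The dropout Monte-Carlo procedure keeps dropout active at inference time and reads the spread of repeated stochastic predictions as a per-pixel confidence map. A toy sketch with a single random linear "network" (sizes, names, and dropout rate are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_with_dropout(x, W, p=0.5):
    """One stochastic forward pass: drop input units with probability p
    and rescale the survivors (inverted dropout)."""
    mask = rng.uniform(size=x.shape) >= p
    return (x * mask / (1.0 - p)) @ W

W = rng.standard_normal((32, 64))  # toy "network": 32 inputs -> an 8x8 image
x = rng.standard_normal(32)

samples = np.stack([forward_with_dropout(x, W) for _ in range(200)])
prediction = samples.mean(axis=0).reshape(8, 8)  # the reconstructed image
confidence = samples.std(axis=0).reshape(8, 8)   # pixel-wise uncertainty map
print(confidence.mean())
```

Pixels whose prediction varies little across stochastic passes are trusted; pixels with a large spread are flagged, which is how the paper attaches uncertainties to regions affected by visibility artefacts.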
Affiliation(s)
- Bastien Arnal
- Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
- Emmanuel Bossy
- Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
44
Li Y, Cheng S, Xue Y, Tian L. Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network. OPTICS EXPRESS 2021; 29:2244-2257. [PMID: 33726423 DOI: 10.1364/oe.411291] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 12/26/2020] [Indexed: 06/12/2023]
Abstract
Coherent imaging through scatter is a challenging task. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach can make high-quality and highly generalizable predictions through unseen diffusers. Here, we propose a new deep neural network model that is agnostic to a broader class of perturbations including scatterer change, displacements, and system defocus up to 10× depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our deep learning model and visualizing its generalizability based on an unsupervised dimension reduction technique. We show that our model can unmix the scattering-specific information and extract the object-specific information and achieve generalization under different scattering conditions. Our work paves the way to a robust and interpretable deep learning approach to imaging through scattering media.
45
Cheng S, Fu S, Kim YM, Song W, Li Y, Xue Y, Yi J, Tian L. Single-cell cytometry via multiplexed fluorescence prediction by label-free reflectance microscopy. SCIENCE ADVANCES 2021; 7:eabe0431. [PMID: 33523908 PMCID: PMC7810377 DOI: 10.1126/sciadv.abe0431] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 11/19/2020] [Indexed: 05/08/2023]
Abstract
Traditional imaging cytometry uses fluorescence markers to identify specific structures but is limited in throughput by the labeling process. We develop a label-free technique that alleviates the physical staining and provides multiplexed readouts via a deep learning-augmented digital labeling method. We leverage the rich structural information and superior sensitivity in reflectance microscopy and show that digital labeling predicts accurate subcellular features after training on immunofluorescence images. We demonstrate up to three times improvement in the prediction accuracy over the state of the art. Beyond fluorescence prediction, we demonstrate that single cell-level structural phenotypes of cell cycles are correctly reproduced by the digital multiplexed images, including Golgi twins, Golgi haze during mitosis, and DNA synthesis. We further show that the multiplexed readouts enable accurate multiparametric single-cell profiling across a large cell population. Our method can markedly improve the throughput for imaging cytometry toward applications for phenotyping, pathology, and high-content screening.
Affiliation(s)
- Shiyi Cheng
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Sipei Fu
- Department of Biology, Boston University, Boston, MA 02215, USA
- Yumi Mun Kim
- Department of Philosophy & Neuroscience, Boston University, Boston, MA 02215, USA
- Weiye Song
- Department of Medicine, Boston University School of Medicine, Boston Medical Center, Boston, MA 02118, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Ji Yi
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Department of Medicine, Boston University School of Medicine, Boston Medical Center, Boston, MA 02118, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
46
Ströhl F, Jadhav S, Ahluwalia BS, Agarwal K, Prasad DK. Object detection neural network improves Fourier ptychography reconstruction. OPTICS EXPRESS 2020; 28:37199-37208. [PMID: 33379558 DOI: 10.1364/oe.409679] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 11/06/2020] [Indexed: 06/12/2023]
Abstract
High-resolution microscopy is heavily dependent on superb optical elements, and superresolution microscopy even more so. Correcting unavoidable optical aberrations during post-processing is an elegant method to reduce the optical system's complexity. A prime method that promises superresolution, aberration correction, and quantitative phase imaging is Fourier ptychography. This microscopy technique combines many images of the sample, recorded at differing illumination angles akin to computed tomography, and uses error minimisation between the recorded images and those generated by a forward model. The more precisely those illumination angles are known to the image-formation forward model, the better the result. Therefore, illumination estimation from the raw data is an important step that supports correct phase recovery and aberration correction. Here, we show how illumination estimation can be cast as an object-detection problem that permits the use of a fast convolutional neural network (CNN) for this task. We find that Faster R-CNN delivers highly robust results and outperforms classical approaches by far, with an up to 3-fold reduction in estimation errors. Intriguingly, we find that conventionally beneficial smoothing and filtering of raw data is counterproductive in this type of application. We present a detailed analysis of the network's performance and provide all our developed software openly.
47
Butola A, Kanade SR, Bhatt S, Dubey VK, Kumar A, Ahmad A, Prasad DK, Senthilkumaran P, Ahluwalia BS, Mehta DS. High space-bandwidth in quantitative phase imaging using partially spatially coherent digital holographic microscopy and a deep neural network. OPTICS EXPRESS 2020; 28:36229-36244. [PMID: 33379722 DOI: 10.1364/oe.402666] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Accepted: 10/04/2020] [Indexed: 06/12/2023]
Abstract
Quantitative phase microscopy (QPM) is a label-free technique that enables monitoring of morphological changes at the subcellular level. The performance of a QPM system in terms of spatial sensitivity and resolution depends on the coherence properties of the light source and the numerical aperture (NA) of the objective lenses. Here, we propose high-space-bandwidth quantitative phase imaging using partially spatially coherent digital holographic microscopy (PSC-DHM) assisted by a deep neural network. The PSC source is synthesized to improve the spatial sensitivity of the phase map reconstructed from the interferometric images. Further, a compatible generative adversarial network (GAN) is trained with paired low-resolution (LR) and high-resolution (HR) datasets acquired from the PSC-DHM system. The network is trained on two different types of samples: mostly homogeneous human red blood cells (RBCs) and highly heterogeneous macrophages. The performance is evaluated by predicting HR images from datasets captured with a low-NA lens and comparing them with the actual HR phase images. An improvement of 9× in the space-bandwidth product is demonstrated for both the RBC and macrophage datasets. We believe that the PSC-DHM + GAN approach will be applicable to single-shot label-free tissue imaging, disease classification, and other high-resolution tomography applications by utilizing the longitudinal spatial coherence properties of the light source.
48
Xue Y, Davison IG, Boas DA, Tian L. Single-shot 3D wide-field fluorescence imaging with a Computational Miniature Mesoscope. SCIENCE ADVANCES 2020; 6:eabb7508. [PMID: 33087364 PMCID: PMC7577725 DOI: 10.1126/sciadv.abb7508] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 09/09/2020] [Indexed: 05/20/2023]
Abstract
Fluorescence microscopes are indispensable to biology and neuroscience. The need for recording in freely behaving animals has further driven the development in miniaturized microscopes (miniscopes). However, conventional microscopes/miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field (DOF), and inability to resolve three-dimensional (3D) distributed emitters. Here, we present a Computational Miniature Mesoscope (CM2) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 mm by 7 mm field of view and 2.5-mm DOF, achieving 7-μm lateral resolution and better than 200-μm axial resolution. The CM2 features a compact lightweight design that integrates a microlens array for imaging and a light-emitting diode array for excitation. Its expanded imaging capability is enabled by computational imaging that augments the optics by algorithms. We experimentally validate the mesoscopic imaging capability on 3D fluorescent samples. We further quantify the effects of scattering and background fluorescence on phantom experiments.
Affiliation(s)
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, MA 02215, USA
- Ian G Davison
- Department of Biology, Boston University, MA 02215, USA
- Neurophotonics Center, Boston University, MA 02215, USA
- David A Boas
- Department of Electrical and Computer Engineering, Boston University, MA 02215, USA
- Neurophotonics Center, Boston University, MA 02215, USA
- Department of Biomedical Engineering, Boston University, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, MA 02215, USA
- Neurophotonics Center, Boston University, MA 02215, USA
49
Pan A, Zuo C, Yao B. High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine. REPORTS ON PROGRESS IN PHYSICS. PHYSICAL SOCIETY (GREAT BRITAIN) 2020; 83:096101. [PMID: 32679569 DOI: 10.1088/1361-6633/aba6f0] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Fourier ptychographic microscopy (FPM) is a promising and fast-growing computational imaging technique with high resolution, wide field-of-view (FOV) and quantitative phase recovery, which simultaneously tackles the problems of phase loss, aberration-introduced artifacts, narrow depth-of-field and the trade-off between resolution and FOV in conventional microscopy. In this review, we provide a comprehensive roadmap of microscopy, the fundamental principles, advantages, and drawbacks of existing imaging techniques, and the significant roles that FPM plays in the development of science. Since FPM is an optimization problem in nature, we discuss the framework and related work. We also reveal the connection, via Euler's formula, between FPM and structured illumination microscopy. We review recent advances in FPM, including the implementation of high-precision quantitative phase imaging, high-throughput imaging, high-speed imaging, three-dimensional imaging, and mixed-state decoupling, and introduce the prosperous biomedical applications. We conclude by discussing the challenging problems and future applications. FPM can be extended into a general framework for tackling phase loss and system limits in imaging systems; this insight can readily be applied to speckle imaging, incoherent imaging such as retinal imaging, large-FOV fluorescence imaging, and more.
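The FPM acquisition this review builds on, in which each tilted illumination shifts the object spectrum before the finite pupil crops it, can be sketched as a minimal forward-model simulation (toy sizes and pupil radius are assumptions, not any group's production code):

```python
import numpy as np

def fpm_measurement(obj, pupil_radius, shift):
    """One low-resolution FPM intensity image: shift the object spectrum
    (tilted illumination), crop with a circular pupil (finite NA), and
    record the intensity."""
    n = obj.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    spectrum = np.roll(spectrum, shift, axis=(0, 1))
    ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    spectrum = spectrum * ((kx ** 2 + ky ** 2) <= pupil_radius ** 2)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum))) ** 2

rng = np.random.default_rng(4)
obj = np.exp(1j * rng.uniform(0.0, 1.0, (64, 64)))  # pure-phase object
stack = [fpm_measurement(obj, 8, (sy, sx))
         for sy in (-6, 0, 6) for sx in (-6, 0, 6)]  # 9 illumination angles
print(len(stack), stack[0].shape)
```

FPM reconstruction then inverts this model: it stitches the shifted spectral patches back together in Fourier space, recovering both a high SBP and the lost phase.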
Affiliation(s)
- An Pan
- State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
50
Machine learning-based design of meta-plasmonic biosensors with negative index metamaterials. Biosens Bioelectron 2020; 164:112335. [DOI: 10.1016/j.bios.2020.112335] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 05/24/2020] [Accepted: 05/26/2020] [Indexed: 12/19/2022]