1.
Braeu FA, Chuangsuwanich T, Tun TA, Perera S, Husain R, Thiery AH, Aung T, Barbastathis G, Girard MJA. AI-based clinical assessment of optic nerve head robustness superseding biomechanical testing. Br J Ophthalmol 2024; 108:223-231. PMID: 36627175; DOI: 10.1136/bjo-2022-322374.
Abstract
BACKGROUND/AIMS To use artificial intelligence (AI) to: (1) exploit biomechanical knowledge of the optic nerve head (ONH) from a relatively large population; (2) assess ONH robustness (ie, sensitivity of the ONH to changes in intraocular pressure (IOP)) from a single optical coherence tomography (OCT) volume scan of the ONH without the need for biomechanical testing and (3) identify what critical three-dimensional (3D) structural features dictate ONH robustness. METHODS 316 subjects had their ONHs imaged with OCT before and after acute IOP elevation through ophthalmo-dynamometry. IOP-induced lamina cribrosa (LC) deformations were then mapped in 3D and used to classify ONHs. Those with an average effective LC strain greater than 4% were considered fragile, while those with a strain less than 4% were considered robust. Learning from these data, we compared three AI algorithms to predict ONH robustness strictly from a baseline (undeformed) OCT volume: (1) a random forest classifier; (2) an autoencoder and (3) a dynamic graph convolutional neural network (DGCNN). The latter algorithm also allowed us to identify what critical 3D structural features make a given ONH robust. RESULTS All three methods were able to predict ONH robustness from a single OCT volume scan alone, without the need to perform biomechanical testing. The DGCNN (area under the curve (AUC): 0.76±0.08) outperformed the autoencoder (AUC: 0.72±0.09) and the random forest classifier (AUC: 0.69±0.05). Interestingly, to assess ONH robustness, the DGCNN mainly used information from the scleral canal and the LC insertion sites. CONCLUSIONS We propose an AI-driven approach that can assess the robustness of a given ONH solely from a single OCT volume scan, without the need to perform biomechanical testing. Longitudinal studies should establish whether ONH robustness could help us identify fast visual field loss progressors.
PRECIS Using geometric deep learning, we can assess optic nerve head robustness (ie, sensitivity to a change in IOP) from a standard OCT scan, which might help identify fast visual field loss progressors.
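The fragile/robust labelling rule described above can be sketched in a few lines. The 4% mean effective LC strain cutoff comes from the abstract; the helper name and the sample strain values are illustrative, not from the paper's code.

```python
# Label ONHs as "fragile" or "robust" from IOP-induced mean effective
# lamina cribrosa (LC) strain, using the 4% cutoff stated in the abstract.

STRAIN_THRESHOLD = 0.04  # 4% mean effective LC strain

def label_onh(mean_effective_strain: float) -> str:
    """Label an ONH from its IOP-induced mean effective LC strain."""
    return "fragile" if mean_effective_strain > STRAIN_THRESHOLD else "robust"

labels = [label_onh(s) for s in (0.021, 0.055, 0.040, 0.049)]
print(labels)  # ['robust', 'fragile', 'robust', 'fragile']
```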
Affiliation(s)
- Fabian A Braeu: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore-MIT Alliance for Research and Technology, Singapore; Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore
- Thanadet Chuangsuwanich: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore
- Tin A Tun: Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore
- Shamira Perera: Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore
- Rahat Husain: Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore
- Alexandre H Thiery: Statistics and Applied Probability, National University of Singapore, Singapore
- Tin Aung: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- George Barbastathis: Singapore-MIT Alliance for Research and Technology, Singapore; Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Michaël J A Girard: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore; Duke-NUS Graduate Medical School, Singapore; Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland
2.
Wang K, Song L, Wang C, Ren Z, Zhao G, Dou J, Di J, Barbastathis G, Zhou R, Zhao J, Lam EY. On the use of deep learning for phase recovery. Light Sci Appl 2024; 13:4. PMID: 38161203; PMCID: PMC10758000; DOI: 10.1038/s41377-023-01340-x.
Abstract
Phase recovery (PR) refers to calculating the phase of the light field from its intensity measurements. In applications ranging from quantitative phase imaging and coherent diffraction imaging to adaptive optics, PR is essential for reconstructing the refractive index distribution or topography of an object and correcting the aberration of an imaging system. In recent years, deep learning (DL), often implemented through deep neural networks, has provided unprecedented support for computational imaging, leading to more efficient solutions for various PR problems. In this review, we first briefly introduce conventional methods for PR. Then, we review how DL supports PR at three stages, namely pre-processing, in-processing, and post-processing. We also review how DL is used in phase image processing. Finally, we summarize the work in DL for PR and provide an outlook on how to better use DL to improve the reliability and efficiency of PR. Furthermore, we present a live-updating resource ( https://github.com/kqwang/phase-recovery ) for readers to learn more about PR.
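As a concrete instance of the "conventional methods for PR" the review covers, the classic Gerchberg-Saxton algorithm alternates between known object-plane and Fourier-plane magnitudes. The sketch below uses a naive O(N²) DFT so it stays dependency-free (real implementations use an FFT); the test signal and starting phase are invented.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def gerchberg_saxton(mag_obj, mag_four, iters=50):
    """Recover a phase consistent with magnitudes in both planes."""
    x = [m * cmath.exp(1j * 0.1 * n) for n, m in enumerate(mag_obj)]  # arbitrary start phase
    for _ in range(iters):
        X = dft(x)
        X = [m * cmath.exp(1j * cmath.phase(v)) for m, v in zip(mag_four, X)]  # impose Fourier magnitude
        x = idft(X)
        x = [m * cmath.exp(1j * cmath.phase(v)) for m, v in zip(mag_obj, x)]   # impose object magnitude
    return x

# Ground truth: unit-magnitude object with a quadratic phase; "measure" only magnitudes.
truth = [cmath.exp(1j * 0.2 * n * n) for n in range(8)]
mag_obj = [abs(v) for v in truth]
mag_four = [abs(v) for v in dft(truth)]
est = gerchberg_saxton(mag_obj, mag_four)
err = sum(abs(abs(v) - m) for v, m in zip(dft(est), mag_four))
print(f"Fourier-magnitude mismatch after GS: {err:.3e}")
```

By construction the returned field satisfies the object-plane magnitude constraint exactly; the printed residual measures how well the Fourier-plane constraint is also met.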
Affiliation(s)
- Kaiqiang Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China; School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China; Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Li Song: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Chutian Wang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Zhenbo Ren: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Guangyuan Zhao: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jiazhen Dou: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jianglei Di: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Renjie Zhou: Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jianlin Zhao: School of Physical Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Edmund Y Lam: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
3.
Hussain S, Chua J, Wong D, Lo J, Kadziauskiene A, Asoklis R, Barbastathis G, Schmetterer L, Yong L. Predicting glaucoma progression using deep learning framework guided by generative algorithm. Sci Rep 2023; 13:19960. PMID: 37968437; PMCID: PMC10651936; DOI: 10.1038/s41598-023-46253-2.
Abstract
Glaucoma is a slowly progressing optic neuropathy that may eventually lead to blindness. To help patients receive customized treatment, predicting how quickly the disease will progress is important. Structural assessment using optical coherence tomography (OCT) can be used to visualize glaucomatous optic nerve and retinal damage, while functional visual field (VF) tests can be used to measure the extent of vision loss. However, VF testing is patient-dependent and highly inconsistent, making it difficult to track glaucoma progression. In this work, we developed a multimodal deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM) network, for glaucoma progression prediction. We used OCT images, VF values, demographic and clinical data of 86 glaucoma patients with five visits over 12 months. The proposed method was used to predict VF changes 12 months after the first visit by combining past multimodal inputs with synthesized future images generated using a generative adversarial network (GAN). The patients were classified into two classes based on their VF mean deviation (MD) decline: slow progressors (< 3 dB) and fast progressors (> 3 dB). We showed that our generative model-based approach achieved the best AUC of 0.83 for predicting progression 6 months in advance. Further, the use of synthetic future images enabled the model to accurately predict vision loss even earlier (9 months in advance) with an AUC of 0.81, compared to using only structural (AUC = 0.68) or only functional measures (AUC = 0.72). This study provides valuable insights into the potential of using synthetic follow-up OCT images for early detection of glaucoma progression.
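The reported AUCs can be reproduced in spirit with the rank-based (Mann-Whitney) AUC estimator sketched below: it equals the probability that a randomly chosen fast progressor receives a higher risk score than a slow one. The labels and scores are invented for illustration, with 1 marking a fast progressor (MD decline > 3 dB).

```python
def auc(labels, scores):
    """Mann-Whitney AUC: labels are 1 (fast progressor) or 0 (slow)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count pairwise "wins" of positives over negatives; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0, 1]              # 1 = fast progressor (MD decline > 3 dB)
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.6]  # model-predicted risk (invented)
print(auc(labels, scores))  # → 0.7777777777777778
```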
Affiliation(s)
- Shaista Hussain: Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Jacqueline Chua: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aiste Kadziauskiene: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- Rimvydas Asoklis: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore, Singapore
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Liu Yong: Institute of High Performance Computing, A*STAR, Singapore, Singapore
4.
Braeu FA, Chuangsuwanich T, Tun TA, Perera SA, Husain R, Kadziauskienė A, Schmetterer L, Thiéry AH, Barbastathis G, Aung T, Girard MJA. Three-Dimensional Structural Phenotype of the Optic Nerve Head as a Function of Glaucoma Severity. JAMA Ophthalmol 2023; 141:882-889. PMID: 37589980; PMCID: PMC10436184; DOI: 10.1001/jamaophthalmol.2023.3315.
Abstract
Importance The 3-dimensional (3-D) structural phenotype of glaucoma as a function of severity was thoroughly described and analyzed, enhancing understanding of its intricate pathology beyond current clinical knowledge. Objective To describe the 3-D structural differences in both connective and neural tissues of the optic nerve head (ONH) between different glaucoma stages using traditional and artificial intelligence-driven approaches. Design, Setting, and Participants This cross-sectional, clinic-based study recruited 541 Chinese individuals receiving standard clinical care at Singapore National Eye Centre, Singapore, and 112 White participants of a prospective observational study at Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania. The study was conducted from May 2022 to January 2023. All participants had their ONH imaged using spectral-domain optical coherence tomography and had their visual field assessed by standard automated perimetry. Main Outcomes and Measures (1) Clinician-defined 3-D structural parameters of the ONH and (2) 3-D structural landmarks identified by geometric deep learning that differentiated ONHs among 4 groups: no glaucoma, mild glaucoma (mean deviation [MD], ≥-6.00 dB), moderate glaucoma (MD, -6.01 to -12.00 dB), and advanced glaucoma (MD, <-12.00 dB). Results Study participants included 213 individuals without glaucoma (mean age, 63.4 years; 95% CI, 62.5-64.3 years; 126 females [59.2%]; 213 Chinese [100%] and 0 White individuals), 204 with mild glaucoma (mean age, 66.9 years; 95% CI, 66.0-67.8 years; 91 females [44.6%]; 178 Chinese [87.3%] and 26 White [12.7%] individuals), 118 with moderate glaucoma (mean age, 68.1 years; 95% CI, 66.8-69.4 years; 49 females [41.5%]; 97 Chinese [82.2%] and 21 White [17.8%] individuals), and 118 with advanced glaucoma (mean age, 68.5 years; 95% CI, 67.1-69.9 years; 43 females [36.4%]; 53 Chinese [44.9%] and 65 White [55.1%] individuals). 
The majority of ONH structural differences occurred in the early glaucoma stage, followed by a plateau effect in the later stages. Using a deep neural network, 3-D ONH structural differences were found to be present in both neural and connective tissues. Specifically, a mean of 57.4% (95% CI, 54.9%-59.9%, for no to mild glaucoma), 38.7% (95% CI, 36.9%-40.5%, for mild to moderate glaucoma), and 53.1% (95% CI, 50.8%-55.4%, for moderate to advanced glaucoma) of ONH landmarks that showed major structural differences were located in neural tissues, with the remainder located in connective tissues. Conclusions and Relevance This study uncovered complex 3-D structural differences of the ONH in both neural and connective tissues as a function of glaucoma severity. Future longitudinal studies should seek to establish a connection between specific 3-D ONH structural changes and fast visual field deterioration, and aim to improve the early detection of patients with rapid visual field loss in routine clinical care.
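The MD-based severity grouping used in the study translates directly into code. A minimal sketch; the function name and example MD values are illustrative (non-glaucoma subjects are staged clinically, not by MD alone).

```python
# Stage glaucoma severity from visual-field mean deviation (MD, in dB),
# per the cutoffs in the abstract: mild MD >= -6.00 dB, moderate
# -6.01 to -12.00 dB, advanced MD < -12.00 dB.
def glaucoma_stage(md_db: float) -> str:
    if md_db >= -6.00:
        return "mild"
    if md_db >= -12.00:
        return "moderate"
    return "advanced"

print([glaucoma_stage(md) for md in (-2.5, -8.0, -15.3)])
# → ['mild', 'moderate', 'advanced']
```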
Affiliation(s)
- Fabian A. Braeu: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Singapore-MIT Alliance for Research and Technology, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Thanadet Chuangsuwanich: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tin A. Tun: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Shamira A. Perera: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Rahat Husain: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Aiste Kadziauskienė: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Center of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Alexandre H. Thiéry: Department of Statistics and Applied Probability, National University of Singapore, Singapore
- George Barbastathis: Singapore-MIT Alliance for Research and Technology, Singapore; Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge
- Tin Aung: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Michaël J. A. Girard: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
5.
Kang I, Wu Z, Jiang Y, Yao Y, Deng J, Klug J, Vogt S, Barbastathis G. Attentional Ptycho-Tomography (APT) for three-dimensional nanoscale X-ray imaging with minimal data acquisition and computation time. Light Sci Appl 2023; 12:131. PMID: 37248235; DOI: 10.1038/s41377-023-01181-8.
Abstract
Noninvasive X-ray imaging of nanoscale three-dimensional objects, such as integrated circuits (ICs), generally requires two types of scanning: ptychographic, which is translational and returns estimates of the complex electromagnetic field through the IC; combined with a tomographic scan, which collects these complex field projections from multiple angles. Here, we present Attentional Ptycho-Tomography (APT), an approach to drastically reduce the amount of angular scanning, and thus the total acquisition time. APT is machine learning-based, utilizing axial self-Attention for Ptycho-Tomographic reconstruction. APT is trained to obtain accurate reconstructions of the ICs, despite the incompleteness of the measurements. The training process includes regularizing priors in the form of typical patterns found in IC interiors, and the physics of X-ray propagation through the IC. We show that APT with 12× fewer angles achieves fidelity comparable to the gold standard Simultaneous Algebraic Reconstruction Technique (SART) with the original set of angles. When using the same set of reduced angles, APT also outperforms Filtered Back Projection (FBP), the Simultaneous Iterative Reconstruction Technique (SIRT) and SART. The time needed to compute the reconstruction is also reduced, because the trained neural network reconstructs in a single forward pass, unlike the iterative nature of these alternatives. Our experiments show that, without loss in quality, for a 4.48 × 93.2 × 3.92 µm³ IC (≃6 × 10⁸ voxels), APT reduces the total data acquisition and computation time from 67.96 h to 38 min. We expect our physics-assisted and attention-utilizing machine learning framework to be applicable to other branches of nanoscale imaging, including materials science and biological imaging.
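APT is benchmarked against classical iterative reconstructions such as SART. The toy SART-style loop below, on a 2×2 "image" probed by row and column ray sums, illustrates that baseline; the system, step size, and iteration count are invented for illustration, not taken from the paper.

```python
# Toy simultaneous algebraic reconstruction: rays are the two row sums and
# two column sums of a 2x2 image, flattened to a 4-vector.
A = [
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
x_true = [1.0, 2.0, 3.0, 4.0]
b = [sum(a * x for a, x in zip(row, x_true)) for row in A]  # simulated ray sums

def sart(A, b, iters=200, lam=0.5):
    n = len(A[0])
    row_sums = [sum(row) for row in A]
    col_sums = [sum(A[i][j] for i in range(len(A))) for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        # Row-normalized residuals, then column-normalized back-projection.
        resid = [(bi - sum(a * xi for a, xi in zip(row, x))) / rs
                 for row, bi, rs in zip(A, b, row_sums)]
        for j in range(n):
            x[j] += lam * sum(A[i][j] * resid[i] for i in range(len(A))) / col_sums[j]
    return x

print([round(v, 6) for v in sart(A, b)])  # → [1.0, 2.0, 3.0, 4.0]
```

Here the true image is orthogonal to the null space of the ray matrix, so the simultaneous updates converge to it exactly.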
Affiliation(s)
- Iksung Kang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720, USA
- Ziling Wu: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, Singapore 138602, Singapore
- Yi Jiang: Argonne National Laboratory, Lemont, IL 60439, USA
- Yudong Yao: Argonne National Laboratory, Lemont, IL 60439, USA; Center for Transformative Science, ShanghaiTech University, Shanghai 201210, China
- Junjing Deng: Argonne National Laboratory, Lemont, IL 60439, USA
- Jeffrey Klug: Argonne National Laboratory, Lemont, IL 60439, USA
- Stefan Vogt: Argonne National Laboratory, Lemont, IL 60439, USA
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, Singapore 138602, Singapore
6.
Guo Z, Liu Z, Barbastathis G, Zhang Q, Glinsky ME, Alpert BK, Levine ZH. Noise-resilient deep learning for integrated circuit tomography. Opt Express 2023; 31:15355-15371. PMID: 37157639; DOI: 10.1364/oe.486213.
Abstract
X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
7.
Zhang Q, Gamekkanda JC, Pandit A, Tang W, Papageorgiou C, Mitchell C, Yang Y, Schwaerzler M, Oyetunde T, Braatz RD, Myerson AS, Barbastathis G. Extracting particle size distribution from laser speckle with a physics-enhanced autocorrelation-based estimator (PEACE). Nat Commun 2023; 14:1159. PMID: 36859392; PMCID: PMC9977959; DOI: 10.1038/s41467-023-36816-2.
Abstract
Extracting quantitative information about highly scattering surfaces from an imaging system is challenging because the phase of the scattered light undergoes multiple folds upon propagation, resulting in complex speckle patterns. One specific application is the drying of wet powders in the pharmaceutical industry, where quantifying the particle size distribution (PSD) is of particular interest. A non-invasive, real-time monitoring probe for the drying process is required, but no suitable candidate currently exists. In this report, we develop a theoretical relationship from the PSD to the speckle image and describe a physics-enhanced autocorrelation-based estimator (PEACE) machine learning algorithm for speckle analysis to measure the PSD of a powder surface. This method solves both the forward and inverse problems together and enjoys increased interpretability, since the machine learning approximator is regularized by the physical law.
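The estimator's name points at its core ingredient: the autocorrelation of the speckle intensity. A dependency-free sketch of a normalized 1-D autocorrelation is below; the intensity trace is synthetic, and real speckle analysis works on 2-D images.

```python
def autocorrelation(intensity):
    """Normalized (biased) autocorrelation of a 1-D intensity signal."""
    n = len(intensity)
    mean = sum(intensity) / n
    x = [v - mean for v in intensity]          # remove the DC level
    var = sum(v * v for v in x)                # zero-lag energy
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / var
            for lag in range(n)]

trace = [3.0, 5.0, 4.0, 1.0, 0.5, 2.0, 6.0, 4.5]  # synthetic intensity trace
ac = autocorrelation(trace)
print(round(ac[0], 6))  # → 1.0 (zero lag is 1 by normalization)
```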
Affiliation(s)
- Qihang Zhang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Janaka C. Gamekkanda: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Ajinkya Pandit: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Wenlong Tang: Data Sciences Institutes, Takeda Pharmaceuticals International Co, 650 E Kendall St, Cambridge, MA 02142, USA
- Charles Papageorgiou: Process Chemistry Development, Takeda Pharmaceuticals International Co, 40 Landsdowne St, Cambridge, MA 02139, USA
- Chris Mitchell: Process Chemistry Development, Takeda Pharmaceuticals International Co, 40 Landsdowne St, Cambridge, MA 02139, USA
- Yihui Yang: Process Chemistry Development, Takeda Pharmaceuticals International Co, 40 Landsdowne St, Cambridge, MA 02139, USA
- Michael Schwaerzler: Innovation and Technology Sciences, Takeda Pharmaceutical Company Limited, 200 Shire Way, Lexington, MA 02421, USA
- Tolutola Oyetunde: Innovation and Technology Sciences, Takeda Pharmaceutical Company Limited, 200 Shire Way, Lexington, MA 02421, USA
- Richard D. Braatz: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Allan S. Myerson: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 Create Way, Singapore 117543, Singapore
8.
Srisuma P, Pandit A, Zhang Q, Hong MS, Gamekkanda J, Fachin F, Moore N, Djordjevic D, Schwaerzler M, Oyetunde T, Tang W, Myerson AS, Barbastathis G, Braatz RD. Thermal imaging-based state estimation of a Stefan problem with application to cell thawing. Comput Chem Eng 2023. DOI: 10.1016/j.compchemeng.2023.108179.
9.
Braeu FA, Thiéry AH, Tun TA, Kadziauskiene A, Barbastathis G, Aung T, Girard MJA. Geometric Deep Learning to Identify the Critical 3D Structural Features of the Optic Nerve Head for Glaucoma Diagnosis. Am J Ophthalmol 2023; 250:38-48. PMID: 36646242; DOI: 10.1016/j.ajo.2023.01.008.
Abstract
PURPOSE To compare the performance of 2 relatively recent geometric deep learning techniques in diagnosing glaucoma from a single optical coherence tomographic (OCT) scan of the optic nerve head (ONH); and to identify the 3-dimensional (3D) structural features of the ONH that are critical for the diagnosis of glaucoma. DESIGN Comparison and evaluation of deep learning diagnostic algorithms. METHODS In this study, we included a total of 2247 nonglaucoma and 2259 glaucoma scans from 1725 participants. All participants had their ONHs imaged in 3D with Spectralis OCT. All OCT scans were automatically segmented using deep learning to identify major neural and connective tissues. Each ONH was then represented as a 3D point cloud. We used PointNet and dynamic graph convolutional neural network (DGCNN) to diagnose glaucoma from such 3D ONH point clouds and to identify the critical 3D structural features of the ONH for glaucoma diagnosis. RESULTS Both the DGCNN (area under the curve [AUC]: 0.97±0.01) and PointNet (AUC: 0.95±0.02) were able to accurately detect glaucoma from 3D ONH point clouds. The critical points (ie, critical structural features of the ONH) formed an hourglass pattern, with most of them located within the neuroretinal rim in the inferior and superior quadrant of the ONH. CONCLUSIONS The diagnostic accuracy of both geometric deep learning approaches was excellent. Moreover, we were able to identify the critical 3D structural features of the ONH for glaucoma diagnosis that tremendously improved the transparency and interpretability of our method. Consequently, our approach may have strong potential to be used in clinical applications for the diagnosis and prognosis of a wide range of ophthalmic disorders.
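DGCNN builds a k-nearest-neighbour graph over the point cloud before applying edge convolutions (and rebuilds it from features at each layer). A minimal, dependency-free sketch of that kNN-graph step on an invented 3-D point cloud:

```python
def knn_graph(points, k):
    """Return, for each point, the indices of its k nearest neighbours."""
    def d2(p, q):  # squared Euclidean distance (monotone, so no sqrt needed)
        return sum((a - b) ** 2 for a, b in zip(p, q))

    graph = []
    for i, p in enumerate(points):
        others = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: d2(p, points[j]))
        graph.append(others[:k])
    return graph

# Four invented 3-D points standing in for an ONH point cloud.
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.1, 0.0), (5.0, 5.0, 5.0)]
print(knn_graph(cloud, 2))  # → [[1, 2], [0, 2], [0, 1], [2, 1]]
```

In the full network, edge features such as (x_j - x_i, x_i) over these neighbour pairs feed a shared MLP followed by a max over neighbours.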
Affiliation(s)
- Fabian A Braeu: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Singapore-MIT Alliance for Research and Technology, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Alexandre H Thiéry: Department of Statistics and Applied Probability, National University of Singapore, Singapore
- Tin A Tun: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Aiste Kadziauskiene: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Center of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- George Barbastathis: Singapore-MIT Alliance for Research and Technology, Singapore; Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Tin Aung: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Michaël J A Girard: Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore; Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland
10.
Pang S, Barbastathis G. Unified treatment of exact and approximate scalar electromagnetic wave scattering. Phys Rev E 2022; 106:045301. PMID: 36397470; DOI: 10.1103/physreve.106.045301.
Abstract
Under conditions of strong scattering, a dilemma often arises regarding the best numerical method to use. Main competitors are the Born series, the beam propagation method, and direct solution of the Lippmann-Schwinger equation. However, analytical relationships between the three methods have not yet, to our knowledge, been explicitly stated. Here, we bridge this gap in the literature. In addition to overall insight about aspects of optical scattering that are best numerically captured by each method, our approach allows us to derive approximate error bounds to be expected under various scattering conditions.
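The relationship among these solvers can be made concrete with a toy computation. Below is a minimal numpy sketch (not the paper's formulation; all parameter values are illustrative) of the Born series for a weak 1D Helmholtz potential, where each iteration adds one more order of multiple scattering and the fixed point satisfies the Lippmann-Schwinger equation:

```python
import numpy as np

# 1D Helmholtz scattering (d^2/dx^2 + k^2) psi = V(x) psi, discretized on a grid.
# Green's function of the 1D Helmholtz operator: G(x, x') = exp(i k |x - x'|) / (2 i k).
# Born series: psi_{n+1} = psi_0 + integral G(x, x') V(x') psi_n(x') dx'.

k = 2 * np.pi                       # wavenumber (illustrative units)
n = 200
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]

V = 0.5 * np.exp(-(x / 0.2) ** 2)   # weak, smooth scattering potential
psi0 = np.exp(1j * k * x)           # incident plane wave
G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)

psi = psi0.copy()
for _ in range(30):                 # iterate the Born series to (near) convergence
    psi = psi0 + (G * V[None, :]) @ psi * dx

# Self-consistency residual of the converged Lippmann-Schwinger solution
residual = np.max(np.abs(psi - (psi0 + (G * V[None, :]) @ psi * dx)))
```

For a weak potential like this one the series contracts rapidly; for strong scattering it can diverge, which is one motivation for comparing it against direct Lippmann-Schwinger solution.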
Affiliation(s)
- Subeen Pang
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
11
Guo Z, Song JK, Barbastathis G, Glinsky ME, Vaughan CT, Larson KW, Alpert BK, Levine ZH. Physics-assisted generative adversarial network for X-ray tomography. Opt Express 2022; 30:23238-23259. [PMID: 36225009 DOI: 10.1364/oe.460208]
Abstract
X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms which require a distribution that is known a priori, deep reconstruction networks can learn a prior distribution through sampling the training distributions. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods with less physics assisting in training, PGAN can reduce the photon requirement with limited projection angles to achieve a given error rate. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
12
Guo Z, Levitan A, Barbastathis G, Comin R. Randomized probe imaging through deep k-learning. Opt Express 2022; 30:2247-2264. [PMID: 35209369 DOI: 10.1364/oe.445498]
Abstract
Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep learning based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets in photon starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
13
Dandekar R, Wang E, Barbastathis G, Rackauckas C. Implications of Delayed Reopening in Controlling the COVID-19 Surge in Southern and West-Central USA. Health Data Sci 2021; 2021:9798302. [PMID: 36405358 PMCID: PMC9629682 DOI: 10.34133/2021/9798302]
Abstract
In the wake of the rapid surge in the COVID-19-infected cases seen in Southern and West-Central USA in the period of June-July 2020, there is an urgent need to develop robust, data-driven models to quantify the effect which early reopening had on the infected case count increase. In particular, it is imperative to address the question: How many infected cases could have been prevented, had the worst affected states not reopened early? To address this question, we have developed a novel COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. The model decomposes the contribution of quarantine strength to the infection time series, allowing us to quantify the role of quarantine control and the associated reopening policies in the US states which showed a major surge in infections. We show that the upsurge in the infected cases seen in these states is strongly correlated with a drop in the quarantine/lockdown strength diagnosed by our model. Further, our results demonstrate that in the event of a stricter lockdown without early reopening, the number of active infected cases recorded on 14 July could have been reduced by more than 40% in all states considered, with the actual number of infections reduced being more than 100,000 for the states of Florida and Texas. As we continue our fight against COVID-19, our proposed model can be used as a valuable asset to simulate the effect of several reopening strategies on the infected count evolution, for any region under consideration.
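The central mechanism, an SIR model with an added quarantine-strength term that removes infectious individuals from circulation, can be sketched in a few lines. In the paper the quarantine strength is learned by a neural network from case data; here it is a fixed logistic ramp purely for illustration, and all rates and population numbers below are made up:

```python
import numpy as np

N = 1e6                       # population (illustrative)
beta, gamma = 0.3, 0.1        # infection and recovery rates (illustrative)
dt, days = 0.1, 150
steps = int(days / dt)

def Q(t):
    """Quarantine strength ramping up around day 30 (stand-in for the learned term)."""
    return 0.15 / (1.0 + np.exp(-(t - 30.0) / 5.0))

def peak_infected(q_scale):
    """Euler-integrate S-I-R with an extra Q(t)*I removal term; return the peak of I."""
    S, I, R = N - 100.0, 100.0, 0.0
    peak = I
    for s in range(steps):
        t = s * dt
        new_inf = beta * S * I / N
        removed = (gamma + q_scale * Q(t)) * I
        S, I, R = S - new_inf * dt, I + (new_inf - removed) * dt, R + removed * dt
        peak = max(peak, I)
    return peak

peak_no_quarantine = peak_infected(0.0)
peak_with_quarantine = peak_infected(1.0)
```

Comparing the two runs shows the effect the paper quantifies from data: a stronger quarantine term lowers the peak infected count.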
Affiliation(s)
- Raj Dandekar
- Department of Computational Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Emma Wang
- Department of Electrical Engineering and Computer Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore 138602
- Chris Rackauckas
- Department of Applied Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
14
Javidi B, Carnicer A, Anand A, Barbastathis G, Chen W, Ferraro P, Goodman JW, Horisaki R, Khare K, Kujawinska M, Leitgeb RA, Marquet P, Nomura T, Ozcan A, Park Y, Pedrini G, Picart P, Rosen J, Saavedra G, Shaked NT, Stern A, Tajahuerce E, Tian L, Wetzstein G, Yamaguchi M. Roadmap on digital holography [Invited]. Opt Express 2021; 29:35078-35118. [PMID: 34808951 DOI: 10.1364/oe.435915]
Abstract
This Roadmap article provides an overview of a vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents the vision of its author in describing the significant progress, potential impact, important developments, and challenging issues in the field.
15
Kim S, Handler JJ, Cho YT, Barbastathis G, Fang NX. Scalable 3D printing of aperiodic cellular structures by rotational stacking of integral image formation. Sci Adv 2021; 7:eabh1200. [PMID: 34533994 PMCID: PMC8448457 DOI: 10.1126/sciadv.abh1200]
Abstract
A limitation of projection microstereolithography as an additive manufacturing method is that it typically uses a single-aperture imaging configuration, which restricts its ability to produce microstructures in large volumes owing to the trade-off between image resolution and image field area. Here, we propose an integral lithography based on integral image reconstruction coupled with a planar lens array. The individual microlenses maintain a high numerical aperture and are used to create digital light patterns that can expand the printable area by the number of microlenses (103 to 104), thereby allowing for the scalable stereolithographic fabrication of 3D features that surpass the resolution-to-area scaling limit. We extend the capability of integral lithography for programmable printing of deterministic nonperiodic structures through the rotational overlapping or stacking of multiple exposures with controlled angular offsets. This printing platform provides new possibilities for producing periodic and aperiodic microarchitectures spanning four orders of magnitude from micrometers to centimeters.
Affiliation(s)
- Seok Kim
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Mechanical Engineering, Changwon National University, Changwon, South Korea
- Jordan J. Handler
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Young Tae Cho
- Department of Mechanical Engineering, Changwon National University, Changwon, South Korea
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 Create Way, Singapore 138602, Singapore
- Nicholas X. Fang
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Corresponding author.
16
Kang I, Goy A, Barbastathis G. Correction: Dynamical machine learning volumetric reconstruction of objects' interiors from limited angular views. Light Sci Appl 2021; 10:178. [PMID: 34480021 PMCID: PMC8417275 DOI: 10.1038/s41377-021-00615-5]
Affiliation(s)
- Iksung Kang
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA.
- Alexandre Goy
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Omnisens SA, Morges, 1110, Switzerland
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 Create Way, Singapore, 117543, Singapore
17
Kang I, Goy A, Barbastathis G. Dynamical machine learning volumetric reconstruction of objects' interiors from limited angular views. Light Sci Appl 2021; 10:74. [PMID: 33828073 PMCID: PMC8027224 DOI: 10.1038/s41377-021-00512-x]
Abstract
Limited-angle tomography of an interior volume is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al. Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in the angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit to regularize the reconstructions. We devised a Recurrent Neural Network (RNN) architecture with a novel Separable-Convolution Gated Recurrent Unit (SC-GRU) as the fundamental building block. Through a comprehensive comparison of several quantitative metrics, we show that the dynamic method is suitable for a generic interior-volumetric reconstruction under a limited-angle scheme. We show that this approach accurately reconstructs volume interiors under two conditions: weak scattering, when the Radon transform approximation is applicable and the forward operator well defined; and strong scattering, which is nonlinear with respect to the 3D refractive index distribution and includes uncertainty in the forward operator.
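The "angle index as discrete time" idea can be illustrated with a plain GRU cell digesting one projection per step. Note that the paper's actual building block is a Separable-Convolution GRU inside a full RNN; this dense toy version, its sizes, and the random stand-in data are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h, P):
    """One GRU update; the index of the illumination angle plays the role of time."""
    z = sigmoid(P['Wz'] @ x + P['Uz'] @ h)             # update gate
    r = sigmoid(P['Wr'] @ x + P['Ur'] @ h)             # reset gate
    h_cand = np.tanh(P['Wh'] @ x + P['Uh'] @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand

d_in, d_h, n_angles = 32, 16, 24
P = {k: 0.1 * rng.standard_normal((d_h, d_in if k[0] == 'W' else d_h))
     for k in ['Wz', 'Uz', 'Wr', 'Ur', 'Wh', 'Uh']}

projections = rng.standard_normal((n_angles, d_in))    # stand-ins for per-angle data
h = np.zeros(d_h)
for proj in projections:                               # the "dynamical system" unrolled
    h = gru_step(proj, h, P)
```

The hidden state accumulates evidence across angles, which is the system-identification view the abstract describes; a decoder (omitted here) would map it to the volume estimate.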
Affiliation(s)
- Iksung Kang
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA.
- Alexandre Goy
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Omnisens SA, Morges, 1110, Switzerland
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 Create Way, Singapore, 117543, Singapore
18
Zhang Z, Leong KW, Van Vliet K, Barbastathis G, Ravasio A. Deep learning for label-free nuclei detection from implicit phase information of mesenchymal stem cells. Biomed Opt Express 2021; 12:1683-1706. [PMID: 33796381 PMCID: PMC7984805 DOI: 10.1364/boe.420266]
Abstract
Monitoring of adherent cells in culture is routinely performed in biological and clinical laboratories, and it is crucial for large-scale manufacturing of cells needed in cell-based clinical trials and therapies. However, the lack of reliable and easily implementable label-free techniques makes this task laborious and prone to human subjectivity. We present a deep-learning-based processing pipeline that locates and characterizes mesenchymal stem cell nuclei from a few bright-field images captured at various levels of defocus under collimated illumination. Our approach builds upon phase-from-defocus methods in the optics literature and is easily applicable without the need for special microscopy hardware, for example, phase contrast objectives, or explicit phase reconstruction methods that rely on potentially bias-inducing priors. Experiments show that this label-free method can produce accurate cell counts as well as nuclei shape statistics without the need for invasive staining or ultraviolet radiation. We also provide detailed information on how the deep-learning pipeline was designed, built and validated, making it straightforward to adapt our methodology to different types of cells. Finally, we discuss the limitations of our technique and potential future avenues for exploration.
Affiliation(s)
- Zhengyun Zhang
- BioSyM IRG, Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, #04-13/14 Enterprise Wing, Singapore 138602, Singapore
- Kim Whye Leong
- Department of Biological Sciences, National University of Singapore, 16 Science Drive 4, Singapore 117558, Singapore
- Krystyn Van Vliet
- BioSyM IRG, Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, #04-13/14 Enterprise Wing, Singapore 138602, Singapore
- Department of Materials Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
- George Barbastathis
- BioSyM IRG, Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, #04-13/14 Enterprise Wing, Singapore 138602, Singapore
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
- Andrea Ravasio
- Institute of Biological and Medical Engineering, School of Engineering, Medicine and Biological Sciences, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Macul, Santiago, Chile
19
Kang I, Pang S, Zhang Q, Fang N, Barbastathis G. Recurrent neural network reveals transparent objects through scattering media. Opt Express 2021; 29:5316-5326. [PMID: 33726070 DOI: 10.1364/oe.412890]
Abstract
Scattering generally worsens the condition of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018), 10.1364/OPTICA.5.000803] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser; whereas Y. Li et al. [Optica 5(10), 1181 (2018), 10.1364/OPTICA.5.001181] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for the case of imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during the training and to reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics and utilize multiple speckle measurements from different angles to form a sequence of images for training. Recurrent neural networks (RNNs) embedded with the dynamics filter out useful information and discard the redundancies, thus retrieving quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media out of speckle correlation among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve any other spatiotemporal dynamics.
20
Wu J, Cao L, Barbastathis G. DNN-FZA camera: a deep learning approach toward broadband FZA lensless imaging. Opt Lett 2021; 46:130-133. [PMID: 33362033 DOI: 10.1364/ol.411228]
Abstract
In mask-based lensless imaging, iterative reconstruction methods based on the geometric-optics model produce artifacts and are computationally expensive. We present a prototype lensless camera that uses a deep neural network (DNN) to realize rapid reconstruction for Fresnel zone aperture (FZA) imaging. A deep back-projection network (DBPN) connected behind a U-Net provides an error-feedback mechanism that realizes self-correction of features to recover image detail. A diffraction model generates the training data under conditions of broadband incoherent imaging. In the reconstructed results, blur caused by diffraction is shown to have been ameliorated, while the computing time is two orders of magnitude faster than that of traditional iterative image reconstruction algorithms. This strategy could drastically reduce the design and assembly costs of cameras, paving the way for integration of portable sensors and systems.
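Under the geometric-optics model the sensor image is the scene convolved with the zone-aperture shadow, so a linear deconvolution already illustrates the pipeline the DNN accelerates and de-blurs. A minimal sketch (not the authors' code): the first-zone radius is arbitrary, and Wiener filtering stands in for both the iterative and DNN reconstruction stages.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r1 = 0.12                                                    # first-zone radius (arbitrary units)
fza = 0.5 * (1.0 + np.cos(np.pi * (x**2 + y**2) / r1**2))    # FZA transmittance pattern

# Forward model (geometric optics, incoherent light): sensor = scene (*) FZA shadow
scene = np.zeros((n, n))
scene[40, 70] = 1.0                                          # a single point object
H = np.fft.fft2(np.fft.ifftshift(fza))
sensor = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Wiener deconvolution as a simple stand-in for the reconstruction stage
eps = 1e-3 * np.max(np.abs(H)) ** 2
recon = np.real(np.fft.ifft2(np.fft.fft2(sensor) * np.conj(H) / (np.abs(H) ** 2 + eps)))
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

The point object is recovered at its original location; what this linear model cannot capture is the diffraction blur under broadband illumination, which is what the DNN in the paper is trained to remove.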
21
Dandekar R, Rackauckas C, Barbastathis G. A Machine Learning-Aided Global Diagnostic and Comparative Tool to Assess Effect of Quarantine Control in COVID-19 Spread. Patterns (N Y) 2020; 1:100145. [PMID: 33225319 PMCID: PMC7671652 DOI: 10.1016/j.patter.2020.100145]
Abstract
We have developed a globally applicable diagnostic COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. Our model does not rely upon previous epidemics like SARS/MERS and all parameters are optimized via machine learning algorithms used on publicly available COVID-19 data. The model decomposes the contributions to the infection time series to analyze and compare the role of quarantine control policies used in highly affected regions of Europe, North America, South America, and Asia in controlling the spread of the virus. For all continents considered, our results show a generally strong correlation between strengthening of the quarantine controls as learnt by the model and actions taken by the regions' respective governments. In addition, we have hosted our quarantine diagnosis results for the top 70 affected countries worldwide, on a public platform.
Affiliation(s)
- Raj Dandekar
- Department of Computational Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Chris Rackauckas
- Department of Applied Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore 138602, Singapore
22
Allan G, Kang I, Douglas ES, Barbastathis G, Cahoy K. Deep residual learning for low-order wavefront sensing in high-contrast imaging systems. Opt Express 2020; 28:26267-26283. [PMID: 32906902 DOI: 10.1364/oe.397790]
Abstract
Sensing and correction of low-order wavefront aberrations is critical for high-contrast astronomical imaging. State-of-the-art coronagraph systems typically use image-based sensing methods that exploit the rejected on-axis light, such as Lyot-based low-order wavefront sensors (LLOWFS); these methods rely on linear least-squares fitting to recover Zernike basis coefficients from intensity data. However, the dynamic range of linear recovery is limited. We propose the use of deep neural networks with residual learning techniques for non-linear wavefront sensing. The deep residual learning approach extends the usable range of the LLOWFS sensor by more than an order of magnitude compared to the conventional methods, and can improve closed-loop control of systems with large initial wavefront error. We demonstrate that the deep learning approach performs well even in low-photon regimes common to coronagraphic imaging of exoplanets.
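The linear baseline the paper extends, least-squares fitting of Zernike coefficients, fits in a few lines. For illustration the "measurement" below is a noisy wavefront sampled on a pupil rather than LLOWFS intensity data, the mode normalizations follow the Noll convention, and the coefficient values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r, th = np.hypot(x, y), np.arctan2(y, x)
pupil = r <= 1.0                         # unit-disk pupil mask

# Low-order Zernike modes (Noll-normalized): tilt x, tilt y, defocus, astigmatism
modes = np.stack([
    2.0 * r * np.cos(th),
    2.0 * r * np.sin(th),
    np.sqrt(3.0) * (2.0 * r**2 - 1.0),
    np.sqrt(6.0) * r**2 * np.cos(2.0 * th),
])
A = modes[:, pupil].T                    # design matrix: one column per mode

coeffs_true = np.array([0.3, -0.2, 0.1, 0.05])   # aberration coefficients (illustrative)
wavefront = A @ coeffs_true + 0.01 * rng.standard_normal(A.shape[0])

# Linear least-squares recovery, the step the deep residual network replaces
coeffs_hat, *_ = np.linalg.lstsq(A, wavefront, rcond=None)
```

For small aberrations this recovery is accurate; the paper's point is that it degrades for large wavefront errors, where the non-linear deep network keeps working.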
23
Deng M, Li S, Zhang Z, Kang I, Fang NX, Barbastathis G. On the interplay between physical and content priors in deep learning for computational imaging. Opt Express 2020; 28:24152-24170. [PMID: 32752400 DOI: 10.1364/oe.395204]
Abstract
Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: first, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect imposed by a training set on the training process with the Shannon entropy of images in the dataset. That is, the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable for weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. the ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter.
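The entropy measure in question is the Shannon entropy of an image's gray-level histogram. A quick sketch, with random arrays as crude stand-ins for the two kinds of databases (a near-uniform "natural" image versus a binary "MNIST-like" one; sizes and thresholds are arbitrary):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                              # 0 log 0 := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
natural_like = rng.random((64, 64))                        # many gray levels: high entropy
binary_like = (rng.random((64, 64)) > 0.5).astype(float)   # two gray levels: low entropy

h_high = image_entropy(natural_like)
h_low = image_entropy(binary_like)
```

A two-level image can carry at most 1 bit per pixel of histogram entropy, while a full 256-bin histogram can approach 8 bits, mirroring the MNIST-versus-ImageNet contrast the abstract draws.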
24
Kang I, Zhang F, Barbastathis G. Phase extraction neural network (PhENN) with coherent modulation imaging (CMI) for phase retrieval at low photon counts. Opt Express 2020; 28:21578-21600. [PMID: 32752433 DOI: 10.1364/oe.397430]
Abstract
Imaging with low-dose light is of importance in various fields, especially when minimizing radiation-induced damage to samples is desirable. The raw image captured at the detector plane is then predominantly a Poisson random process with Gaussian noise added due to the quantum nature of photo-electric conversion. Under such noisy conditions, highly ill-posed problems such as phase retrieval from raw intensity measurements become prone to strong artifacts in the reconstructions; a situation that deep neural networks (DNNs) have already been shown to be useful at improving. Here, we demonstrate that random phase modulation on the optical field, also known as coherent modulation imaging (CMI), in conjunction with the phase extraction neural network (PhENN) and a Gerchberg-Saxton-Fienup (GSF) approximant, further improves resilience to noise of the phase-from-intensity imaging problem. We offer design guidelines for implementing the CMI hardware with the proposed computational reconstruction scheme and quantify reconstruction improvement as a function of photon count.
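For reference, the classic Gerchberg-Saxton iteration that the GSF approximant builds on alternates magnitude constraints between the object and far-field planes. A minimal sketch with synthetic noise-free data (not the paper's CMI setup; sizes and phases are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
amp_obj = np.ones((n, n))                               # known object-plane amplitude
truth = amp_obj * np.exp(1j * rng.random((n, n)))       # unknown phase to recover
amp_far = np.abs(np.fft.fft2(truth))                    # "measured" far-field magnitude

field = amp_obj * np.exp(1j * rng.random((n, n)))       # random initial phase guess
errors = []
for _ in range(50):
    F = np.fft.fft2(field)
    errors.append(np.linalg.norm(np.abs(F) - amp_far))  # data misfit at this iterate
    F = amp_far * np.exp(1j * np.angle(F))              # impose far-field magnitude
    field = np.fft.ifft2(F)
    field = amp_obj * np.exp(1j * np.angle(field))      # impose object-plane magnitude
```

The misfit is non-increasing by construction but can stagnate, especially under low photon counts; that is the regime where the paper brings in CMI and PhENN.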
25
Wang F, Bian Y, Wang H, Lyu M, Pedrini G, Osten W, Barbastathis G, Situ G. Phase imaging with an untrained neural network. Light Sci Appl 2020; 9:77. [PMID: 32411362 PMCID: PMC7200792 DOI: 10.1038/s41377-020-0302-3]
Abstract
Most of the neural networks proposed so far for computational imaging (CI) in optics employ a supervised training strategy, and thus need a large training set to optimize their weights and biases. Setting aside the requirements of environmental and system stability during many hours of data acquisition, in many practical applications, it is unlikely to be possible to obtain sufficient numbers of ground-truth images for training. Here, we propose to overcome this limitation by incorporating into a conventional deep neural network a complete physical model that represents the process of image formation. The most significant advantage of the resulting physics-enhanced deep neural network (PhysenNet) is that it can be used without training beforehand, thus eliminating the need for tens of thousands of labeled data. We take single-beam phase imaging as an example for demonstration. We experimentally show that one needs only to feed PhysenNet a single diffraction pattern of a phase object, and it can automatically optimize the network and eventually produce the object phase through the interplay between the neural network and the physical model. This opens up a new paradigm of neural network design, in which the concept of incorporating a physical model into a neural network can be generalized to solve many other CI problems.
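The key ingredient is the physical model H inserted between the network output and the measurement, so that fitting a single diffraction pattern suffices. For single-beam phase imaging H is free-space propagation, e.g. the angular-spectrum operator sketched below (grid, wavelength, and distances are arbitrary; this is not the authors' code):

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Free-space propagation: the physical forward model H in PhysenNet-style schemes."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)              # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A pure phase object produces a non-uniform diffraction pattern after propagation
n = 128
coords = np.linspace(-1, 1, n)
X, Y = np.meshgrid(coords, coords)
phase_obj = np.exp(1j * 0.5 * np.exp(-(X**2 + Y**2) / 0.1))   # Gaussian phase bump
diffracted = angular_spectrum(phase_obj, wavelength=0.5e-6, dz=1e-3, dx=2e-6)
intensity = np.abs(diffracted) ** 2
```

In PhysenNet-style optimization the network output would replace `phase_obj`, and the loss would compare `intensity` against the single measured diffraction pattern.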
Affiliation(s)
- Fei Wang
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, 201800 Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100049 Beijing, China
- Yaoming Bian
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, 201800 Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100049 Beijing, China
- Haichao Wang
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, 201800 Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100049 Beijing, China
- Meng Lyu
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, 201800 Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100049 Beijing, China
- Giancarlo Pedrini
- Institut für Technische Optik, Universität Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
- Wolfgang Osten
- Institut für Technische Optik, Universität Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139-4301 USA
- Guohai Situ
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, 201800 Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100049 Beijing, China
- Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, 310024 Hangzhou, China
| |
Collapse
26. Komuro K, Nomura T, Barbastathis G. Deep ghost phase imaging. Appl Opt 2020; 59:3376-3382. [PMID: 32400448] [DOI: 10.1364/ao.390256]
Abstract
Deep-learning-based single-pixel phase imaging is proposed. The method, termed deep ghost phase imaging (DGPI), inherits the advantages of computational ghost imaging, i.e., phase imaging with a high signal-to-noise ratio derived from Fellgett's multiplex advantage and the point-like detection of light diffracted from objects. A deep convolutional neural network is trained to output a desired phase distribution from an input defocused intensity distribution reconstructed by single-pixel imaging theory. Compared to the conventional interferometric and transport-of-intensity approaches to single-pixel phase imaging, the DGPI requires neither additional intensity measurements nor explicit approximations. The effects of defocus distance and light level are investigated by numerical simulation, and an optical experiment confirms the feasibility of the DGPI.
27. Wu J, Zhang H, Zhang W, Jin G, Cao L, Barbastathis G. Single-shot lensless imaging with Fresnel zone aperture and incoherent illumination. Light Sci Appl 2020; 9:53. [PMID: 32284855] [PMCID: PMC7138823] [DOI: 10.1038/s41377-020-0289-9]
Abstract
Lensless imaging eliminates the need for geometric isomorphism between a scene and an image while allowing the construction of compact, lightweight imaging systems. However, a challenging inverse problem remains due to the low reconstructed signal-to-noise ratio. Current implementations require multiple masks or multiple shots to denoise the reconstruction. We propose single-shot lensless imaging with a Fresnel zone aperture and incoherent illumination. By using the Fresnel zone aperture to encode the incoherent rays in wavefront-like form, the captured pattern has the same form as an inline hologram. Because conventional back-propagation reconstruction suffers from the twin-image problem, we show that a compressive sensing algorithm, exploiting the sparsity of natural scenes, is effective in removing this twin-image artifact. The reconstruction with a significantly improved signal-to-noise ratio from a single-shot image promotes a camera architecture that is flat, structurally reliable, and free of the need for strict calibration.
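A Fresnel zone aperture is straightforward to synthesize numerically. The sketch below builds a binary FZA mask, the kind of chirped pattern whose shadow under incoherent illumination resembles an inline hologram; the pixel pitch and chirp rate are illustrative choices, not values from the paper:

```python
import numpy as np

def fza_mask(n=256, pitch=10e-6, beta=2.0e8):
    # Binary Fresnel zone aperture: binarized version of the continuous
    # transmittance 0.5 * (1 + cos(beta * r^2)).
    # beta (rad/m^2) sets the zone density; all values are illustrative.
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)
    return (np.cos(beta * (x**2 + y**2)) >= 0).astype(float)

mask = fza_mask()
print(mask.shape, mask.mean())  # open fraction near 0.5
```

Each point source casts a shifted copy of this mask onto the sensor, which is why the capture can be treated as a hologram and reconstructed by back-propagation or, as in the paper, by a compressive solver that suppresses the twin image.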
Affiliation(s)
- Jiachen Wu
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084 Beijing, China
- Hua Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084 Beijing, China
- Wenhui Zhang
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084 Beijing, China
- Guofan Jin
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084 Beijing, China
- Liangcai Cao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, 100084 Beijing, China
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 USA
28. Deng M, Li S, Goy A, Kang I, Barbastathis G. Learning to synthesize: robust phase retrieval at low photon counts. Light Sci Appl 2020; 9:36. [PMID: 32194950] [PMCID: PMC7062747] [DOI: 10.1038/s41377-020-0267-2]
Abstract
The quality of inverse problem solutions obtained through deep learning is limited by the nature of the priors learned from examples presented during the training phase. Particularly in the case of quantitative phase retrieval, spatial frequencies that are underrepresented in the training database, most often at the high band, tend to be suppressed in the reconstruction. Ad hoc solutions have been proposed, such as pre-amplifying the high spatial frequencies in the examples; however, while that strategy improves the resolution, it also leads to high-frequency artefacts, as well as low-frequency distortions in the reconstructions. Here, we present a new approach that learns separately how to handle the two frequency bands, low and high, and learns how to synthesize these two bands into full-band reconstructions. We show that this "learning to synthesize" (LS) method yields phase reconstructions of high spatial resolution and without artefacts and that it is resilient to high-noise conditions, e.g., in the case of very low photon flux. In addition to the problem of quantitative phase retrieval, the LS method is applicable, in principle, to any inverse problem where the forward operator treats different frequency bands unevenly, i.e., is ill-posed.
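The band-splitting step that LS builds on can be illustrated with plain FFT masking; the radial cutoff here is an arbitrary illustrative choice, and in LS the recombination is learned rather than a simple sum:

```python
import numpy as np

def split_bands(img, cutoff=0.1):
    # Split an image into low- and high-frequency bands with a radial
    # mask in the Fourier domain (cutoff in cycles/pixel, illustrative).
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img))
    f1d = np.fft.fftshift(np.fft.fftfreq(n))
    fy, fx = np.meshgrid(f1d, f1d, indexing="ij")
    low_mask = (np.hypot(fx, fy) <= cutoff).astype(float)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * low_mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * (1 - low_mask))))
    return low, high

rng = np.random.default_rng(1)
img = rng.random((64, 64))
low, high = split_bands(img)
# The masks are complementary, so the two bands sum exactly back to
# the image; in LS a separate network learns this synthesis step.
print(np.max(np.abs(low + high - img)))  # ~0
```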
Affiliation(s)
- Mo Deng
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Shuai Li
- Sensebrain Technology Limited LLC, 2550 N 1st Street, Suite 300, San Jose, CA 95131 USA
- Alexandre Goy
- Omnisens SA, Riond Bosson 3, 1110 Morges, VD Switzerland
- Iksung Kang
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore, 117543 Singapore
29. Deng M, Goy A, Li S, Arthur K, Barbastathis G. Probing shallower: perceptual loss trained Phase Extraction Neural Network (PLT-PhENN) for artifact-free reconstruction at low photon budget. Opt Express 2020; 28:2511-2535. [PMID: 32121939] [DOI: 10.1364/oe.381301]
Abstract
Deep neural networks (DNNs) are efficient solvers for ill-posed problems and have been shown to outperform classical optimization techniques in several computational imaging problems. In supervised mode, DNNs are trained by minimizing a measure of the difference between their actual output and their desired output; the choice of measure, referred to as "loss function," severely impacts performance and generalization ability. In a recent paper [A. Goy et al., Phys. Rev. Lett. 121(24), 243902 (2018)], we showed that DNNs trained with the negative Pearson correlation coefficient (NPCC) as the loss function are particularly fit for photon-starved phase-retrieval problems, though the reconstructions are manifestly deficient at high spatial frequencies. In this paper, we show that reconstructions by DNNs trained with default feature loss (defined at VGG layer ReLU-22) contain more fine details; however, grid-like artifacts appear and are enhanced as photon counts become very low. Two additional key findings related to these artifacts are presented here. First, the frequency signature of the artifacts depends on the VGG's inner layer that perceptual loss is defined upon, halving with each MaxPooling2D layer deeper in the VGG. Second, VGG ReLU-12 outperforms all other layers as the defining layer for the perceptual loss.
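The NPCC loss referenced here is simple to state: it is the negated Pearson correlation between prediction and target, which makes it invariant to positive affine rescaling of the prediction. A minimal implementation:

```python
import numpy as np

def npcc(pred, target):
    # Negative Pearson correlation coefficient between two arrays.
    # Perfect (positively) correlated reconstructions score -1, the
    # minimum, which is why it works as a loss to be minimized.
    p = pred - pred.mean()
    t = target - target.mean()
    return -np.sum(p * t) / (np.linalg.norm(p) * np.linalg.norm(t))

a = np.arange(16.0).reshape(4, 4)
print(round(npcc(a, 2 * a + 3), 6))  # -1.0 (affine rescaling ignored)
print(round(npcc(a, -a), 6))         # 1.0 (anti-correlated)
```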
30. Nagelberg S, Goodling A, Subramanian K, Barbastathis G, Kreysing M, Swager T, Zarzar L, Kolle M. Bi-phase emulsion droplets as dynamic fluid optical systems. EPJ Web Conf 2019. [DOI: 10.1051/epjconf/201921513003]
Abstract
Micro-scale optical components play a critical role in many applications, in particular when these components are capable of dynamically responding to different stimuli with a controlled variation of their optical behavior. Here, we discuss the potential of micro-scale bi-phase emulsion droplets as a material platform for dynamic fluid optical components. Such droplets act as liquid compound micro-lenses with dynamically tunable focal lengths. They can be reconfigured to focus or scatter light and to form images. In addition, we discuss how these droplets can be used to create iridescent structural color with large angular spectral separation. Experimental demonstrations of the emulsion droplet optics are complemented by theoretical analysis and wave-optical modelling. Finally, we provide evidence of the droplets' utility as fluidic optical elements in potential application scenarios.
31.
Abstract
Imaging systems' performance at low light intensity is affected by shot noise, which becomes increasingly strong as the power of the light source decreases. In this Letter, we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light and demonstrate better performance than the classical Gerchberg-Saxton phase retrieval algorithm at an equivalent signal-to-noise ratio. The prior contained in the training image set can be leveraged by the deep neural network to detect features with a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object's most salient features with as little as one photon per detector pixel on average in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object, as opposed to training it with the raw intensity measurement.
Affiliation(s)
- Alexandre Goy
- Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Kwabena Arthur
- Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Shuai Li
- Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- George Barbastathis
- Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
32. Zhang Z, Li WN, Asundi A, Barbastathis G. Simultaneous measurement and reconstruction tailoring for quantitative phase imaging. Opt Express 2018; 26:32532-32553. [PMID: 30645419] [DOI: 10.1364/oe.26.032532]
Abstract
We propose simultaneous measurement and reconstruction tailoring (SMaRT) for quantitative phase imaging; it is a joint optimization approach to inverse problems wherein minimizing the expected end-to-end error yields optimal design parameters for both the measurement and reconstruction processes. Using simulated and experimentally-collected data for a specific scenario, we demonstrate that optimizing the design of the two processes together reduces phase reconstruction error over past techniques that consider these two design problems separately. Our results suggest at times surprising design principles, and our approach can potentially inspire improved solution methods for other inverse problems in optics as well as the natural sciences.
33. Li S, Barbastathis G. Spectral pre-modulation of training examples enhances the spatial resolution of the phase extraction neural network (PhENN). Opt Express 2018; 26:29340-29352. [PMID: 30470099] [DOI: 10.1364/oe.26.029340]
Abstract
The phase extraction neural network (PhENN) [Optica 4, 1117 (2017)] is a computational architecture, based on deep machine learning, for lens-less quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained through examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a test raw intensity pattern, PhENN is capable of reconstructing the original phase object robustly, in many cases even for objects outside the database where the training examples were drawn from. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN's spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN's ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose "flattening" the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that the spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
34. Wen Ng X, Barbastathis G, Wohland T. Studying Protein Dynamics and Organization in Live Cell Membranes by Imaging FCS and SOFI/SRRF Analyses. Biophys J 2018. [DOI: 10.1016/j.bpj.2017.11.2946]
35. Zhang Z, Bao C, Ji H, Shen Z, Barbastathis G. Apparent coherence loss in phase space tomography. J Opt Soc Am A Opt Image Sci Vis 2017; 34:2025-2033. [PMID: 29091654] [DOI: 10.1364/josaa.34.002025]
Abstract
A sensor pixel integrates optical intensity across its extent, and we explore the role that this integration plays in phase space tomography. The literature is inconsistent in its treatment of this integration: some approaches model it explicitly, some are ambiguous about whether it is taken into account, and still others assume pixel values to be point samples of the optical intensity. We show that making a point-sample assumption results in apodization of, and thus systematic error in, the recovered ambiguity function, leading to underestimation of the overall degree of coherence. We explore the severity of this effect using a Gaussian Schell-model source and discuss when this effect, as opposed to noise, is the dominant source of error in the retrieved state of coherence.
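The apodization from pixel integration is easy to reproduce numerically: averaging a sinusoid over a pixel of width w attenuates its amplitude by sinc(f w), which a point-sample model misses entirely. A quick check with illustrative numbers:

```python
import numpy as np

def pixel_averaged_amplitude(freq, width, n_sub=1000):
    # Amplitude of cos(2*pi*freq*x) after uniform integration over a
    # pixel of the given width, evaluated by fine sub-sampling.
    x = (np.arange(n_sub) + 0.5) / n_sub * width - width / 2
    return np.mean(np.cos(2 * np.pi * freq * x))

freq, width = 3.0, 0.1          # illustrative frequency and pixel width
measured = pixel_averaged_amplitude(freq, width)
predicted = np.sinc(freq * width)  # numpy's sinc(x) = sin(pi x)/(pi x)
print(measured, predicted)      # both ≈ 0.858: a ~14% amplitude loss
```

In the paper's setting this multiplicative sinc roll-off on the ambiguity function is what depresses the recovered degree of coherence when pixels are treated as point samples.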
36. Baranski M, Rehman S, Muttikulangara SS, Barbastathis G, Miao J. Computational integral field spectroscopy with diverse imaging. J Opt Soc Am A Opt Image Sci Vis 2017; 34:1711-1719. [PMID: 29036145] [DOI: 10.1364/josaa.34.001711]
Abstract
Integral field spectroscopy (IFS) is a well-established method for measuring spectral intensity data of the form s(x,y,λ), where x, y are spatial coordinates and λ is the wavelength. In most flavors of IFS, there is a trade-off between sampling (x,y) and the measured wavelength band Δλ. Here we present the first, to our knowledge, attempt to overcome this trade-off by use of computational imaging and measurement diversity. We implement diversity by including a grating in our design, which allows rotation of the dispersed spectra between measurements. The raw intensity data captured from the rotated grating positions are then processed by an inverse algorithm that exploits sparsity in the data. We present simulated reconstructions based on spatial-spectral data drawn from an experimental dataset. We used non-overlapping portions of the dataset to train our sparsity priors, in the form of a dictionary, and to test the reconstruction quality. We found that, depending on the level of noise in the measurement, diversity up to a maximum number of measurements is beneficial in terms of reducing error, and yields diminishing returns if even more measurements are taken.
37. Hoang TX, Nagelberg SN, Kolle M, Barbastathis G. Fano resonances from coupled whispering-gallery modes in photonic molecules. Opt Express 2017; 25:13125-13144. [PMID: 28788849] [DOI: 10.1364/oe.25.013125]
Abstract
We present a rigorous investigation of resonant coupling between microspheres based on multipole expansions. The microspheres have diameters in the range of several micrometers and can be used to realize various photonic molecule configurations. We reveal and quantify the interactions between the whispering gallery modes inside individual microspheres and the propagation modes of the entire photonic molecule structures. We show that Fano-like resonances in photonic molecules can be engineered by tuning the coupling between the resonant and radiative modes when the structures are illuminated with simple dipole radiation.
38. Choi HJ, Park KC, Lee H, Crouzier T, Rubner MF, Cohen RE, Barbastathis G, McKinley GH. Superoleophilic Titania Nanoparticle Coatings with Fast Fingerprint Decomposition and High Transparency. ACS Appl Mater Interfaces 2017; 9:8354-8360. [PMID: 28164702] [DOI: 10.1021/acsami.6b14631]
Abstract
Low surface tension sebaceous liquids such as human fingerprint oils are readily deposited on high energy surfaces such as clean glass, leaving smudges that significantly lower transparency. There have been several attempts to prevent formation of these dactylograms on glass by employing oil-repellent textured surfaces. However, nanotextured superoleophobic coatings typically scatter visible light, and the intrinsic thermodynamic metastability of the composite superoleophobic state can result in failure of the oil repellency under moderate contact pressure. We develop titania-based porous nanoparticle coatings that are superoleophilic and highly transparent and which exhibit short time scales for decomposition of fingerprint oils under ultraviolet light. The mechanism by which a typical dactylogram is consumed combines wicking of the sebum into the nanoporous titania structure followed by photocatalytic degradation. We envision a wide range of applications because these TiO2 nanostructured surfaces remain photocatalytically active against fingerprint oils in natural sunlight and are also compatible with flexible glass substrates.
Affiliation(s)
- George Barbastathis
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore
39.
Abstract
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
40. Kuang C, Ma Y, Zhou R, Lee J, Barbastathis G, Dasari RR, Yaqoob Z, So PTC. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy. Opt Express 2015; 23:26999-27010. [PMID: 26480361] [PMCID: PMC4646516] [DOI: 10.1364/oe.23.026999]
Abstract
We report a novel approach to Fourier ptychographic microscopy (FPM) that uses a digital micromirror device (DMD) and a coherent laser source (532 nm) to generate spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high-power coherent laser source to enable shot-noise-limited high-speed imaging. For the first time, a DMD, imaged onto the back focal plane of the illumination objective, is used to generate a spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane-wave angle can be varied at speeds of more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, is used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating the high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed in DMD-based FPM systems.
Affiliation(s)
- Cuifang Kuang
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou 310027, China
- Ye Ma
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou 310027, China
- Renjie Zhou
- Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Justin Lee
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Ramachandra R. Dasari
- Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Zahid Yaqoob
- Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Peter T. C. So
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
41. Kim JG, Hsieh CH, Choi HJ, Gardener J, Singh B, Knapitsch A, Lecoq P, Barbastathis G. Conical photonic crystals for enhancing light extraction efficiency from high refractive index materials. Opt Express 2015; 23:22730-22739. [PMID: 26368241] [DOI: 10.1364/oe.23.022730]
Abstract
We propose, analyze and optimize a two-dimensional conical photonic crystal geometry to enhance light extraction from a high refractive index material, such as an inorganic scintillator. The conical geometry suppresses Fresnel reflections at an optical interface due to adiabatic impedance matching from a gradient index effect. The periodic array of cone structures with a pitch larger than the wavelength of light diffracts light into higher-order modes with different propagating angles, enabling certain photons to overcome total internal reflection (TIR). The numerical simulation shows simultaneous light yield gains relative to a flat surface both below and above the critical angle and how key parameters affect the light extraction efficiency. Our optimized design provides a 46% gain in light yield when the conical photonic crystals are coated on an LSO (cerium-doped lutetium oxyorthosilicate) scintillator.
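The TIR bottleneck motivating this design can be quantified with the escape-cone argument: for an isotropic emitter behind a flat face, only the fraction (1 - cos θc)/2 of the light lies inside the critical angle θc = arcsin(n_out/n_in). A sketch, taking the commonly quoted LSO refractive index of about 1.82 as an assumption:

```python
import math

def escape_fraction(n_in, n_out=1.0):
    # Fraction of isotropically emitted light inside the escape cone of
    # one flat face (Fresnel losses ignored): (1 - cos(theta_c)) / 2.
    theta_c = math.asin(n_out / n_in)
    return (1 - math.cos(theta_c)) / 2

# With n ~ 1.82, TIR traps the large majority of scintillation light at
# a flat exit face, which is what the conical texture is designed to beat.
print(escape_fraction(1.82))  # about 0.08: only ~8% escapes directly
```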
42.
Abstract
Microsphere-based microscopy systems have garnered considerable recent interest, mainly due to their capacity for focusing light and imaging beyond the diffraction limit. In this paper, we present theoretical foundations for studying the optical performance of such systems by developing a complete theoretical model encompassing illumination, sample interaction, and imaging/collection. Using this model, we show that surface waves play a significant role in focusing and imaging with the microsphere. We also show that by designing a radially polarized convergent beam, we can focus to a spot smaller than the diffraction limit. By exploiting surface waves, we are able to resolve two dipoles spaced 98 nm apart in simulation using light at a wavelength of 402.292 nm. Using our model, we also explore the effect of beam geometry and polarization on optical resolution and focal spot size, showing that both greatly affect the shape of the spot.
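For context on the 98 nm claim, the far-field two-point limit at this wavelength is easy to compute; the NA = 1 value below is the most generous far-field case, not a parameter from the paper:

```python
def abbe_limit(wavelength_m, na):
    # Abbe two-point resolution limit d = lambda / (2 NA).
    return wavelength_m / (2 * na)

# At 402.292 nm even NA = 1 gives ~201 nm, so the 98 nm separation
# resolved in the paper's simulations is well beyond the far-field limit,
# consistent with the claimed role of (near-field) surface waves.
print(abbe_limit(402.292e-9, 1.0))  # 201.146 nm
```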
43. Li S, Zhou C, Barbastathis G. Polarization-independent Talbot effect. Opt Lett 2015; 40:1988-1991. [PMID: 25927765] [DOI: 10.1364/ol.40.001988]
Abstract
We report the first observation of a polarization-independent Talbot effect with a high-density grating for TE and TM polarizations, which is attributed to the identical phases and diffraction efficiencies of the diffraction orders for both polarizations. We introduce the simplified modal method, which provides an insightful physical description of the diffraction efficiency and phase underlying the polarization-independent Talbot effect. Only two even grating modes can be excited, which determines the diffraction properties of the near-field image. We expect this theoretical work to be helpful for the many potential applications of the Talbot effect.
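For reference, the paraxial self-imaging distance behind a grating of period d is the Talbot length z_T = 2d²/λ; for the high-density (wavelength-scale) gratings studied here this paraxial formula is only approximate. Illustrative numbers:

```python
def talbot_length(period_m, wavelength_m):
    # Paraxial Talbot self-imaging distance z_T = 2 d^2 / lambda.
    # For gratings with period approaching the wavelength, as in the
    # paper, non-paraxial corrections to this textbook formula matter.
    return 2 * period_m**2 / wavelength_m

# e.g. a 10 um period grating under 633 nm illumination (illustrative)
print(talbot_length(10e-6, 633e-9))  # ~3.16e-4 m, i.e. ~316 um
```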
44.
Abstract
A three-dimensional (3D) pupil is an optical element, most commonly implemented as a volume hologram, that processes the incident optical field in a 3D fashion. Here we analyze the diffraction properties of a 3D pupil with finite lateral aperture in the 4-f imaging system configuration, using the Wigner Distribution Function (WDF) formulation. Since the 3D pupil is finite in both the lateral and longitudinal directions, the WDF of the volume holographic 4-f imager theoretically predicts distinct Bragg diffraction patterns in phase space. These result in asymmetric profiles of the diffracted coherent point spread function between degenerate diffraction and Bragg diffraction, elucidating the fundamental performance of volume holographic imaging. Experimental measurements are also presented, confirming the theoretical predictions.
Affiliation(s)
- Hsi-Hsun Chen
- Center for Optoelectronic Medicine, National Taiwan University, Taipei 10051, Taiwan
- Se Baek Oh
- KLA-Tencor Corporation, Milpitas, California 95035, USA
- Xiaomin Zhai
- Center for Optoelectronic Medicine, National Taiwan University, Taipei 10051, Taiwan
- Jui-Chang Tsai
- Center for Optoelectronic Medicine, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Devices and Imaging System, National Taiwan University, Taipei 10051, Taiwan
- Liang-Cai Cao
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, 1 CREATE Way, #10-01 CREATE Tower, 138602, Singapore
- Yuan Luo
- Center for Optoelectronic Medicine, National Taiwan University, Taipei 10051, Taiwan
- Institute of Medical Devices and Imaging System, National Taiwan University, Taipei 10051, Taiwan
- Molecular Imaging Center, National Taiwan University, Taipei 10672, Taiwan
45. Chen W, Tian L, Rehman S, Zhang Z, Lee HP, Barbastathis G. Empirical concentration bounds for compressive holographic bubble imaging based on a Mie scattering model. Opt Express 2015; 23:4715-4725. [PMID: 25836508] [DOI: 10.1364/oe.23.004715]
Abstract
We use compressive in-line holography to image air bubbles in water and investigate the effect of bubble concentration on reconstruction performance by simulation. Our forward model treats bubbles as finite spheres and uses Mie scattering to compute the scattered field in a physically rigorous manner. Although no simple analytical bounds on maximum concentration can be derived within the classical compressed sensing framework due to the complexity of the forward model, the receiver operating characteristic (ROC) curves in our simulation provide an empirical concentration bound for accurate bubble detection by compressive holography at different noise levels, resulting in a maximum tolerable concentration much higher than that of the traditional back-propagation method.
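The back-propagation baseline mentioned at the end is conventionally implemented with the angular spectrum method; a minimal sketch with illustrative wavelength and pixel pitch:

```python
import numpy as np

def angular_spectrum(field, dist, wavelength, pitch):
    # Propagate a complex field by `dist` with the angular spectrum
    # method; negative `dist` back-propagates a recorded hologram toward
    # the object plane, which is the traditional reconstruction baseline.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

# Sanity check: propagating forward then backward returns the original
# field when evanescent components are negligible, as here.
u0 = np.ones((64, 64), complex)
u1 = angular_spectrum(u0, 1e-3, 633e-9, 5e-6)
u2 = angular_spectrum(u1, -1e-3, 633e-9, 5e-6)
print(np.max(np.abs(u2 - u0)))  # ~0
```

The paper's compressive approach replaces this direct inversion with a sparsity-regularized solve over the same (Mie-based) forward model, which is what buys the higher tolerable bubble concentration.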
46
|
Sheppard CJR, Kou SS, Lin J, Sharma M, Barbastathis G. Temporal reshaping of two-dimensional pulses. Opt Express 2014; 22:32016-32025. [PMID: 25607169 DOI: 10.1364/oe.22.032016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
An analytic study of complete cylindrical focusing of pulses in two dimensions is presented and compared with the analogous three-dimensional case of focusing over a complete sphere. Such behavior is relevant for understanding the limiting performance of ultrafast planar photonic and plasmonic devices. A particular spectral distribution containing finite energy is assumed. Separate ingoing and outgoing pulsed waves are considered, along with the combination that would be generated in free space by an ingoing wave. It is shown that in the two-dimensional case, an asymmetric pulse must be launched in order to produce a temporally symmetrical pulse at the focus. A symmetrical outgoing pulse is generated from a source with asymmetric time behavior, or from an anti-symmetric input pulse. These results differ markedly from the corresponding three-dimensional case and imply fundamental limitations on the performance of ultrafast, tightly focused, two-dimensional devices.
47
|
Chen Z, Gao H, Barbastathis G. Background suppression in long-distance imaging using volume hologram filters. Opt Express 2014; 22:31123-31130. [PMID: 25607061 DOI: 10.1364/oe.22.031123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We performed experiments using a volume hologram filter (VHF) coupled with a telephoto objective lens to detect weak distant signals masked by strong background noise. The VHF was able to selectively pass light originating from a certain distance while attenuating background noise contributions from other distances, resulting in a higher signal-to-noise ratio (SNR). The proposed method is useful in remote sensing applications such as daytime artificial satellite and space debris detection.
48
|
Zhu Y, Zhang Z, Barbastathis G. Phase imaging for absorptive phase objects using hybrid uniform and structured illumination transport of intensity equation. Opt Express 2014; 22:28966-28976. [PMID: 25402135 DOI: 10.1364/oe.22.028966] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
The transport of intensity equation (TIE) has been a popular and convenient phase imaging method that retrieves the phase profile from measurements of intensity differentials. The conventional 2-shot uniform-illumination TIE gives reliable inversion of phase from intensity in many situations of practical interest; however, it has a null space consisting of fields with non-zero circulation of the Poynting vector. Here, we propose the hybrid-illumination TIE method to disambiguate such objects. By comparing the diffraction signals under uniform and structured (sinusoidal) illumination patterns, we obtain a modulation-induced signal that depends solely on the phase gradient. In this way, we also increase signal sensitivity in the low-spatial-frequency region.
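The uniform-illumination inversion the abstract refers to reduces, for constant intensity I0, to an inverse Laplacian: k dI/dz = -I0 * laplacian(phi). A minimal Fourier-domain sketch of that step is below; the grid size, wavelength, and regularization constant are illustrative assumptions, not the authors' implementation, and the null-space/low-frequency weakness the paper addresses is exactly where the regularized division below is fragile.

```python
import numpy as np

def fourier_laplacian(f, dx):
    """Laplacian of a periodic 2-D field via FFT (used to synthesize dI/dz)."""
    n = f.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k2 = -(2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)
    return np.fft.ifft2(k2 * np.fft.fft2(f)).real

def tie_solve(dIdz, I0, k, dx, eps=1e-9):
    """Recover phase from the axial intensity derivative via the
    uniform-illumination TIE:  k dI/dz = -I0 * laplacian(phi).

    The inverse Laplacian is applied in Fourier space; eps is a small
    Tikhonov-style constant taming the near-zero-frequency division,
    the regime where TIE is notoriously noise-sensitive.
    """
    n = dIdz.shape[0]
    rhs = -(k / I0) * dIdz                     # equals laplacian(phi)
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k2 = -(2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)
    phi_hat = np.fft.fft2(rhs) / (k2 - eps)
    phi_hat[0, 0] = 0.0                        # phase defined up to a constant
    return np.fft.ifft2(phi_hat).real
```

Note that any phase whose Poynting-vector circulation is non-zero contributes nothing to dI/dz and so cannot be recovered by this inversion, which is the ambiguity the hybrid structured-illumination measurement is introduced to resolve.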
49
|
Zhu Y, Shanker A, Tian L, Waller L, Barbastathis G. Low-noise phase imaging by hybrid uniform and structured illumination transport of intensity equation. Opt Express 2014; 22:26696-711. [PMID: 25401819 DOI: 10.1364/oe.22.026696] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
We demonstrate a new approach to the transport of intensity equation (TIE) phase retrieval method which uses structured illumination to improve low-frequency noise performance. In the hybrid scheme, two phase images are acquired: one with uniform and one with sinusoidal grating illumination intensity. The former best preserves the high-spatial-frequency features of the phase, whereas the latter dramatically increases the response at low spatial frequencies (where traditional TIE notoriously suffers). We then theoretically derive a spectral filter design that optimally combines the two phase results while suppressing noise. The combination of uniform- and structured-illumination TIE (hybrid TIE) phase imaging is experimentally demonstrated optically with a calibrated pure phase object.
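The merging step can be pictured as a complementary pair of spectral weights: low spatial frequencies are taken from the structured-illumination estimate and high frequencies from the uniform one. The sketch below assumes a simple Gaussian filter pair for illustration; the paper derives an optimal, noise-dependent filter instead, so the function and its `f_cut` parameter are hypothetical.

```python
import numpy as np

def hybrid_merge(phi_uniform, phi_structured, dx, f_cut):
    """Fourier-domain merge of two phase estimates.

    Frequencies below ~f_cut are weighted toward the structured-illumination
    estimate (less low-frequency noise); the remainder comes from the
    uniform-illumination estimate. The two weights sum to one, so a signal
    common to both inputs passes through unchanged.
    """
    n = phi_uniform.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    fr2 = FX ** 2 + FY ** 2
    w_low = np.exp(-fr2 / (2 * f_cut ** 2))    # weight on structured estimate
    merged_hat = (w_low * np.fft.fft2(phi_structured)
                  + (1.0 - w_low) * np.fft.fft2(phi_uniform))
    return np.fft.ifft2(merged_hat).real
```

For example, a constant (zero-frequency) bias on the uniform-illumination estimate is rejected entirely, because its weight vanishes at DC.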
50
|
Abstract
We propose a new approach to the complete retrieval of a coherent field (amplitude and phase) using the same hardware configuration as a Shack-Hartmann sensor but with two modifications: first, we add a transversally shifted measurement to resolve ambiguities in the measured phase; and second, we employ factored form descent (FFD), an inverse algorithm for coherence retrieval, with a hard rank constraint. We verified the proposed approach using both numerical simulations and experiments.
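A Shack-Hartmann sensor measures local phase slopes, and the standard (curl-free) part of the field's phase follows from least-squares integration of those slopes. The sketch below shows only that conventional reconstruction step, in the Fourier domain; the paper's contribution, namely the transversally shifted second measurement and the rank-constrained factored form descent (FFD) that resolve the remaining ambiguities, is not reproduced here, and all names and grid parameters are illustrative.

```python
import numpy as np

def fourier_grad(phi, dx):
    """x/y derivatives of a periodic 2-D field via FFT (synthetic slope data)."""
    n = phi.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    P = np.fft.fft2(phi)
    gx = np.fft.ifft2(1j * 2 * np.pi * FX * P).real
    gy = np.fft.ifft2(1j * 2 * np.pi * FY * P).real
    return gx, gy

def integrate_gradients(gx, gy, dx):
    """Least-squares phase from measured x/y slopes (Fourier integration).

    Solves min ||grad(phi) - (gx, gy)||^2 in closed form per frequency.
    Piston (the DC term) is unobservable from slopes and is set to zero;
    curl-containing slope fields are projected onto their curl-free part.
    """
    n = gx.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    denom = (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)
    denom[0, 0] = 1.0                          # avoid 0/0 at DC
    num = -1j * 2 * np.pi * (FX * np.fft.fft2(gx) + FY * np.fft.fft2(gy))
    phi_hat = num / denom
    phi_hat[0, 0] = 0.0                        # piston set to zero
    return np.fft.ifft2(phi_hat).real
```

The projection onto curl-free fields in the last step is precisely why a plain Shack-Hartmann inversion is ambiguous for fields with circulating Poynting vector, motivating the shifted measurement and FFD in the paper.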