1
Alido J, Greene J, Xue Y, Hu G, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L, Li Y. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. Opt Express 2024; 32:6241-6257. [PMID: 38439332] [PMCID: PMC11018337] [DOI: 10.1364/oe.514072]
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals caused by scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths of up to a scattering length. We analyze fundamental tradeoffs, arising from network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where paired experimental training data are lacking.
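The core idea of the abstract, training only on synthetic pairs because experimental ground truth is unobtainable, can be sketched in miniature. The forward model below (a Gaussian blur plus a random background scaled to a target SBR, with shot noise) is a toy stand-in, not the authors' light-field model; the function name `synth_pair` and the mean-based SBR definition are illustrative assumptions:

```python
import numpy as np

def synth_pair(rng, size=64, n_emitters=5, sbr=1.05):
    """Generate one (measurement, ground_truth) training pair at a target
    signal-to-background ratio, mimicking simulator-based supervision."""
    truth = np.zeros((size, size))
    ys, xs = rng.integers(0, size, n_emitters), rng.integers(0, size, n_emitters)
    truth[ys, xs] = 1.0
    # Toy forward model: blur the point emitters (stand-in for the system PSF).
    k = np.exp(-np.linspace(-3, 3, 9) ** 2 / 2)
    k /= k.sum()
    signal = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 0, truth)
    signal = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, signal)
    # Heterogeneous scattering background scaled to the target SBR, here taken
    # (as one possible convention) as SBR = (signal + background) / background,
    # so background = signal / (sbr - 1) on average.
    bg = rng.random((size, size))
    bg *= signal.mean() / ((sbr - 1) * bg.mean())
    # Shot noise on the combined intensity.
    measurement = rng.poisson((signal + bg) * 1000) / 1000.0
    return measurement, truth

rng = np.random.default_rng(1)
meas, truth = synth_pair(rng)
```

A descattering network would then be trained to map `meas` back to `truth` over many such random draws, which is what lets the approach work without paired experimental data.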
Affiliation(s)
- Jeffrey Alido
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Joseph Greene
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Mitchell Gilmore
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Kevin J. Monk
- Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Brett T. DiBenedictis
- Department of Psychology and Brain Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ian G. Davison
- Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Current address: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, USA
2
Mazlin V. Optical tomography in a single camera frame using fringe-encoded deep-learning full-field OCT. Biomed Opt Express 2024; 15:222-236. [PMID: 38223177] [PMCID: PMC10783898] [DOI: 10.1364/boe.506664]
Abstract
Optical coherence tomography is a valuable tool for in vivo examination thanks to its superior combination of axial resolution, field of view and working distance. OCT images are reconstructed from several phases that are obtained by modulating/multiplexing the light wavelength or optical path. This paper shows that only one phase (and one camera frame) is sufficient for en face tomography. The idea is to encode a high-frequency fringe pattern into the selected layer of the sample using low-coherence interferometry. This pattern can then be efficiently extracted with a high-pass filter, enhanced via deep learning networks, to create the tomographic full-field OCT view. This brings a 10-fold improvement in imaging speed, considerably reducing the phase errors and incoherent-light artifacts caused by in vivo movements. Moreover, this work opens a path toward low-cost tomography with slow consumer cameras. Optically, the device resembles conventional time-domain full-field OCT without incurring additional costs or a reduction in field of view or resolution. The approach is validated by imaging the cornea of human subjects in vivo. Open-source and easy-to-follow code for data generation, training, and inference with U-Net/Pix2Pix networks is provided for use in a variety of image-to-image translation tasks.
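The first stage described in this abstract, isolating the interferometric fringes with a high-pass filter before any deep-learning enhancement, can be sketched as a Fourier-domain radial high-pass. This is a generic illustration on a synthetic image (the function name, cutoff value, and test pattern are assumptions, not the paper's actual pipeline):

```python
import numpy as np

def highpass_extract(image, cutoff=0.2):
    """Isolate high-frequency fringe content with a Fourier-domain
    radial high-pass filter (cutoff in units of the Nyquist frequency)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    yy, xx = np.mgrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    # Normalized radial frequency; zero out everything below the cutoff.
    r = np.sqrt((yy / (ny / 2)) ** 2 + (xx / (nx / 2)) ** 2)
    F[r < cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic example: a smooth background plus a high-frequency fringe "encoding".
n = 128
y, x = np.mgrid[0:n, 0:n]
background = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 40 ** 2))
fringes = 0.1 * np.cos(2 * np.pi * 0.4 * x)   # 0.4 cycles/pixel carrier
extracted = highpass_extract(background + fringes)
```

After this step, `extracted` retains the fringe carrier while the smooth background is largely suppressed; the paper's contribution is then to turn that raw filtered signal into a clean tomographic view with a trained network.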
Affiliation(s)
- Viacheslav Mazlin
- Institut Langevin, ESPCI Paris, PSL University, CNRS, 1 rue Jussieu, 75005 Paris, France
- Quinze-Vingts National Eye Hospital, 28 Rue de Charenton, 75012 Paris, France
3
Alido J, Greene J, Xue Y, Hu G, Li Y, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. Preprint. arXiv 2023; arXiv:2303.12573v2. [PMID: 36994164] [PMCID: PMC10055497]
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals caused by scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths of up to a scattering length. We analyze fundamental tradeoffs, arising from network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where paired experimental training data are lacking.
Affiliation(s)
- Jeffrey Alido
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Joseph Greene
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Guorong Hu
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Mitchell Gilmore
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Kevin J. Monk
- Department of Biology, Boston University, Boston, MA 02215, USA
- Brett T. DiBenedictis
- Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ian G. Davison
- Department of Biology, Boston University, Boston, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
4
Yang B, Liu W, Chen X, Chen G, Zhu X. A novel multi-frame wavelet generative adversarial network for scattering reconstruction of structured illumination microscopy. Phys Med Biol 2023; 68:185016. [PMID: 37619594] [DOI: 10.1088/1361-6560/acf3cb]
Abstract
Objective. Structured illumination microscopy (SIM) is widely used in many fields of life science research. In clinical practice, it offers low phototoxicity, fast imaging speed and no need for special fluorescent markers. However, SIM is still affected by the scattering media of biological tissues, resulting in insufficient resolution of the obtained images, which limits its use in the life sciences. A novel multi-frame wavelet generative adversarial network (MWGAN) is proposed to improve the scattering-reconstruction capability of SIM. Approach. MWGAN is based on two components derived from the original image. A generative adversarial network built on the wavelet transform is trained to reconstruct complex details of the cell structure, and a multi-frame adversarial network exploits inter-frame information, using the complementary content of the preceding and following frames to improve the quality of the reconstruction. Results. To demonstrate the robustness of MWGAN, multiple low-quality SIM image datasets were tested. Compared with state-of-the-art methods, the proposed method achieves superior performance in both subjective and objective evaluations. Conclusion. MWGAN is effective at improving the clarity of SIM images. Moreover, SIM images reconstructed from multiple frames show improved quality in complex regions and allow clearer, dynamic observation of cellular function.
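The wavelet-domain decomposition on which such a generator operates can be illustrated with a single-level 2D Haar transform: the image is split into a low-frequency approximation (LL) and three detail subbands (LH, HL, HH), and a wavelet GAN learns to restore the subbands rather than the raw pixels. This is a minimal generic sketch with perfect reconstruction, not the paper's MWGAN architecture; the function names are illustrative:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + HL
    a[:, 1::2] = LL - HL
    d = np.empty_like(a)
    d[:, 0::2] = LH + HH
    d[:, 1::2] = LH - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img
```

Because the transform is invertible, a network can refine the detail subbands (where fine cell structure lives) and the inverse transform returns a full-resolution image.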
Affiliation(s)
- Bin Yang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
5
Späth M, Romboy A, Nzenwata I, Rohde M, Ni D, Ackermann L, Stelzle F, Hohmann M, Klämpfl F. Experimental Validation of Shifted Position-Diffuse Reflectance Imaging (SP-DRI) on Optical Phantoms. Sensors (Basel) 2022; 22:9880. [PMID: 36560250] [PMCID: PMC9783365] [DOI: 10.3390/s22249880]
Abstract
Numerous diseases such as hemorrhage, sepsis or cardiogenic shock induce heterogeneous perfusion of the capillaries. To detect such alterations in the human blood-flow pattern, diagnostic devices must provide an appropriately high spatial resolution. Shifted position-diffuse reflectance imaging (SP-DRI), an all-optical diagnostic technique, has the potential to do so. So far, SP-DRI has mainly been developed using Monte Carlo simulations. The present study therefore validates this algorithm experimentally on realistic optical phantoms with thread structures down to 10 μm in diameter; an SP-DRI sensor prototype was developed and realized by means of additive manufacturing. SP-DRI proved functional within this experimental framework. The positions of the structures within the optical phantoms become clearly visible with SP-DRI, and the structure thickness is reflected as a modulation in the SP-DRI signal amplitude; this worked well for shifts along the x axis as well as along the y axis. Moreover, SP-DRI successfully masked the pronounced influence of the illumination cone on the data. The algorithm proved significantly superior to mere raw-data inspection. Within the scope of the study, the constructive design of the SP-DRI sensor prototype is discussed and potential for improvement is explored.
Affiliation(s)
- Moritz Späth
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Alexander Romboy
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Ijeoma Nzenwata
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Maximilian Rohde
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Department of Oral and Maxillofacial Surgery, University Hospital Erlangen, 91054 Erlangen, Germany
- Dongqin Ni
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Lisa Ackermann
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Florian Stelzle
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Department of Oral and Maxillofacial Surgery, University Hospital Erlangen, 91054 Erlangen, Germany
- Martin Hohmann
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
- Florian Klämpfl
- Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, 91052 Erlangen, Germany
6
Morales-Curiel LF, Gonzalez AC, Castro-Olvera G, Lin LC, El-Quessny M, Porta-de-la-Riva M, Severino J, Morera LB, Venturini V, Ruprecht V, Ramallo D, Loza-Alvarez P, Krieg M. Volumetric imaging of fast cellular dynamics with deep learning enhanced bioluminescence microscopy. Commun Biol 2022; 5:1330. [PMID: 36463346] [PMCID: PMC9719505] [DOI: 10.1038/s42003-022-04292-x]
Abstract
Bioluminescence microscopy is an appealing alternative to fluorescence microscopy because it does not depend on external illumination and consequently neither produces spurious background autofluorescence nor perturbs intrinsically photosensitive processes in living cells and animals. The low photon emission of known luciferases, however, demands long exposure times that are prohibitive for imaging fast biological dynamics. To increase the versatility of bioluminescence microscopy, we present an improved low-light microscope combined with deep learning methods to image extremely photon-starved samples, enabling subsecond exposures for timelapse and volumetric imaging. We apply our method to image subcellular dynamics in mouse embryonic stem cells, epithelial morphology during zebrafish development, and DAF-16 FoxO transcription factor shuttling from the cytoplasm to the nucleus under external stress. Finally, we concatenate neural networks for denoising and light-field deconvolution to resolve intracellular calcium dynamics in three dimensions in freely moving Caenorhabditis elegans.
Affiliation(s)
- Gustavo Castro-Olvera
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Li-Chun (Lynn) Lin
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Malak El-Quessny
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Jacqueline Severino
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Laura Battle Morera
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Valeria Venturini
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- Verena Ruprecht
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain
- Diego Ramallo
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Pablo Loza-Alvarez
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Michael Krieg
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain