1
Guo R, Yang Q, Chang AS, Hu G, Greene J, Gabel CV, You S, Tian L. EventLFM: event camera integrated Fourier light field microscopy for ultrafast 3D imaging. Light Sci Appl 2024; 13:144. PMID: 38918363; PMCID: PMC11199625; DOI: 10.1038/s41377-024-01502-5. Received 12/07/2023; revised 05/27/2024; accepted 06/09/2024.
Abstract
Ultrafast 3D imaging is indispensable for visualizing complex and dynamic biological processes. Conventional scanning-based techniques necessitate an inherent trade-off between acquisition speed and space-bandwidth product (SBP). Emerging single-shot 3D wide-field techniques offer a promising alternative but are bottlenecked by the synchronous readout constraints of conventional CMOS systems, which restrict data throughput and thus cap the frame rate at which a high SBP can be maintained. To address this, we introduce EventLFM, a straightforward and cost-effective system that overcomes these challenges by integrating an event camera with Fourier light field microscopy (LFM), a state-of-the-art single-shot 3D wide-field imaging technique. The event camera operates on a novel asynchronous readout architecture, thereby bypassing the frame rate limitations inherent to conventional CMOS systems. We further develop a simple and robust event-driven LFM reconstruction algorithm that can reliably reconstruct 3D dynamics from the unique spatiotemporal measurements captured by EventLFM. Experimental results demonstrate that EventLFM can robustly reconstruct fast-moving and rapidly blinking 3D fluorescent samples at kHz frame rates. Furthermore, we highlight EventLFM's capability for imaging blinking neuronal signals in scattering mouse brain tissues and for 3D tracking of GFP-labeled neurons in freely moving C. elegans. We believe that the combined ultrafast speed and large 3D SBP offered by EventLFM may open up new possibilities across many biomedical applications.
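As a rough illustration of the event-driven acquisition this abstract describes, the sketch below bins an asynchronous event stream of (timestamp, x, y, polarity) tuples into fixed-interval frames at a kHz rate. The function name, event layout, and 1 kHz bin width are illustrative assumptions, not the EventLFM pipeline.

```python
import numpy as np

def events_to_frames(events, sensor_hw, frame_rate_hz=1000.0):
    """Bin events (t [s], x, y, polarity) into signed-count frames."""
    h, w = sensor_hw
    t = events[:, 0]
    n_frames = int(np.floor(t.max() * frame_rate_hz)) + 1
    frames = np.zeros((n_frames, h, w), dtype=np.int32)
    idx = (t * frame_rate_hz).astype(int)        # frame index per event
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = np.where(events[:, 3] > 0, 1, -1)        # ON events +1, OFF events -1
    np.add.at(frames, (idx, y, x), p)            # scatter-add handles repeated pixels
    return frames

# Two ON events in the first millisecond, one OFF event in the second.
ev = np.array([[0.0002, 3, 2, 1],
               [0.0007, 3, 2, 1],
               [0.0014, 1, 0, 0]])
frames = events_to_frames(ev, (4, 5))
```

The signed counts preserve polarity information; a reconstruction algorithm would consume these binned frames (or the raw events directly) rather than conventional intensity images.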
Affiliation(s)
- Ruipeng Guo: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Qianwan Yang: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Andrew S Chang: Department of Physiology and Biophysics, Boston University, Boston, MA 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Joseph Greene: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Christopher V Gabel: Department of Physiology and Biophysics, Boston University, Boston, MA 02215, USA; Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Sixian You: Research Laboratory of Electronics (RLE), Department of Electrical Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA; Neurophotonics Center, Boston University, Boston, MA 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
2
Zhang Y, Yuan L, Zhu Q, Wu J, Nöbauer T, Zhang R, Xiao G, Wang M, Xie H, Guo Z, Dai Q, Vaziri A. A miniaturized mesoscope for the large-scale single-neuron-resolved imaging of neuronal activity in freely behaving mice. Nat Biomed Eng 2024. PMID: 38902522; DOI: 10.1038/s41551-024-01226-2. Received 02/13/2023; accepted 04/03/2024.
Abstract
Exploring the relationship between neuronal dynamics and ethologically relevant behaviour involves recording neuronal-population activity using technologies that are compatible with unrestricted animal behaviour. However, head-mounted microscopes that accommodate weight limits to allow for free animal behaviour typically compromise field of view, resolution or depth range, and are susceptible to movement-induced artefacts. Here we report a miniaturized head-mounted fluorescent mesoscope that we systematically optimized for calcium imaging at single-neuron resolution, for increased fields of view and depth of field, and for robustness against motion-generated artefacts. Weighing less than 2.5 g, the mesoscope enabled recordings of neuronal-population activity at up to 16 Hz, with 4 μm resolution over 300 μm depth-of-field across a field of view of 3.6 × 3.6 mm² in the cortex of freely moving mice. We used the mesoscope to record large-scale neuronal-population activity in socially interacting mice during free exploration and during fear-conditioning experiments, and to investigate neurovascular coupling across multiple cortical regions.
Affiliation(s)
- Yuanlong Zhang: Department of Automation, Tsinghua University, Beijing, China; Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Lekang Yuan: Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Qiyu Zhu: School of Medicine, Tsinghua University, Beijing, China; Tsinghua-Peking Joint Center for Life Sciences, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China
- Tobias Nöbauer: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Rujin Zhang: Department of Anesthesiology, the First Medical Center, Chinese PLA General Hospital, Beijing, China
- Guihua Xiao: Department of Automation, Tsinghua University, Beijing, China
- Mingrui Wang: Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Hao Xie: Department of Automation, Tsinghua University, Beijing, China
- Zengcai Guo: School of Medicine, Tsinghua University, Beijing, China; Tsinghua-Peking Joint Center for Life Sciences, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China; IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Alipasha Vaziri: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
3
Song P, Jadan HV, Howe CL, Foust AJ, Dragotti PL. Model-Based Explainable Deep Learning for Light-Field Microscopy Imaging. IEEE Trans Image Process 2024; 33:3059-3074. PMID: 38656840; PMCID: PMC11100862; DOI: 10.1109/tip.2024.3387297. Received 04/20/2023; revised 01/27/2024; accepted 03/12/2024.
Abstract
In modern neuroscience, observing the dynamics of large populations of neurons is a critical step toward understanding how networks of neurons process information. Light-field microscopy (LFM) has emerged as a scanless, high-speed, three-dimensional (3D) imaging tool that is particularly attractive for this purpose. Imaging neuronal activity with LFM calls for novel computational approaches that fully exploit the domain knowledge embedded in physics and optics models while offering high interpretability and transparency. To this end, we propose a model-based explainable deep learning approach for LFM. Unlike purely data-driven methods, the proposed approach integrates wave-optics theory, sparse representation and non-linear optimization with an artificial neural network. In particular, the architecture of the proposed neural network is designed to follow precise signal and optimization models. Moreover, the network's parameters are learned from a training dataset using a novel training strategy that combines layer-wise training with tailored knowledge distillation. This design allows the network to take advantage of domain knowledge while learning new features, combining the benefits of model-based and learning-based methods and thereby contributing to superior interpretability, transparency and performance. Evaluations on both structural and functional LFM data obtained from scattering mammalian brain tissues demonstrate that the proposed approach achieves fast, robust 3D localization of neuron sources and accurate neural activity identification.
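The general idea of designing network layers to follow an optimization model can be sketched, under assumptions, as an "unrolled" ISTA for min_x 0.5·||Ax − b||² + λ·||x||₁: each layer applies model-derived weights and a soft threshold. The weights here are initialized from the physics model A and left untrained; all names and sizes are illustrative simplifications, not the paper's architecture.

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(A, b, lam, n_layers=50):
    """Each ISTA iteration becomes one 'layer' with model-derived weights."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    W = A.T / L                             # input weight, initialized from A
    S = np.eye(A.shape[1]) - (A.T @ A) / L  # recurrent weight
    theta = lam / L                         # per-layer threshold
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W @ b + S @ x, theta)  # one layer
    return x

A = np.eye(3)                               # toy forward model
b = np.array([3.0, 0.5, -2.0])
x = unrolled_ista(A, b, lam=1.0)
```

In a learned variant, W, S, and theta would be trained per layer (e.g., layer-wise, as the abstract describes) instead of being fixed to their ISTA values.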
Affiliation(s)
- Pingfan Song: Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K.
- Herman Verinaz Jadan: Faculty of Electrical and Computer Engineering, Escuela Superior Politécnica del Litoral (ESPOL), Guayaquil EC090903, Ecuador
- Carmel L. Howe: Department of Chemical Physiology and Biochemistry, Oregon Health and Science University, Portland, OR 97239, USA
- Amanda J. Foust: Center for Neurotechnology, Department of Bioengineering, Imperial College London, London SW7 2AZ, U.K.
- Pier Luigi Dragotti: Department of Electronic and Electrical Engineering, Imperial College London, London SW7 2AZ, U.K.
4
Xiao D, Kedem Orange R, Opatovski N, Parizat A, Nehme E, Alalouf O, Shechtman Y. Large-FOV 3D localization microscopy by spatially variant point spread function generation. Sci Adv 2024; 10:eadj3656. PMID: 38457497; PMCID: PMC10923516; DOI: 10.1126/sciadv.adj3656. Received 06/27/2023; accepted 02/05/2024.
Abstract
Accurate characterization of the microscope's point spread function (PSF) is crucial for achieving high-performance localization microscopy (LM). Traditionally, LM assumes a spatially invariant PSF to simplify the modeling of the imaging system. For large field-of-view (FOV) imaging, however, it becomes important to account for the spatially variant nature of the PSF. Here, we propose an accurate and fast principal component analysis-based field-dependent 3D PSF generator (PPG3D) and localizer for LM. Through simulations and experimental three-dimensional (3D) single-molecule localization microscopy (SMLM), we demonstrate the effectiveness of PPG3D, enabling super-resolution imaging of mitochondria and microtubules with high fidelity over a large FOV. A comparison of PPG3D with a shift-variant PSF generator for 3D LM reveals a threefold improvement in accuracy. Moreover, PPG3D is approximately 100 times faster than existing PSF generators when used in image plane-based interpolation mode. Given its user-friendliness, we believe that PPG3D holds great potential for widespread application in SMLM and other imaging modalities.
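A minimal sketch of the idea behind a PCA-based spatially variant PSF model: decompose calibration PSFs measured across the field into a mean plus a few principal modes, so the PSF at any position reduces to a short coefficient vector. Function names, sizes, and the exact decomposition are illustrative assumptions, not the PPG3D implementation.

```python
import numpy as np

def fit_psf_pca(psf_stack, n_components=3):
    """psf_stack: (n_positions, h, w) PSFs sampled across the field of view."""
    n, h, w = psf_stack.shape
    flat = psf_stack.reshape(n, h * w)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    comps = vt[:n_components]            # principal PSF modes
    coeffs = (flat - mean) @ comps.T     # per-position mode weights
    return mean, comps, coeffs

def synth_psf(mean, comps, coeff, shape):
    """Rebuild a PSF at one field position from its mode weights."""
    return (mean + coeff @ comps).reshape(shape)

rng = np.random.default_rng(0)
stack = rng.random((4, 8, 8))            # 4 calibration positions (toy data)
mean, comps, coeffs = fit_psf_pca(stack, n_components=3)
rec = synth_psf(mean, comps, coeffs[0], (8, 8))
```

In practice the coefficients would be smoothly interpolated between calibration positions, so a PSF can be generated at any field point from just the mode images and a few scalars.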
Affiliation(s)
- Dafei Xiao: Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
- Reut Kedem Orange: Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
- Nadav Opatovski: Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
- Amit Parizat: Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
- Elias Nehme: Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel; Department of Electrical and Computer Engineering, Technion—Israel Institute of Technology, Haifa, Israel
- Onit Alalouf: Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
- Yoav Shechtman: Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel; Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel; Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA
5
Alido J, Greene J, Xue Y, Hu G, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L, Li Y. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. Opt Express 2024; 32:6241-6257. PMID: 38439332; PMCID: PMC11018337; DOI: 10.1364/oe.514072. Received 11/22/2023; revised 01/16/2024; accepted 01/16/2024.
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals caused by scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed computational miniature mesoscope and demonstrate the robustness of our deep learning algorithm on phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to one scattering length. We analyze fundamental tradeoffs, based on network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques for which experimental paired training data are lacking.
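A small sketch of the signal-to-background ratio (SBR) figure quoted above, using the common convention SBR = (signal + background) / background at the target pixels. The forward model here (a sparse target plus an additive background map) and the function names are illustrative assumptions, not the paper's simulator.

```python
import numpy as np

def simulate_measurement(target, background):
    """Fluorescent targets buried in a strong background."""
    return target + background

def sbr(measurement, target_mask, background):
    """Mean (signal + background) over mean background at target pixels."""
    return measurement[target_mask].mean() / background[target_mask].mean()

background = np.full((16, 16), 100.0)   # strong uniform background (toy)
target = np.zeros((16, 16))
target[8, 8] = 5.0                      # one dim emitter: 5% of the background
meas = simulate_measurement(target, background)
ratio = sbr(meas, target > 0, background)
```

An SBR of 1.05, as reported in the abstract, thus corresponds to a target signal only 5% above the background level, which is why descattering is essential before 3D reconstruction.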
Affiliation(s)
- Jeffrey Alido: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Joseph Greene: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Mitchell Gilmore: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Kevin J. Monk: Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Brett T. DiBenedictis: Department of Psychology and Brain Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ian G. Davison: Department of Biology, Boston University, Boston, Massachusetts 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA; Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yunzhe Li: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA; current address: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, USA
6
Wu J, Chen Y, Veeraraghavan A, Seidemann E, Robinson JT. Mesoscopic calcium imaging in a head-unrestrained male non-human primate using a lensless microscope. Nat Commun 2024; 15:1271. PMID: 38341403; PMCID: PMC10858944; DOI: 10.1038/s41467-024-45417-6. Received 11/06/2023; accepted 01/22/2024.
Abstract
Mesoscopic calcium imaging enables studies of cell-type specific neural activity over large areas. A growing body of literature suggests that neural activity can be different when animals are free to move compared to when they are restrained. Unfortunately, existing systems for imaging calcium dynamics over large areas in non-human primates (NHPs) are table-top devices that require restraint of the animal's head. Here, we demonstrate an imaging device capable of imaging mesoscale calcium activity in a head-unrestrained male non-human primate. We successfully miniaturize our system by replacing lenses with an optical mask and computational algorithms. The resulting lensless microscope can fit comfortably on an NHP, allowing its head to move freely while imaging. We are able to measure orientation-column maps over a 20 mm² field-of-view in a head-unrestrained macaque. Our work establishes mesoscopic imaging using a lensless microscope as a powerful approach for studying neural activity under more naturalistic conditions.
Affiliation(s)
- Jimin Wu: Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Yuzhi Chen: Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX 78712, USA; Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX 78712, USA
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA; Department of Computer Science, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Eyal Seidemann: Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX 78712, USA; Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX 78712, USA
- Jacob T Robinson: Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA; Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
7
Zhu X, Gu L, Li R, Chen L, Chen J, Zhou N, Ren W. MiniMounter: A low-cost miniaturized microscopy development toolkit for image quality control and enhancement. J Biophotonics 2024; 17:e202300214. PMID: 37877307; DOI: 10.1002/jbio.202300214. Received 06/06/2023; revised 08/15/2023; accepted 10/19/2023.
Abstract
Head-mounted miniaturized fluorescence microscopy (Miniscope) has emerged as a significant tool in neuroscience, particularly for behavioral studies in awake rodents. However, the challenges of image quality control and standardization persist for both Miniscope users and developers. In this study, we propose a cost-effective and comprehensive toolkit named MiniMounter. This toolkit comprises a hardware platform that offers customized grippers and four-degree-of-freedom adjustment for the Miniscope, along with software that integrates displacement control, image quality evaluation, and enhancement of 3D visualization. Our toolkit makes it feasible to accurately characterize Miniscope systems. Furthermore, MiniMounter enables auto-focusing and 3D imaging for Miniscope prototypes that natively support only 2D imaging, as demonstrated in phantom and animal experiments. Overall, MiniMounter effectively enhances image quality, reduces the time required for experimental operations and image evaluation, and consequently accelerates the development and research cycle for both users and developers within the Miniscope community.
Affiliation(s)
- Xinyi Zhu: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Liangtao Gu: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Rui Li: iHuman Institute, ShanghaiTech University, Shanghai, China; School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Liang Chen: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jingying Chen: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Ning Zhou: iHuman Institute, ShanghaiTech University, Shanghai, China
- Wuwei Ren: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
8
Alido J, Greene J, Xue Y, Hu G, Li Y, Gilmore M, Monk KJ, DiBenedictis BT, Davison IG, Tian L. Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network. arXiv 2023; arXiv:2303.12573v2 [preprint]. PMID: 36994164; PMCID: PMC10055497.
Abstract
Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals caused by scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering worsens the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a strong, heterogeneous background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to one scattering length. We analyze fundamental tradeoffs, based on network design factors and out-of-distribution data, that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques for which experimental paired training data are lacking.
Affiliation(s)
- Jeffrey Alido: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Joseph Greene: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yujia Xue: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Guorong Hu: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yunzhe Li: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Mitchell Gilmore: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Kevin J. Monk: Department of Biology, Boston University, Boston, MA 02215, USA
- Brett T. DiBenedictis: Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ian G. Davison: Department of Biology, Boston University, Boston, MA 02215, USA
- Lei Tian: Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA; Department of Psychology and Brain Sciences, Boston University, Boston, MA 02215, USA
9
Greene J, Xue Y, Alido J, Matlock A, Hu G, Kiliç K, Davison I, Tian L. Pupil engineering for extended depth-of-field imaging in a fluorescence miniscope. Neurophotonics 2023; 10:044302. PMID: 37215637; PMCID: PMC10197144; DOI: 10.1117/1.nph.10.4.044302. Received 11/15/2022; revised 04/12/2023; accepted 04/18/2023.
Abstract
Significance: Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze in-vivo neural populations but exhibit a limited depth-of-field (DoF) due to the use of high numerical aperture (NA) gradient refractive index (GRIN) objective lenses.
Aim: We present the extended depth-of-field (EDoF) miniscope, which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8× between twin foci in fixed scattering samples.
Approach: We use a genetic algorithm that considers the GRIN lens' aberration and intensity loss from scattering in a Fourier-optics forward model to optimize the DOE, which we manufacture through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 μm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight.
Results: We characterize the performance of EDoF-Miniscope on 5- and 10-μm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogation of neuronal populations in a 100-μm-thick mouse brain sample and of vessels in a whole mouse brain sample.
Conclusions: Built from off-the-shelf components and augmented by a customizable DOE, this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.
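A toy sketch of the optimization strategy named in the Approach section: a genetic algorithm evolving a binary mask to maximize a scalar merit function. Here the merit is agreement with a target bit pattern, standing in for the paper's Fourier-optics forward model of the DOE; the population size, mutation rate, and all names are illustrative assumptions.

```python
import numpy as np

def evolve_binary_mask(fitness, n_pix=32, pop=30, gens=100, p_mut=0.02, seed=0):
    """Elitist GA over binary masks: selection, one-point crossover, mutation."""
    rng = np.random.default_rng(seed)
    popu = rng.integers(0, 2, size=(pop, n_pix))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in popu])
        elite = popu[np.argsort(scores)[::-1][: pop // 2]]  # keep the best half
        cuts = rng.integers(1, n_pix, size=pop - pop // 2)
        kids = np.array([np.concatenate([elite[i % len(elite)][:c],
                                         elite[(i + 1) % len(elite)][c:]])
                         for i, c in enumerate(cuts)])      # one-point crossover
        flips = rng.random(kids.shape) < p_mut              # random bit flips
        kids = np.where(flips, 1 - kids, kids)
        popu = np.vstack([elite, kids])
    scores = np.array([fitness(m) for m in popu])
    return popu[np.argmax(scores)]

target = np.arange(32) % 2                  # toy merit: match an alternating pattern
merit = lambda m: -np.abs(m - target).sum()
best = evolve_binary_mask(merit)
```

In the DOE setting, the fitness would instead score a candidate binary phase profile by propagating it through the imaging model and measuring, e.g., on-axis contrast across the desired depth range.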
Affiliation(s)
- Joseph Greene: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Yujia Xue: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Jeffrey Alido: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Alex Matlock: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Guorong Hu: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Kivilcim Kiliç: Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States; Boston University, Neurophotonics Center, Boston, Massachusetts, United States
- Ian Davison: Boston University, Neurophotonics Center, Boston, Massachusetts, United States; Boston University, Department of Biology, Boston, Massachusetts, United States
- Lei Tian: Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States; Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States; Boston University, Neurophotonics Center, Boston, Massachusetts, United States
10
Wu J, Boominathan V, Veeraraghavan A, Robinson JT. Real-time, deep-learning aided lensless microscope. Biomed Opt Express 2023; 14:4037-4051. PMID: 37799697; PMCID: PMC10549754; DOI: 10.1364/boe.490199. Received 04/17/2023; revised 06/28/2023; accepted 06/29/2023.
Abstract
Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they struggle to simultaneously image with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, as real-time visualization is a crucial feature that helps users identify and locate the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where shift-varying deconvolution is required to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural network-based reconstruction method shown here achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. This allows us to visualize the output of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a 10 mm² FOV. The ability to reconstruct and visualize samples in real time enables a more user-friendly interaction with lensless microscopes: users can operate them much as they currently do conventional microscopes.
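The shift-varying image formation mentioned above is often approximated by K shift-invariant kernels blended by spatial weight maps, y = Σₖ wₖ · (hₖ ∗ x), a common low-rank decomposition for close-distance lensless imagers. The kernels, weights, and function names below are illustrative assumptions, not this paper's calibration.

```python
import numpy as np

def conv2_circ(img, psf):
    """Circular 2D convolution via FFT; psf is centered in its array."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def shift_varying_forward(x, psfs, weights):
    """psfs: (K, H, W) kernels; weights: (K, H, W) field maps summing to 1."""
    return sum(w * conv2_circ(x, h) for h, w in zip(psfs, weights))

H = W = 8
delta = np.zeros((H, W))
delta[H // 2, W // 2] = 1.0                      # centered delta: identity kernel
x = np.arange(H * W, dtype=float).reshape(H, W)  # toy scene
y = shift_varying_forward(x, delta[None], np.ones((1, H, W)))
```

Inverting this model iteratively is what makes close-distance reconstruction slow; a trained network amortizes that cost into a single forward pass.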
Affiliation(s)
- Jimin Wu: Department of Bioengineering, Rice University, Houston, Texas 77005, USA
- Vivek Boominathan: Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA; Department of Computer Science, Rice University, Houston, Texas 77005, USA
- Jacob T. Robinson: Department of Bioengineering, Rice University, Houston, Texas 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA; Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, Texas 77030, USA
11
Feshki M, Martel S, De Koninck Y, Gosselin B. Improving flat fluorescence microscopy in scattering tissue through deep learning strategies. Opt Express 2023; 31:23008-23026. PMID: 37475396; DOI: 10.1364/oe.489677. Received 03/09/2023; accepted 05/24/2023.
Abstract
Intravital microscopy in small animals increasingly contributes to the visualization of short- and long-term mammalian biological processes. Miniaturized fluorescence microscopy has revolutionized the observation of live animals' neural circuits, but the technology's ability to miniaturize further and thereby improve freely moving experimental settings is limited by its standard lens-based layout: typical miniature microscope designs contain a stack of heavy and bulky optical components aligned at relatively long distances. Computational lensless microscopy can overcome this limitation by replacing the lenses with a simple thin mask. Among other critical applications, the Flat Fluorescence Microscope (FFM) holds promise for real-time imaging of brain circuits in freely moving animals, but recent research reports show that image quality needs to improve, compared with imaging in clear tissue, for instance. Although promising results have been reported with mask-based fluorescence microscopes in clear tissues, the impact of light scattering in biological tissue remains a major challenge. The outstanding performance of deep learning (DL) networks in computational flat cameras and in imaging through scattering media motivates the development of DL models for FFMs. Our holistic ray-tracing and Monte Carlo FFM computational model assisted us in evaluating imaging deep in scattering media with DL techniques. We demonstrate that physics-based DL models combined with the classical reconstruction technique of the alternating direction method of multipliers (ADMM) perform fast and robust image reconstruction, particularly in scattering media. The structural similarity indices of the reconstructed images in scattering-media recordings increased by up to 20% compared with the prevalent iterative models. We also introduce and discuss the challenges of DL approaches for FFMs under physics-informed supervised and unsupervised learning.
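A compact sketch of the classical ADMM reconstruction the abstract pairs with deep learning: solve min_x 0.5·||Ax − b||² + λ·||x||₁ by alternating a quadratic solve, a soft-threshold, and a dual update. A small dense A stands in for the flat-microscope forward operator; all sizes and parameters are illustrative assumptions.

```python
import numpy as np

def soft(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_l1_deconv(A, b, lam, rho=1.0, n_iter=300):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    solve = np.linalg.inv(AtA + rho * np.eye(n))   # cache the linear solve
    for _ in range(n_iter):
        x = solve @ (Atb + rho * (z - u))          # data-fidelity step
        z = soft(x + u, lam / rho)                 # sparsity (proximal) step
        u = u + x - z                              # dual ascent
    return z

A = np.eye(3)                                      # trivial toy forward model
b = np.array([3.0, 0.5, -2.0])
x_hat = admm_l1_deconv(A, b, lam=1.0)
```

With A as the identity, the solution reduces to a soft-threshold of b, which makes the sketch easy to sanity-check; real FFM operators are large and structured, so the cached dense inverse would be replaced by FFT-based solves.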
|
12
|
Jia D, Zhang Y, Yang Q, Xue Y, Tan Y, Guo Z, Zhang M, Tian L, Cheng JX. 3D Chemical Imaging by Fluorescence-detected Mid-Infrared Photothermal Fourier Light Field Microscopy. CHEMICAL & BIOMEDICAL IMAGING 2023; 1:260-267. [PMID: 37388959 PMCID: PMC10302888 DOI: 10.1021/cbmi.3c00022] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2023] [Revised: 03/04/2023] [Accepted: 03/08/2023] [Indexed: 07/01/2023]
Abstract
Three-dimensional molecular imaging of living organisms and cells plays a significant role in modern biology. Yet, current volumetric imaging modalities are largely fluorescence-based and thus lack chemical content information. Mid-infrared photothermal microscopy as a chemical imaging technology provides infrared spectroscopic information at submicrometer spatial resolution. Here, by harnessing thermosensitive fluorescent dyes to sense the mid-infrared photothermal effect, we demonstrate 3D fluorescence-detected mid-infrared photothermal Fourier light field (FMIP-FLF) microscopy at 8 volumes per second with submicrometer spatial resolution. Protein contents in bacteria and lipid droplets in living pancreatic cancer cells are visualized. Altered lipid metabolism in drug-resistant pancreatic cancer cells is observed with the FMIP-FLF microscope.
Affiliation(s)
- Danchen Jia, Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, United States
- Yi Zhang, Department of Physics, Boston University, Boston, Massachusetts 02215, United States
- Qianwan Yang, Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, United States
- Yujia Xue, Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, United States
- Yuying Tan, Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
- Zhongyue Guo, Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
- Meng Zhang, Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
- Lei Tian, Department of Electrical and Computer Engineering and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
- Ji-Xin Cheng, Department of Electrical and Computer Engineering and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
|
13
|
Ma Y, Gao Y, Wu J, Cao L. Toward a see-through camera via AR lightguide. OPTICS LETTERS 2023; 48:2809-2812. [PMID: 37262216 DOI: 10.1364/ol.492370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Accepted: 04/29/2023] [Indexed: 06/03/2023]
Abstract
As the foundation of virtual content generation, cameras are crucial for augmented reality (AR) applications, yet their integration with transparent displays has remained a challenge. Prior efforts to develop see-through cameras have struggled to achieve high resolution and seamless integration with AR displays. In this work, we present LightguideCam, a compact and flexible see-through camera based on an AR lightguide. To address the overlapping artifacts in measurement, we present a compressive sensing algorithm based on an equivalent imaging model that minimizes computational consumption and calibration complexity. We validate our design using a commercial AR lightguide and demonstrate a field of view of 23.1° and an angular resolution of 0.1° in the prototype. Our LightguideCam has great potential as a plug-and-play extensional imaging component in AR head-mounted displays, with promising applications for eye-gaze tracking, eye-position perspective photography, and improved human-computer interaction devices, such as full-screen mobile phones.
|
14
|
Matlock A, Zhu J, Tian L. Multiple-scattering simulator-trained neural network for intensity diffraction tomography. OPTICS EXPRESS 2023; 31:4094-4107. [PMID: 36785385 DOI: 10.1364/oe.477396] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 12/29/2022] [Indexed: 06/18/2023]
Abstract
Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network's performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network's robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network's generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
|
15
|
Fu Q, Yan DM, Heidrich W. Diffractive lensless imaging with optimized Voronoi-Fresnel phase. OPTICS EXPRESS 2022; 30:45807-45823. [PMID: 36522977 DOI: 10.1364/oe.475004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 11/03/2022] [Indexed: 06/17/2023]
Abstract
Lensless cameras are a class of imaging devices that shrink the physical dimensions to the very close vicinity of the image sensor by replacing conventional compound lenses with integrated flat optics and computational algorithms. Here we report a diffractive lensless camera with a spatially coded Voronoi-Fresnel phase that achieves superior image quality. We propose a design principle of maximizing the information acquired by the optics to facilitate computational reconstruction. By introducing an easy-to-optimize Fourier-domain metric, the Modulation Transfer Function volume (MTFv), which is related to the Strehl ratio, we devise an optimization framework to guide the design of the diffractive optical element. The resulting Voronoi-Fresnel phase features an irregular array of quasi-centroidal Voronoi cells, each containing a base first-order Fresnel phase function. We demonstrate and verify the imaging performance for photography applications with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor under various illumination conditions. Results show that the proposed design outperforms existing lensless cameras and could benefit the development of compact imaging systems that work in extreme physical conditions.
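The idea of scoring a PSF by the volume under its modulation transfer function can be sketched as follows. This is an assumed simplified form of an MTFv-style metric (the paper defines its own normalization); the function name, cutoff parameter, and test PSFs are illustrative.

```python
import numpy as np

def mtf_volume(psf, cutoff=0.5):
    """Simplified MTFv-style metric (assumed form): mean of the
    modulation transfer function over spatial frequencies up to
    `cutoff` (in cycles/pixel). A PSF that preserves more frequency
    content scores higher and is easier to deconvolve.
    """
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))  # unit DC gain
    mtf = np.abs(otf)                 # MTF = magnitude of the OTF
    n = psf.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n))
    fy, fx = np.meshgrid(f, f, indexing="ij")
    mask = np.hypot(fx, fy) <= cutoff
    return mtf[mask].sum() / mask.sum()

if __name__ == "__main__":
    n = 32
    delta = np.zeros((n, n)); delta[0, 0] = 1.0   # ideal point PSF
    x = np.arange(n)
    g = np.exp(-((x - n // 2) ** 2) / 8.0)
    blur = np.outer(g, g)                         # Gaussian-blurred PSF
    print("delta PSF:", mtf_volume(delta))        # ideal: MTF = 1 everywhere
    print("blurred PSF:", mtf_volume(blur))       # lower: high frequencies lost
```

A design loop would perturb the phase element, simulate the resulting PSF, and keep changes that raise this score, which is the role the MTFv plays in guiding the Voronoi-Fresnel optimization.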
|