1
Chu CH, Chia YH, Hsu HC, Vyas S, Tsai CM, Yamaguchi T, Tanaka T, Chen HW, Luo Y, Yang PC, Tsai DP. Intelligent Phase Contrast Meta-Microscope System. Nano Lett 2023; 23:11630-11637. [PMID: 38038680] [DOI: 10.1021/acs.nanolett.3c03484]
Abstract
Phase contrast imaging techniques enable the visualization of disparities in refractive index among materials. However, these techniques usually come at a cost: bulky, inflexible, and complicated configurations. Here, we propose and experimentally demonstrate an ultracompact meta-microscope, a novel imaging platform designed to accomplish both optical and digital phase contrast imaging. The optical phase contrast imaging system is composed of a pair of metalenses and an intermediate spiral phase metasurface located at the Fourier plane. The system's performance in generating edge-enhanced images is validated by imaging a variety of human cells, including the lung cell lines BEAS-2B, CLY1, and H1299, among other cell types. Additionally, we integrate the ResNet deep learning model into the meta-microscope to transform bright-field images into edge-enhanced images with high contrast accuracy. This technology promises to aid the development of innovative miniature optical systems for biomedical and clinical applications.
Affiliation(s)
- Cheng Hung Chu
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
- Yu-Hsin Chia
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Hung-Chuan Hsu
- Department of Mechanical Engineering, National Taiwan University, Taipei 10617, Taiwan
- Sunil Vyas
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Chen-Ming Tsai
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Takeshi Yamaguchi
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Takuo Tanaka
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
- Huei-Wen Chen
- Graduate Institute of Toxicology, College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Genome and Systems Biology Degree Program, National Taiwan University and Academia Sinica, Taipei 100, Taiwan
- Yuan Luo
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan, R.O.C
- Pan-Chyr Yang
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan, R.O.C
- Department of Internal Medicine, National Taiwan University Hospital, National Taiwan University, Taipei 10002, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei 11529, Taiwan
- Din Ping Tsai
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong
- Centre for Biosystems, Neuroscience, and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong
2
Jiang T, Gong H, Yuan J. Whole-brain Optical Imaging: A Powerful Tool for Precise Brain Mapping at the Mesoscopic Level. Neurosci Bull 2023; 39:1840-1858. [PMID: 37715920] [PMCID: PMC10661546] [DOI: 10.1007/s12264-023-01112-y]
Abstract
The mammalian brain is a highly complex network consisting of millions to billions of densely interconnected neurons. Precise dissection of neural circuits at the mesoscopic level can provide important structural information for understanding the brain. Optical approaches can attain submicron lateral resolution and achieve "optical sectioning" by a variety of means, giving them a natural advantage for observing neural circuits at the mesoscopic level. Automated whole-brain optical imaging methods based on tissue clearing or histological sectioning overcome the depth limitation of optical imaging in biological tissues and can provide fine structural information across large tissue volumes. Combined with various fluorescent labeling techniques, whole-brain optical imaging methods have shown great potential in the brain-wide quantitative profiling of cells, circuits, and blood vessels. In this review, we summarize the principles and implementations of various whole-brain optical imaging methods and offer perspectives on their future development.
Affiliation(s)
- Tao Jiang
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Hui Gong
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jing Yuan
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
3
Ding Z, Zhao J, Luo T, Lu B, Zhang X, Chen S, Li A, Jia X, Zhang J, Chen W, Chen J, Sun Q, Li X, Gong H, Yuan J. Multicolor high-resolution whole-brain imaging for acquiring and comparing the brain-wide distributions of type-specific and projection-specific neurons with anatomical annotation in the same brain. Front Neurosci 2022; 16:1033880. [PMID: 36278018] [PMCID: PMC9583816] [DOI: 10.3389/fnins.2022.1033880]
Abstract
Visualizing the relationships and interactions among different biological components in the whole brain is crucial to our understanding of brain structure and function. However, an automatic multicolor whole-brain imaging technique is still lacking. Here, we developed multicolor wide-field large-volume tomography (multicolor WVT) to simultaneously acquire fluorescent signals in the blue, green, and red channels across the whole brain. To facilitate the segmentation of brain regions and anatomical annotation, we used 4′,6-diamidino-2-phenylindole (DAPI) to reveal cytoarchitecture through real-time counterstaining. We optimized the imaging planes and modes of the three channels to overcome the axial chromatic aberration of the illumination path and to avoid crosstalk from DAPI into the green channel without modifying the system configuration. We also developed an automatic contour-recognition algorithm based on the DAPI-stained cytoarchitecture to shorten data acquisition time and reduce data redundancy. To demonstrate the potential of our system for deciphering the relationships among the multiple components of neural circuits, we acquired and quantified the brain-wide distribution of cholinergic neurons and the inputs to the ventral caudoputamen (CP), with anatomical annotation, in the same brain. We further identified the cholinergic subset of upstream neurons projecting to the CP through triple-color colocalization analysis and quantified its proportion in the two brain-wide distributions; both accounted for 0.22%, implying that the CP might be modulated by non-cholinergic neurons. Our method provides a new research tool for studying different biological components in the same organ and could facilitate understanding of the processing mechanisms of neural circuits and other biological activities.
Affiliation(s)
- Zhangheng Ding
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Jiangjiang Zhao
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Tianpeng Luo
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Bolin Lu
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Xiaoyu Zhang
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Siqi Chen
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Anan Li
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Xueyan Jia
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Jianmin Zhang
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Wu Chen
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Jianwei Chen
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Qingtao Sun
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Xiangning Li
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Hui Gong
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
- Jing Yuan
- Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Research Unit of Multimodal Cross Scale Neural Signal Detection and Imaging, HUST-Suzhou Institute for Brainsmatics, Chinese Academy of Medical Sciences, Suzhou, China
4
Yolalmaz A, Yüce E. Comprehensive deep learning model for 3D color holography. Sci Rep 2022; 12:2487. [PMID: 35169161] [PMCID: PMC8847588] [DOI: 10.1038/s41598-022-06190-y]
Abstract
Holography is a vital tool in applications ranging from microscopy, solar energy, imaging, and displays to information encryption. With current algorithms, generating a holographic image and reconstructing object/hologram information from one are time-consuming processes. Versatile, fast, yet accurate methodologies are required to compute holograms that perform color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image, so that optical holograms can be widely adopted. Here, we focus on the design of optical holograms for generating holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that multiplex color holographic image planes by tuning the holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring phase and amplitude information from the intensity image. We show that the reconstructed objects/holograms are in excellent agreement with the ground-truth images. The CHoloNet does not need to reconstruct object/hologram information iteratively, whereas conventional object/hologram recovery methods rely on multiple holographic images at various observation planes along with iterative algorithms. We openly share the fast and efficient framework we developed in order to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and holographic image generation will speed up the wide-area implementation of optical holography in microscopy, data encryption, and communication technologies.
Affiliation(s)
- Alim Yolalmaz
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey
- Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
- Emre Yüce
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey
- Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
5
Newmaster KT, Kronman FA, Wu YT, Kim Y. Seeing the Forest and Its Trees Together: Implementing 3D Light Microscopy Pipelines for Cell Type Mapping in the Mouse Brain. Front Neuroanat 2022; 15:787601. [PMID: 35095432] [PMCID: PMC8794814] [DOI: 10.3389/fnana.2021.787601]
Abstract
The brain is composed of diverse neuronal and non-neuronal cell types with complex regional connectivity patterns that create the anatomical infrastructure underlying cognition. Remarkable advances in neuroscience techniques enable labeling and imaging of these individual cell types and their interactions throughout intact mammalian brains at cellular resolution, allowing neuroscientists to examine microscopic details within macroscopic brain circuits. Nevertheless, implementing these tools is fraught with technical and analytical challenges and demands high-level data analysis. Here we review key technical considerations for implementing a brain mapping pipeline, using the mouse brain as the primary model system. Specifically, we provide practical details for choosing methods, including cell-type-specific labeling, sample preparation (e.g., tissue clearing), microscopy modalities, image processing, and data analysis (e.g., image registration to standard atlases). We also highlight the need to develop better 3D atlases with standardized anatomical labels and nomenclature across species and developmental time points, both to extend mapping to other species, including humans, and to facilitate data sharing, confederation, and integrative analysis. In summary, this review provides key elements and currently available resources to consider while developing and implementing high-resolution mapping methods.
Affiliation(s)
- Kyra T Newmaster
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
- Fae A Kronman
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
- Yuan-Ting Wu
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
- Yongsoo Kim
- Department of Neural and Behavioral Sciences, The Pennsylvania State University, Hershey, PA, United States
6
Li B, Tan S, Dong J, Lian X, Zhang Y, Ji X, Veeraraghavan A. Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning. Biomed Opt Express 2022; 13:284-299. [PMID: 35154871] [PMCID: PMC8803017] [DOI: 10.1364/boe.444488]
Abstract
Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to "teach" a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images at different focus settings with a wide-field microscope and to use a 3D generative adversarial network (GAN) to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network on wide-field-confocal stack pairs, it can reliably and accurately reconstruct 3D volumetric images that rival confocal images in lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability of the reconstructions, and high spatial resolution even when imaging thick (∼40 μm), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
Affiliation(s)
- Bowen Li
- Department of Automation & BNRist, Tsinghua University, Beijing, China
- Shiyu Tan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Jiuyang Dong
- Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Xiaocong Lian
- Department of Automation & BNRist, Tsinghua University, Beijing, China
- Yongbing Zhang
- Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Xiangyang Ji
- Department of Automation & BNRist, Tsinghua University, Beijing, China
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
7
Zhuge H, Summa B, Hamm J, Brown JQ. Deep learning 2D and 3D optical sectioning microscopy using cross-modality Pix2Pix cGAN image translation. Biomed Opt Express 2021; 12:7526-7543. [PMID: 35003850] [PMCID: PMC8713683] [DOI: 10.1364/boe.439894]
Abstract
Structured illumination microscopy (SIM) reconstructs optically sectioned images of a sample from multiple spatially patterned wide-field images, whereas traditional single non-patterned wide-field images are less expensive to obtain because they do not require the generation of specialized illumination patterns. In this work, we translated wide-field fluorescence microscopy images into optically sectioned SIM images using a Pix2Pix conditional generative adversarial network (cGAN). Our model demonstrates 2D cross-modality image translation from wide-field images to optical sections, and further shows the potential to recover 3D optically sectioned volumes from wide-field image stacks. The utility of the model was tested on a variety of samples, including fluorescent beads and fresh human tissue samples.
Affiliation(s)
- Huimin Zhuge
- Department of Biomedical Engineering, Tulane University, 500 Lindy Boggs Center, New Orleans, LA 70118, USA
- Brian Summa
- Department of Computer Science, Tulane University, New Orleans, LA 70118, USA
- Jihun Hamm
- Department of Computer Science, Tulane University, New Orleans, LA 70118, USA
- J. Quincy Brown
- Department of Biomedical Engineering, Tulane University, 500 Lindy Boggs Center, New Orleans, LA 70118, USA
8
Yuval O, Iosilevskii Y, Meledin A, Podbilewicz B, Shemesh T. Neuron tracing and quantitative analyses of dendritic architecture reveal symmetrical three-way-junctions and phenotypes of git-1 in C. elegans. PLoS Comput Biol 2021; 17:e1009185. [PMID: 34280180] [PMCID: PMC8321406] [DOI: 10.1371/journal.pcbi.1009185]
Abstract
Complex dendritic trees are a distinctive feature of neurons. Alterations to dendritic morphology are associated with developmental, behavioral, and neurodegenerative changes. The highly arborized PVD neuron of C. elegans serves as a model for studying dendritic patterning; however, quantitative, objective, and automated analyses of PVD morphology have been missing. Here, we present a method for neuronal feature extraction based on deep-learning and fitting algorithms. The extracted neuronal architecture is represented by a database of structural elements for abstracted analysis. We obtain excellent automatic tracing of PVD trees and find that dendritic junctions are unevenly distributed. Surprisingly, these junctions are three-way symmetrical on average, while dendritic processes are arranged orthogonally. We quantify the effect of a mutation in git-1, a regulator of dendritic spine formation, on PVD morphology and discover a localized reduction in junctions. Our findings shed new light on PVD architecture, demonstrate the effectiveness of our objective analyses of dendritic morphology, and suggest molecular control mechanisms.
Affiliation(s)
- Omer Yuval
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
- School of Computing, Faculty of Engineering and Physical Sciences, University of Leeds, Leeds, United Kingdom
- Yael Iosilevskii
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
- Anna Meledin
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel
- Tom Shemesh
- Faculty of Biology, Technion–Israel Institute of Technology, Haifa, Israel