1. Rehman A, Zhovmer A, Sato R, Mukouyama YS, Chen J, Rissone A, Puertollano R, Liu J, Vishwasrao HD, Shroff H, Combs CA, Xue H. Convolutional neural network transformer (CNNT) for fluorescence microscopy image denoising with improved generalization and fast adaptation. Sci Rep 2024; 14:18184. PMID: 39107416; PMCID: PMC11303381; DOI: 10.1038/s41598-024-68918-2.
Abstract
Deep neural networks can improve the quality of fluorescence microscopy images. Previous methods, based on convolutional neural networks (CNNs), require time-consuming training of individual models for each experiment, impairing their applicability and generalization. In this study, we propose a novel imaging-transformer-based model, the Convolutional Neural Network Transformer (CNNT), that outperforms CNN-based networks for image denoising. We train a general CNNT backbone model from pairwise high/low signal-to-noise ratio (SNR) image volumes gathered from a single type of fluorescence microscope, an instant structured illumination microscope. Fast adaptation to new microscopes is achieved by fine-tuning the backbone on only 5-10 image volume pairs per new experiment. Results show that the CNNT backbone and fine-tuning scheme significantly reduce training time and improve image quality, outperforming models trained using only CNNs, such as 3D-RCAN and Noise2Fast. We show three examples of the efficacy of this approach in wide-field, two-photon, and confocal fluorescence microscopy.
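The pretrain-then-adapt recipe the abstract describes can be illustrated with a deliberately minimal sketch (not the authors' code, and far simpler than a transformer): a fixed "backbone" denoiser is reused across experiments, and only a small calibration parameter is re-fit by least squares from a handful of high/low-SNR pairs.

```python
def moving_average(signal, k=3):
    """A fixed 'backbone' denoiser: simple smoothing, reused everywhere."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def fit_gain(pairs):
    """Fast adaptation: least-squares scalar mapping the backbone output
    onto the high-SNR target, fitted from just a few signal pairs."""
    num = den = 0.0
    for noisy, clean in pairs:
        smoothed = moving_average(noisy)
        num += sum(s * c for s, c in zip(smoothed, clean))
        den += sum(s * s for s in smoothed)
    return num / den

# A hypothetical new "microscope" records at half the brightness of the
# pretraining data; five small pairs suffice to recover the 2x gain.
pairs = [([v / 2 for v in [4.0, 4.0, 4.0, 4.0]], [4.0, 4.0, 4.0, 4.0])
         for _ in range(5)]
gain = fit_gain(pairs)  # 2.0
```

The design point mirrored here is that adaptation touches far fewer parameters than pretraining, which is why a handful of pairs is enough.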
Affiliation(s)
- Azaan Rehman
  - Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
- Alexander Zhovmer
  - Center for Biologics Evaluation and Research, U.S. Food and Drug Administration (FDA), Silver Spring, MD, 20903, USA
- Ryo Sato
  - Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
- Yoh-Suke Mukouyama
  - Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
- Jiji Chen
  - Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
- Alberto Rissone
  - Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
- Rosa Puertollano
  - Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
- Jiamin Liu
  - Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
- Hari Shroff
  - Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Christian A Combs
  - Light Microscopy Core, National Heart, Lung, and Blood Institute, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD, 20892, USA
- Hui Xue
  - Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
  - Health Futures, Microsoft Research, Redmond, Washington, 98052, USA
2. Umney O, Leng J, Canettieri G, Galdo NARD, Slaney H, Quirke P, Peckham M, Curd A. Annotation and automated segmentation of single-molecule localisation microscopy data. J Microsc 2024. PMID: 39092628; DOI: 10.1111/jmi.13349.
Abstract
Single-molecule localisation microscopy (SMLM) is becoming a widely used technique in cell biology. After processing the images, the molecular localisations are typically stored in a table as xy (or xyz) coordinates, together with additional information such as the number of photons detected. This set of coordinates can be used to generate an image visualising the molecular distribution, for example a 2D or 3D histogram of localisations. Many methods have been devised to analyse SMLM data, among which cluster analysis of the localisations is popular. However, it can be useful to first segment the data, to extract the localisations in a specific region of a cell or in individual cells, prior to downstream analysis. Here we describe a pipeline for annotating localisations in an SMLM dataset, in which we compared membrane segmentation approaches, including Otsu thresholding and machine learning models, followed by cell segmentation. As a test dataset, we used SMLM data derived from dSTORM images of sectioned cell pellets, stained for the membrane proteins EGFR (epidermal growth factor receptor) and EREG (epiregulin). We found that a Cellpose model retrained on our data performed best in the membrane segmentation task, allowing us to perform downstream cluster analysis of membrane versus cell-interior localisations. We anticipate this pipeline will be generally useful for SMLM analysis.
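Of the segmentation baselines compared in this pipeline, Otsu thresholding is the simplest to state: pick the grey level that maximizes the between-class variance of the intensity histogram. A minimal stand-alone sketch (illustrative, not the paper's implementation; here pixels less than or equal to the returned value count as background):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold t maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]              # pixels at or below t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity populations: the threshold lands between them.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

Because it only looks at the histogram, Otsu is fast but ignores spatial context, which is one reason learned segmenters such as Cellpose can outperform it on membrane images.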
Affiliation(s)
- Oliver Umney
  - Faculty of Engineering and Physical Sciences, School of Computing, University of Leeds, Leeds, UK
- Joanna Leng
  - Faculty of Engineering and Physical Sciences, School of Computing, University of Leeds, Leeds, UK
- Gianluca Canettieri
  - Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
  - Institute Pasteur Italy - Cenci Bolognetti Foundation, Sapienza University of Rome, Rome, Italy
- Natalia A Riobo-Del Galdo
  - Faculty of Biological Sciences, School of Molecular and Cellular Biology, University of Leeds, Leeds, UK
  - School of Medicine, Leeds Institute for Medical Research, University of Leeds, Leeds, UK
  - Astbury Centre for Structural and Molecular Biology, University of Leeds, Leeds, UK
- Hayley Slaney
  - Division of Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Philip Quirke
  - Division of Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Michelle Peckham
  - Faculty of Biological Sciences, School of Molecular and Cellular Biology, University of Leeds, Leeds, UK
  - Astbury Centre for Structural and Molecular Biology, University of Leeds, Leeds, UK
- Alistair Curd
  - Division of Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
  - Faculty of Engineering and Physical Sciences, School of Physics, University of Leeds, Leeds, UK
3. Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. PMID: 38838549; DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is computational cross-modality image translation in which the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted-light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of in silico labeling being used to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
  - Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
4. Zhang G, Hu X, Ren X, Zhou B, Li B, Li Y, Luo J, Liu X, Ta D. In vivo ultrasound localization microscopy for high-density microbubbles. Ultrasonics 2024; 143:107410. PMID: 39084108; DOI: 10.1016/j.ultras.2024.107410.
Abstract
Ultrasound localization microscopy (ULM) surpasses the constraints imposed by acoustic diffraction, achieving sub-wavelength-resolution visualization of microvasculature through the precise localization of minute microbubbles (MBs). Nonetheless, the analysis of densely populated regions with overlapping MB point spread responses introduces significant localization errors, limiting the technique to low-concentration conditions and raising a trade-off between localization efficiency and MB density. In this work, we present a new deep learning framework that combines Transformer and U-Net architectures, termed ULM-TransUNet. As a non-linear model, it is able to learn the complex data patterns of overlapping MBs in dense conditions for accurate localization. To evaluate the performance of ULM-TransUNet, a series of numerical simulations and in vivo experiments were carried out. Numerical simulation results indicate that ULM-TransUNet achieves high-quality ULM imaging, with improvements of 21.93% in detection rate, 17.36% in detection precision, and 20.53% in detection sensitivity compared to a previous state-of-the-art deep learning method (ULM-UNet). In the in vivo experiments, ULM-TransUNet achieves the highest spatial resolution (9.4 μm) and rapid inference speed (26.04 ms/frame). Furthermore, it consistently detects more small vessels and resolves closely spaced vessels more effectively. These outcomes imply that ULM-TransUNet can enhance microvascular imaging performance under high-density MB conditions.
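The localization step at the heart of ULM can be made concrete with the classical, non-deep-learning baseline that learned methods like ULM-TransUNet aim to beat in dense conditions: find the brightest sample of an isolated microbubble response, then refine it to sub-pixel precision by fitting a parabola through the three samples around the peak. A minimal 1D sketch (illustrative only):

```python
def subpixel_peak(profile):
    """Refine the brightest sample to sub-pixel precision with a
    three-point parabolic fit (exact when the peak is locally parabolic)."""
    i = max(range(len(profile)), key=profile.__getitem__)
    if i == 0 or i == len(profile) - 1:
        return float(i)          # peak at the border: no neighbours to fit
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)
    return i + 0.5 * (y0 - y2) / denom

# Intensity profile of a bubble response whose true peak lies at x = 3.25.
profile = [10 - (x - 3.25) ** 2 for x in range(7)]
pos = subpixel_peak(profile)     # 3.25
```

This estimator assumes one isolated peak; when two bubble responses overlap, the parabola straddles both and the estimate is biased, which is exactly the failure mode that motivates learned localizers.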
Affiliation(s)
- Gaobo Zhang
  - Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Xing Hu
  - Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai 201907, China
- Xuan Ren
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Boqian Zhou
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Boyi Li
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Yifang Li
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Jianwen Luo
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xin Liu
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
  - State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200032, China
- Dean Ta
  - Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
  - Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
5. Liu J, Li Y, Chen T, Zhang F, Xu F. Machine Learning for Single-Molecule Localization Microscopy: From Data Analysis to Quantification. Anal Chem 2024; 96:11103-11114. PMID: 38946062; DOI: 10.1021/acs.analchem.3c05857.
Abstract
Single-molecule localization microscopy (SMLM) is a versatile tool for realizing nanoscale imaging with visible light and providing unprecedented opportunities to observe bioprocesses. The integration of machine learning with SMLM enhances data analysis by improving efficiency and accuracy. This tutorial aims to provide a comprehensive overview of the data analysis process and theoretical aspects of SMLM, while also highlighting the typical applications of machine learning in this field. By leveraging advanced analytical techniques, SMLM is becoming a powerful quantitative analysis tool for biological research.
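A core quantity in any SMLM pipeline, learned or classical, is the localization itself: estimating an emitter's position from its diffraction-limited camera spot. The simplest estimator is the intensity-weighted centroid, sketched below for illustration (real pipelines typically fit a Gaussian PSF model instead, which is more accurate in the presence of noise):

```python
def centroid_2d(img):
    """Intensity-weighted centre of mass of a small camera ROI.
    Returns (row, col) in pixel units."""
    total = sum(v for row in img for v in row)
    cy = sum(r * v for r, row in enumerate(img) for v in row) / total
    cx = sum(c * v for row in img for c, v in enumerate(row)) / total
    return cy, cx

# Symmetric spot centred on the middle pixel of a 3x3 ROI.
spot = [[1, 2, 1],
        [2, 4, 2],
        [1, 2, 1]]
cy, cx = centroid_2d(spot)  # (1.0, 1.0)
```

The machine learning methods this tutorial surveys effectively replace such hand-crafted estimators with networks trained to emit localizations directly from raw frames.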
Affiliation(s)
- Jianli Liu
  - School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Yumian Li
  - Advanced Research Institute of Multidisciplinary Science, Beijing Institute of Technology, Beijing 100081, China
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tailong Chen
  - Advanced Research Institute of Multidisciplinary Science, Beijing Institute of Technology, Beijing 100081, China
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Fa Zhang
  - School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Fan Xu
  - Advanced Research Institute of Multidisciplinary Science, Beijing Institute of Technology, Beijing 100081, China
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
6. Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. PMID: 38378991; DOI: 10.1038/s41580-024-00702-6.
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intracellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what can be captured in living systems. In this Review, we discuss these computational methods, focusing on artificial-intelligence-based approaches that can be layered on top of commonly used existing microscopies, as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
  - Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
  - Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
  - Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
  - Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
7. Lu C, Chen K, Qiu H, Chen X, Chen G, Qi X, Jiang H. Diffusion-based deep learning method for augmenting ultrastructural imaging and volume electron microscopy. Nat Commun 2024; 15:4677. PMID: 38824146; PMCID: PMC11144272; DOI: 10.1038/s41467-024-49125-z.
Abstract
Electron microscopy (EM) revolutionized the visualization of cellular ultrastructure, and volume EM (vEM) has further broadened its three-dimensional nanoscale imaging capacity. However, intrinsic trade-offs between imaging speed and quality restrict the attainable imaging area and volume, and isotropic imaging with vEM for large biological volumes remains unachievable. Here, we developed EMDiffuse, a suite of algorithms designed to enhance EM and vEM capabilities by leveraging cutting-edge image-generation diffusion models. EMDiffuse generates realistic predictions with high-resolution ultrastructural details and exhibits robust transferability, requiring only one pair of 3-megapixel images for fine-tuning in denoising and super-resolution tasks. EMDiffuse also demonstrated proficiency in the isotropic vEM reconstruction task, generating isotropic volumes even in the absence of isotropic training data. We demonstrated the robustness of EMDiffuse by generating isotropic volumes from seven public datasets obtained with different vEM techniques and instruments; the generated volumes enable accurate three-dimensional nanoscale ultrastructure analysis. EMDiffuse also assesses the reliability of its own predictions. We envision EMDiffuse paving the way for investigations of the intricate subcellular nanoscale ultrastructure within large volumes of biological systems.
Affiliation(s)
- Chixiang Lu
  - Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Kai Chen
  - Department of Chemistry, The University of Hong Kong, Hong Kong, China
  - School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
- Heng Qiu
  - Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Xiaojun Chen
  - School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
- Gu Chen
  - Department of Chemistry, The University of Hong Kong, Hong Kong, China
- Xiaojuan Qi
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Haibo Jiang
  - Department of Chemistry, The University of Hong Kong, Hong Kong, China
8. Gaire SK, Daneshkhah A, Flowerday E, Gong R, Frederick J, Backman V. Deep learning-based spectroscopic single-molecule localization microscopy. J Biomed Opt 2024; 29:066501. PMID: 38799979; PMCID: PMC11122423; DOI: 10.1117/1.jbo.29.6.066501.
Abstract
Significance: Spectroscopic single-molecule localization microscopy (sSMLM) takes advantage of nanoscopy and spectroscopy, enabling sub-10 nm resolution as well as simultaneous multicolor imaging of multi-labeled samples. Reconstruction of raw sSMLM data using deep learning is a promising approach for visualizing subcellular structures at the nanoscale.
Aim: Develop a novel computational approach leveraging deep learning to reconstruct both label-free and fluorescence-labeled sSMLM imaging data.
Approach: We developed a two-network-model-based deep learning algorithm, termed DsSMLM, to reconstruct sSMLM data. The effectiveness of DsSMLM was assessed in imaging experiments on diverse samples: label-free single-stranded DNA (ssDNA) fiber, fluorescence-labeled histone markers in COS-7 and U2OS cells, and simultaneous multicolor imaging of a synthetic DNA origami nanoruler.
Results: For label-free imaging, a spatial resolution of 6.22 nm was achieved on ssDNA fiber; for fluorescence-labeled imaging, DsSMLM revealed the distribution of chromatin-rich and chromatin-poor regions defined by histone markers in the cell nucleus and also offered simultaneous multicolor imaging of nanoruler samples, distinguishing two dyes labeling three emitting points separated by 40 nm. With DsSMLM, we observed enhanced spectral profiles, with 8.8% higher localization detection for single-color imaging and up to 5.05% higher localization detection for simultaneous two-color imaging.
Conclusions: We demonstrate the feasibility of deep learning-based reconstruction applicable to label-free and fluorescence-labeled sSMLM imaging data. We anticipate our technique will be a valuable tool for high-quality super-resolution imaging, for a deeper understanding of the photophysics of DNA molecules, and for investigating multiple nanoscopic cellular structures and their interactions.
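The spectroscopic dimension that distinguishes sSMLM from ordinary SMLM can be illustrated simply: each localization carries an emission spectrum, and two dyes can be told apart by the centroid wavelength of that spectrum. A toy sketch of such a classifier (the wavelengths, counts, and cutoff below are invented for illustration; DsSMLM itself learns this mapping rather than thresholding):

```python
def spectral_centroid(wavelengths, counts):
    """Photon-weighted mean emission wavelength of one localization."""
    total = sum(counts)
    return sum(w * c for w, c in zip(wavelengths, counts)) / total

def classify(wavelengths, counts, cutoff=600.0):
    """Assign a localization to dye 'A' (below cutoff) or dye 'B'."""
    return "A" if spectral_centroid(wavelengths, counts) < cutoff else "B"

wl = [560, 580, 600, 620, 640]   # spectral-channel wavelengths, nm
dye_a = [5, 9, 4, 1, 0]          # spectrum peaking near 580 nm
dye_b = [0, 1, 4, 9, 5]          # spectrum peaking near 620 nm
```

A fixed cutoff fails when spectra overlap heavily or photon counts are low, which is the regime where learned reconstruction pays off.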
Affiliation(s)
- Sunil Kumar Gaire
  - North Carolina Agricultural and Technical State University, Department of Electrical and Computer Engineering, Greensboro, North Carolina, United States
- Ali Daneshkhah
  - Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Ethan Flowerday
  - University of Tulsa, Department of Computer Science and Cyber Security, Tulsa, Oklahoma, United States
- Ruyi Gong
  - Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Jane Frederick
  - Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
- Vadim Backman
  - Northwestern University, Department of Biomedical Engineering, Evanston, Illinois, United States
9. Liu S, Weng X, Gao X, Xu X, Zhou L. A Residual Dense Attention Generative Adversarial Network for Microscopic Image Super-Resolution. Sensors (Basel) 2024; 24:3560. PMID: 38894350; PMCID: PMC11175225; DOI: 10.3390/s24113560.
Abstract
With the development of deep learning, super-resolution (SR) reconstruction of microscopic images has improved significantly. However, the scarcity of microscopic images for training, the underutilization of hierarchical features in the original low-resolution (LR) images, and the high-frequency noise unrelated to image structure generated during reconstruction remain challenges in the single-image super-resolution (SISR) field. Facing these issues, we first collected sufficient microscopic images through Motic, a company engaged in the design and production of optical and digital microscopes, to establish a dataset. Secondly, we proposed a Residual Dense Attention Generative Adversarial Network (RDAGAN). The network comprises a generator, an image discriminator, and a feature discriminator. The generator includes a Residual Dense Block (RDB) and a Convolutional Block Attention Module (CBAM), focusing on extracting the hierarchical features of the original LR image. Simultaneously, the added feature discriminator enables the network to generate high-frequency features pertinent to the image's structure. Finally, we conducted an experimental analysis comparing our model with six classic models. Compared with the best of these, our model improved PSNR and SSIM by about 1.5 dB and 0.2, respectively.
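Of the two figures of merit quoted, PSNR is the simpler: it is a direct function of the mean squared error against the reference image, so a 1.5 dB gain corresponds to roughly a 30% reduction in MSE. A quick sketch of the standard definition used in such comparisons:

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 25.5 grey levels (MSE = peak^2 / 100) gives exactly 20 dB.
value = psnr([0.0] * 4, [25.5] * 4)
```

Because PSNR is purely pixel-wise, GAN-based SR papers pair it with SSIM, which also rewards preserved local structure.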
Affiliation(s)
- Sanya Liu
  - Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
- Xiao Weng
  - Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
- Xingen Gao
  - School of Opto-Electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Xiaoxin Xu
  - Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
- Lin Zhou
  - Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
10. Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. PMID: 38755148; PMCID: PMC11099110; DOI: 10.1038/s41467-024-48575-9.
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and even impractical to acquire given the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner and without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, enabling multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
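As a reference point for what "deconvolution" means here, the classical analytical baseline is the Richardson-Lucy iteration: repeatedly re-blur the current estimate, compare it with the observation, and apply a multiplicative correction. A minimal 1D sketch under an assumed known blur kernel (the paper's networks are unsupervised learned deconvolvers, not this algorithm):

```python
def convolve(x, kernel):
    """Direct 1D correlation with zero padding at the borders
    (identical to convolution for the symmetric kernels used here)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(x):
                s += w * x[idx]
        out.append(s)
    return out

def richardson_lucy(observed, kernel, iterations=50):
    """Multiplicative update: estimate <- estimate * K^T(observed / K(estimate))."""
    estimate = [1.0] * len(observed)
    flipped = kernel[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, kernel)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A point source blurred by a small kernel is progressively re-sharpened.
kernel = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
observed = convolve(truth, kernel)          # peak flattened to 0.5
restored = richardson_lucy(observed, kernel)
```

Richardson-Lucy needs the kernel (PSF) to be known and amplifies noise with iteration count; sidestepping both constraints is a large part of what learned approaches offer.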
Affiliation(s)
- Chang Qiao
  - Department of Automation, Tsinghua University, 100084, Beijing, China
  - Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
  - Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
  - Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Yunmin Zeng
  - Department of Automation, Tsinghua University, 100084, Beijing, China
- Quan Meng
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Xingye Chen
  - Department of Automation, Tsinghua University, 100084, Beijing, China
  - Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
  - Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
  - Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
  - Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
- Haoyu Chen
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Tao Jiang
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Rongfei Wei
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Jiabao Guo
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Wenfeng Fu
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Huaide Lu
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Di Li
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Yuwang Wang
  - Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
- Hui Qiao
  - Department of Automation, Tsinghua University, 100084, Beijing, China
  - Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
  - Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
  - Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Jiamin Wu
  - Department of Automation, Tsinghua University, 100084, Beijing, China
  - Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
  - Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
  - Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Dong Li
  - National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
  - College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Qionghai Dai
  - Department of Automation, Tsinghua University, 100084, Beijing, China
  - Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
  - Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
  - Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
11. Ibrahim KA, Naidu AS, Miljkovic H, Radenovic A, Yang W. Label-Free Techniques for Probing Biomolecular Condensates. ACS Nano 2024; 18:10738-10757. PMID: 38609349; DOI: 10.1021/acsnano.4c01534.
Abstract
Biomolecular condensates play important roles in a wide array of fundamental biological processes, such as cellular compartmentalization, cellular regulation, and other biochemical reactions. Since their discovery and first observations, an extensive library of tools has been developed to investigate their various properties, encompassing structural and compositional information, material properties, and their evolution throughout the life cycle from formation to eventual dissolution. This Review presents an overview of the expanded set of tools and methods that researchers use to probe the properties of biomolecular condensates across diverse scales of length, concentration, stiffness, and time. In particular, we review the exciting development of label-free techniques and methodologies in recent years. We broadly organize the set of tools into three categories: (1) imaging-based techniques, such as transmitted-light microscopy (TLM) and Brillouin microscopy (BM), (2) force spectroscopy techniques, such as atomic force microscopy (AFM) and optical tweezers (OT), and (3) microfluidic platforms and emerging technologies. We point out the tools' key opportunities, challenges, and future perspectives and analyze their correlative potential as well as compatibility with other techniques. Additionally, we review emerging techniques, namely, differential dynamic microscopy (DDM) and interferometric scattering microscopy (iSCAT), that have huge potential for future applications in studying biomolecular condensates. Finally, we highlight how some of these techniques can be translated for diagnostics and therapy purposes. We hope this Review serves as a useful guide for new researchers in this field and aids in advancing the development of new biophysical tools to study biomolecular condensates.
12
Geng Z, Sun Z, Chen Y, Lu X, Tian T, Cheng G, Li X. Multi-input mutual supervision network for single-pixel computational imaging. Opt Express 2024; 32:13224-13234. [PMID: 38859298 DOI: 10.1364/oe.510683] [Received: 10/31/2023] [Accepted: 02/02/2024] [Indexed: 06/12/2024]
Abstract
In this study, we propose a single-pixel computational imaging method based on a multi-input mutual supervision network (MIMSN). We input a one-dimensional (1D) light intensity signal and a two-dimensional (2D) random image signal into the MIMSN, enabling the network to learn the correlation between the two signals and achieve information complementarity. The 2D signal provides spatial information to the reconstruction process, reducing the uncertainty of the reconstructed image, while mutual supervision of the reconstruction results for the two signals brings the reconstruction objective closer to the ground-truth image. The 2D images generated by the MIMSN can be used as inputs for subsequent iterations, continuously merging prior information to ensure high-quality imaging at low sampling rates. The reconstruction network does not require pretraining, and the 1D signals collected by a single-pixel detector serve as labels for the network, enabling high-quality image reconstruction in unfamiliar environments. The method holds particular promise for applications in scattering environments.
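As background for the single-pixel setting, the acquisition can be sketched with a generic ghost-imaging-style toy model. This is an illustration of the measurement principle only, not the authors' MIMSN, and every name in it is invented: each random 2D pattern projected onto the scene yields a single bucket-detector intensity (the 1D signal), and correlating the patterns with the recorded intensities recovers a coarse image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene (the unknown 2D object): a bright square on a dark background.
n = 16
scene = np.zeros((n, n))
scene[5:11, 5:11] = 1.0

# Single-pixel acquisition: project M random binary patterns and record one
# intensity value (the 1D signal) per pattern with a bucket detector.
M = 4000
patterns = rng.integers(0, 2, size=(M, n * n)).astype(float)
intensities = patterns @ scene.ravel()

# Correlation (differential ghost imaging) estimate of the scene: pixels that
# co-vary with the measured intensities are the bright ones.
recon = (intensities - intensities.mean()) @ (patterns - patterns.mean(axis=0))
recon = recon.reshape(n, n) / M
```

A learned reconstructor such as MIMSN replaces the naive correlation step, which is how it reaches usable quality at much lower sampling rates.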
13
Chen R, Xu J, Wang B, Ding Y, Abdulla A, Li Y, Jiang L, Ding X. SpiDe-Sr: blind super-resolution network for precise cell segmentation and clustering in spatial proteomics imaging. Nat Commun 2024; 15:2708. [PMID: 38548720 PMCID: PMC10978886 DOI: 10.1038/s41467-024-46989-z] [Received: 09/07/2023] [Accepted: 03/15/2024] [Indexed: 04/01/2024]
Abstract
Spatial proteomics elucidates cellular biochemical changes at an unprecedented topological level. Imaging mass cytometry (IMC) is a high-dimensional, single-cell-resolution platform for targeted spatial proteomics. However, the precision of subsequent clinical analysis is constrained by imaging noise and resolution. Here, we propose SpiDe-Sr, a super-resolution network embedded with a denoising module for IMC spatial resolution enhancement. SpiDe-Sr effectively resists noise and improves resolution by a factor of four. We demonstrate SpiDe-Sr on cells, mouse tissues, and human tissues, obtaining 18.95%/27.27%/21.16% increases in peak signal-to-noise ratio and 15.95%/31.63%/15.52% increases in cell extraction accuracy, respectively. We further apply SpiDe-Sr to study the tumor microenvironment of a 20-patient clinical breast cancer cohort with 269,556 single cells and discover that the invasion of Gram-negative bacteria is positively correlated with carcinogenesis markers and negatively correlated with immunological markers. Additionally, SpiDe-Sr is compatible with fluorescence microscopy imaging, suggesting SpiDe-Sr as an alternative tool for microscopy image super-resolution.
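For context, the peak signal-to-noise ratio (PSNR) behind the percentage gains above is a simple pixel-wise metric. A minimal sketch of the standard definition follows; this is generic code with invented toy data, not the paper's evaluation pipeline.

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(1)
clean = rng.random((32, 32))                                # ground truth
noisy = clean + 0.10 * rng.standard_normal(clean.shape)     # raw image
restored = clean + 0.05 * rng.standard_normal(clean.shape)  # after restoration

# A reported "percentage increase in PSNR" compares numbers like these two.
gain = (psnr(clean, restored) - psnr(clean, noisy)) / psnr(clean, noisy)
```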
Grants
- This work was supported by the National Key R&D Program of China (2022YFC2601700, 2022YFF0710202), NSFC Projects (T2122002, 22077079, 81871448), Shanghai Municipal Science and Technology Project (22Z510202478), Shanghai Municipal Education Commission Project (21SG10), Shanghai Jiao Tong University Projects (YG2021ZD19, Agri-X20200101, 2020 SJTU-HUJI), and Shanghai Municipal Health Commission Project (2019CXJQ03). We thank AEMD SJTU and the Shanghai Jiao Tong University Laboratory Animal Center for their support.
Affiliation(s)
- Rui Chen
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Jiasu Xu
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Boqian Wang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yi Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Aynur Abdulla
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yiyang Li
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Lai Jiang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Xianting Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China.
14
Tabata K, Kawagoe H, Taylor JN, Mochizuki K, Kubo T, Clement JE, Kumamoto Y, Harada Y, Nakamura A, Fujita K, Komatsuzaki T. On-the-fly Raman microscopy guaranteeing the accuracy of discrimination. Proc Natl Acad Sci U S A 2024; 121:e2304866121. [PMID: 38483992 PMCID: PMC10962959 DOI: 10.1073/pnas.2304866121] [Received: 04/03/2023] [Accepted: 12/15/2023] [Indexed: 03/19/2024]
Abstract
Accelerating measurements for the discrimination of samples, such as classification of cell phenotypes, is crucial when faced with significant time and cost constraints. Spontaneous Raman microscopy offers label-free, rich chemical information but suffers from long acquisition times due to extremely small scattering cross-sections. One possible approach to accelerate the measurement is to measure only the necessary parts with a suitable number of illumination points. However, how to design these points during the measurement remains a challenge. To address this, we developed an imaging technique based on reinforcement learning, a branch of machine learning (ML). This ML approach adaptively feeds back an "optimal" illumination pattern during the measurement to detect the existence of specific characteristics of interest, allowing faster measurements while guaranteeing discrimination accuracy. Using a set of Raman images of human follicular thyroid and follicular thyroid carcinoma cells, we showed that our technique requires 3,333 to 31,683 times fewer illuminations than raster scanning to discriminate the phenotypes. To quantitatively evaluate the number of illuminations required for a requisite discrimination accuracy, we prepared a set of polymer bead mixture samples to model anomalous and normal tissues. We then applied a home-built programmable-illumination microscope equipped with our algorithm and confirmed that the system can discriminate the sample conditions with 104 to 4,350 times fewer illuminations than standard point-illumination Raman microscopy. The proposed algorithm can be applied to other types of microscopy that can control measurement conditions on the fly, offering an approach for the acceleration of accurate measurements in various applications, including medical diagnosis.
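The stop-when-confident idea can be illustrated with a sequential likelihood-ratio test: keep illuminating only until the accumulated evidence crosses a decisiveness threshold. This Gaussian toy is a sketch of the principle only; the paper's method uses reinforcement learning over illumination patterns, and every name and parameter below is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def measure(anomalous):
    """One illumination: a noisy signal whose mean depends on the sample class."""
    return rng.normal(1.0 if anomalous else 0.0, 1.0)

def classify_adaptively(anomalous, threshold=3.0, max_shots=10_000):
    """Sequentially illuminate until the cumulative log-likelihood ratio of
    'mean 1' versus 'mean 0' is decisive, then stop and classify."""
    llr, shots = 0.0, 0
    while abs(llr) < threshold and shots < max_shots:
        x = measure(anomalous)
        llr += x - 0.5  # log N(x; 1, 1) - log N(x; 0, 1)
        shots += 1
    return llr > 0, shots
```

Raising the threshold buys accuracy at the cost of more illuminations, mirroring the accuracy-versus-measurement-count trade-off the paper quantifies.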
Affiliation(s)
- Koji Tabata
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
| | - Hiroyuki Kawagoe
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
| | - J. Nicholas Taylor
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
| | - Kentaro Mochizuki
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
| | - Toshiki Kubo
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
| | - Jean-Emmanuel Clement
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
| | - Yasuaki Kumamoto
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
| | - Yoshinori Harada
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
| | - Atsuyoshi Nakamura
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Hokkaido, Japan
| | - Katsumasa Fujita
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Advanced Photonics and Biosensing Open Innovation Laboratory, AIST-Osaka University, Suita 565-0871, Osaka, Japan
| | - Tamiki Komatsuzaki
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Graduate School of Chemical Sciences and Engineering, Materials Chemistry and Engineering Course, Hokkaido University, Sapporo 060-0812, Hokkaido, Japan
- The Institute of Scientific and Industrial Research, Osaka University, Ibaraki 567-0047, Osaka, Japan
15
Song D, Zhang X, Li B, Sun Y, Mei H, Cheng X, Li J, Cheng X, Fang N. Deep Learning-Assisted Automated Multidimensional Single Particle Tracking in Living Cells. Nano Lett 2024; 24:3082-3088. [PMID: 38416583 DOI: 10.1021/acs.nanolett.3c04870] [Indexed: 03/01/2024]
Abstract
The translational and rotational dynamics of anisotropic optical nanoprobes revealed in single particle tracking (SPT) experiments offer molecular-level information about cellular activities. Here, we report an automated high-speed multidimensional SPT system integrated with a deep learning algorithm for tracking the 3D orientation of anisotropic gold nanoparticle probes in living cells with high localization precision (<10 nm) and temporal resolution (0.9 ms), overcoming the limitations of rotational tracking under low signal-to-noise ratio (S/N) conditions. The method resolves the azimuthal angle (0°-360°) and polar angle (0°-90°) with errors of less than 2° on experimental and simulated data at an S/N of ∼4. Even when the S/N approaches the limit of 1, it maintains better robustness and noise resistance than conventional pattern-matching methods. The usefulness of this multidimensional SPT system is demonstrated with a study of the motion of cargos transported along microtubules within living cells.
Affiliation(s)
- Dongliang Song
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
| | - Xin Zhang
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
| | - Baoyun Li
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
| | - Yuanfang Sun
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
| | - Huihui Mei
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
| | - Xiaojuan Cheng
- School of Pharmaceutical Sciences, Wenzhou Medical University, Wenzhou, China, 325035
| | - Jieming Li
- Bristol Myers Squibb Company, New Brunswick, New Jersey 08901, United States
| | - Xiaodong Cheng
- School of Pharmaceutical Sciences, Wenzhou Medical University, Wenzhou, China, 325035
| | - Ning Fang
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, China, 361005
16
Xiao D, Kedem Orange R, Opatovski N, Parizat A, Nehme E, Alalouf O, Shechtman Y. Large-FOV 3D localization microscopy by spatially variant point spread function generation. Sci Adv 2024; 10:eadj3656. [PMID: 38457497 PMCID: PMC10923516 DOI: 10.1126/sciadv.adj3656] [Received: 06/27/2023] [Accepted: 02/05/2024] [Indexed: 03/10/2024]
Abstract
Accurate characterization of the microscope's point spread function (PSF) is crucial for achieving high-performance localization microscopy (LM). Traditionally, LM assumes a spatially invariant PSF to simplify the modeling of the imaging system. However, for imaging over large fields of view (FOV), it becomes important to account for the spatially variant nature of the PSF. Here, we propose an accurate and fast principal component analysis-based field-dependent 3D PSF generator (PPG3D) and localizer for LM. Through simulations and experimental three-dimensional (3D) single-molecule localization microscopy (SMLM), we demonstrate the effectiveness of PPG3D, enabling super-resolution imaging of mitochondria and microtubules with high fidelity over a large FOV. A comparison of PPG3D with a shift-variant PSF generator for 3D LM reveals a threefold improvement in accuracy. Moreover, PPG3D is approximately 100 times faster than existing PSF generators when used in image-plane-based interpolation mode. Given its user-friendliness, we believe that PPG3D holds great potential for widespread application in SMLM and other imaging modalities.
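The core trick of a PCA-based field-dependent PSF generator can be sketched in a few lines: stack PSFs sampled across the field, keep a few principal components, and represent any field position by smoothly varying coefficients. This is a generic illustration with invented parameters and a toy Gaussian PSF family, not the published PPG3D implementation.

```python
import numpy as np

def gaussian_psf(n, sigma_x, sigma_y):
    """Normalized 2D Gaussian PSF on an n x n grid."""
    ax = np.arange(n) - n / 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 / (2 * sigma_x**2) + yy**2 / (2 * sigma_y**2)))
    return psf / psf.sum()

# Simulated field dependence: the PSF broadens toward the edge of the FOV.
n = 15
radii = np.linspace(0.0, 1.0, 20)
psfs = np.stack([gaussian_psf(n, 1.5 + r, 1.5 + 0.3 * r) for r in radii])

# PCA via SVD of the mean-subtracted PSF stack.
flat = psfs.reshape(len(radii), -1)
mean = flat.mean(axis=0)
U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)

# A handful of components reproduces every PSF in the stack; the per-position
# coefficients (U * S) vary smoothly with field position, so PSFs at unseen
# positions can be generated by interpolating coefficients instead of storing
# full PSFs.
k = 3
coeffs = U[:, :k] * S[:k]
approx = mean + coeffs @ Vt[:k]
max_err = np.max(np.abs(approx - flat))
```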
Affiliation(s)
- Dafei Xiao
- Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
| | - Reut Kedem Orange
- Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
| | - Nadav Opatovski
- Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
| | - Amit Parizat
- Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
| | - Elias Nehme
- Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
- Department of Electrical and Computer Engineering, Technion—Israel Institute of Technology, Haifa, Israel
| | - Onit Alalouf
- Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
| | - Yoav Shechtman
- Russell Berrie Nanotechnology Institute, Technion—Israel Institute of Technology, Haifa, Israel
- Department of Biomedical Engineering, Technion—Israel Institute of Technology, Haifa, Israel
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA
17
Trieu Q, Nehmetallah G. Deep learning based coherence holography reconstruction of 3D objects. Appl Opt 2024; 63:B1-B15. [PMID: 38437250 DOI: 10.1364/ao.503034] [Received: 08/09/2023] [Accepted: 10/12/2023] [Indexed: 03/06/2024]
Abstract
We propose a reconstruction method for coherence holography using deep neural networks. cGAN and U-Net models were developed to reconstruct 3D complex objects from recorded interferograms. Our proposed method, dubbed deep coherence holography (DCH), predicts the non-diffracted fields or the sub-objects included in the 3D object from the captured interferograms, yielding better reconstructed objects than traditional analytical imaging methods in terms of accuracy, resolution, and time. The DCH needs one image per sub-object as opposed to N images for the traditional sin-fit algorithm, and hence the total reconstruction time is reduced by a factor of N. Furthermore, with noisy interferograms, the DCH amplitude mean square reconstruction error (MSE) is 5×10⁴ and 10⁴ times better, and the phase MSE is 10² and 3×10³ times better, than the Fourier fringe and sin-fit algorithms, respectively. The amplitude peak signal-to-noise ratio (PSNR) is 3 and 2 times better, and the phase PSNR is 5 and 3 times better, than the Fourier fringe and sin-fit algorithms, respectively. The reconstruction resolution is the same as sin-fit but 2 times better than the Fourier fringe analysis technique.
18
Lei M, Zhao J, Zhou J, Lee H, Wu Q, Burns Z, Chen G, Liu Z. Super resolution label-free dark-field microscopy by deep learning. Nanoscale 2024; 16:4703-4709. [PMID: 38268454 DOI: 10.1039/d3nr04294d] [Indexed: 01/26/2024]
Abstract
Dark-field microscopy (DFM) is a powerful label-free, high-contrast imaging technique due to its ability to reveal features of transparent specimens with inhomogeneities. However, owing to Abbe's diffraction limit, fine structures at the sub-wavelength scale are difficult to resolve. In this work, we report a single-image super-resolution DFM scheme using a convolutional neural network (CNN). A U-Net-based CNN is trained with a dataset that is numerically simulated based on the forward physical model of the DFM. The forward physical model, described by the parameters of the imaging setup, connects the object ground truths and dark-field images. With the trained network, we demonstrate super-resolution dark-field imaging of various test samples with a twofold resolution improvement. Our technique illustrates a promising deep learning approach to double the resolution of DFM without any hardware modification.
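Simulating training data from a forward physical model, as described above, can be sketched generically: blur a synthetic scatterer map with a diffraction-limited transfer function and add sensor noise to form an (input, ground truth) pair. The Gaussian OTF and all parameters here are invented stand-ins for the paper's imaging-setup model, not its actual physics.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(obj, sigma=2.0, noise=0.01):
    """Toy forward model: diffraction-limited Gaussian blur plus sensor noise."""
    f = np.fft.fftfreq(obj.shape[0])
    fx, fy = np.meshgrid(f, f)
    otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))
    return blurred + noise * rng.standard_normal(obj.shape)

# Ground truth: sparse point scatterers on a dark background.
obj = np.zeros((64, 64))
obj[tuple(rng.integers(8, 56, size=(2, 12)))] = 1.0

# One simulated training pair: a network learns to map `image` back to `obj`.
image = forward_model(obj)
```

Training on many such pairs is what lets a single-image network invert the blur without hardware changes.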
Affiliation(s)
- Ming Lei
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Junxiao Zhou
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Hongki Lee
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Qianyi Wu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Guanghao Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Materials Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
19
Kim S, Lee J, Ko J, Park S, Lee SR, Kim Y, Lee T, Choi S, Kim J, Kim W, Chung Y, Kwon OH, Jeon NL. Angio-Net: deep learning-based label-free detection and morphometric analysis of in vitro angiogenesis. Lab Chip 2024; 24:751-763. [PMID: 38193617 DOI: 10.1039/d3lc00935a] [Indexed: 01/10/2024]
Abstract
Despite significant advancements in three-dimensional (3D) cell culture technology and the acquisition of extensive data, there is an ongoing need for more effective and dependable data analysis methods, as the field continues to rely on manual quantification techniques. In this study, we introduce a microphysiological system (MPS) that seamlessly integrates 3D cell culture to acquire large-scale imaging data and employs deep learning-based virtual staining for quantitative angiogenesis analysis. We utilize a standardized microfluidic device to obtain comprehensive angiogenesis data. We introduce Angio-Net, a novel solution that replaces conventional immunocytochemistry by converting brightfield images into label-free virtual fluorescence images through the fusion of SegNet and cGAN. Moreover, we develop a tool capable of extracting morphological blood vessel features and automating their measurement, facilitating precise quantitative analysis. This integrated system proves invaluable for evaluating drug efficacy, including the assessment of anticancer drugs on targets such as the tumor microenvironment. Additionally, its unique ability to enable live-cell imaging without the need for cell fixation promises to broaden the horizons of pharmaceutical and biological research. Our study pioneers a powerful approach to high-throughput angiogenesis analysis, marking a significant advancement in MPS technology.
Affiliation(s)
- Suryong Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Jungseub Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Jihoon Ko
- Department of BioNano Technology, Gachon University, Gyeonggi, 13120, Republic of Korea
| | - Seonghyuk Park
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Seung-Ryeol Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Youngtaek Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Taeseung Lee
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Sunbeen Choi
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Jiho Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Wonbae Kim
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
| | - Yoojin Chung
- Division of Computer Engineering, Hankuk University of Foreign Studies, Yongin, 17035, Republic of Korea
| | - Oh-Heum Kwon
- Department of IT convergence and Applications Engineering, Pukyong National University, Busan, 48513, Republic of Korea
| | - Noo Li Jeon
- Department of Mechanical Engineering, Seoul National University, Seoul, 08826, Republic of Korea.
- Institute of Advanced Machines and Design, Seoul National University, Seoul, 08826, Republic of Korea
20
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Indexed: 02/08/2024]
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
21
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited to accurately predicting images in between image pairs, thereby improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets obtained from four different microscopy modalities and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity, enabling improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
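The "standard interpolation" baseline that CAFI is compared against is easy to state in code. Naive pixel-wise blending ghosts a moving structure into two faint copies instead of placing it at its true intermediate position, which is exactly the failure mode a motion-aware network avoids. This toy example is an illustration, not the CAFI implementation:

```python
import numpy as np

def linear_interpolate(frame_a, frame_b, t=0.5):
    """Standard baseline: pixel-wise blend of two frames at time t in [0, 1]."""
    return (1.0 - t) * frame_a + t * frame_b

# A bright particle that moves two pixels between consecutive frames.
a = np.zeros((8, 8)); a[4, 2] = 1.0
b = np.zeros((8, 8)); b[4, 4] = 1.0

# The blend leaves two half-intensity ghosts at the old and new positions and
# nothing at the true midpoint (4, 3); a motion-aware network would instead
# place a single full-intensity particle there.
mid = linear_interpolate(a, b)
```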
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
22
Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. Comput Methods Programs Biomed 2024; 244:107991. [PMID: 38185040 DOI: 10.1016/j.cmpb.2023.107991] [Received: 10/02/2023] [Revised: 12/10/2023] [Accepted: 12/19/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND OBJECTIVE: Current methods for imaging reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures.
METHODS: We devised an innovative approach called the IsoGAN model, which utilizes a contrastive unsupervised generative adversarial network to sidestep these constraints. This model leverages multi-scale and isotropic neuron/protein/blood vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. The IsoGAN model introduces simplified structures with idealized morphologies as shape priors to ensure high consistency in the generated neuronal profiles across all points in space and scalability to arbitrarily large volumes.
RESULTS: The IsoGAN model accurately reconstructed complex neuronal structures, as assessed quantitatively by the consistency between the axial and lateral views and a reduction in erroneous imaging artifacts, and can be further applied to various biological samples.
CONCLUSION: With its ability to generate detailed 3D neuron/protein/blood vessel structures using significantly fewer axial-view images, IsoGAN can streamline imaging reconstruction while maintaining the necessary detail, offering a transformative solution to the existing limitations in high-throughput morphology analysis across different structures.
Affiliation(s)
- Gary Han Chang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC; Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC.
| | - Meng-Yun Wu
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ling-Hui Yen
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Da-Yu Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ya-Hui Lin
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Yi-Ru Luo
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Ya-Ding Liu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Bin Xu
- Department of Psychiatry, Columbia University, New York, NY 10032, USA
| | - Kam W Leong
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
| | - Wen-Sung Lai
- Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
| | - Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC; Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Kuo-Chuan Wang
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Chin-Hsien Lin
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Shih-Luen Wang
- Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
| | - Li-An Chu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC.
| |
23
Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024; 170:227-241. [PMID: 37992510 DOI: 10.1016/j.neunet.2023.11.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/06/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023]
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, the presence of optical component limitations, coupled with the maximum photon budget that the specimen can tolerate, inevitably leads to a decline in imaging quality and a lack of useful signals. Therefore, image restoration becomes essential for ensuring high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for the purpose of reducing noise in microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates wavelet transform and inverse-transform for multi-resolution image decomposition and reconstruction, resulting in an expanded receptive field for the network without compromising information integrity. Subsequently, multiple consecutive parallel CNN-Transformer modules are utilized to collaboratively model local and global dependencies, thus facilitating the extraction of more comprehensive and diversified deep features. In addition, the incorporation of generative adversarial networks (GANs) into WECT enhances its capacity to generate high perceptual quality microscopic images. Extensive experiments have demonstrated that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of quantitative and qualitative analysis.
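The wavelet decomposition/reconstruction idea behind WECT can be shown with a minimal one-level Haar transform in pure Python (the paper's actual transform and wavelet choice may differ); the point is that splitting a signal into a low-resolution approximation and high-frequency details loses no information:

```python
def haar_forward(x):
    """One-level 1D Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward: the decomposition is lossless."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 8.0, 0.0]
a, d = haar_forward(signal)           # a: half-rate blurred view, d: residual detail
assert haar_inverse(a, d) == signal   # perfect reconstruction
print(a, d)
```

Repeating the forward step on the approximation coefficients yields the multi-resolution pyramid that widens a network's effective receptive field without discarding content.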
Affiliation(s)
- Qinghua Wang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Ziwei Li
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China.
| | - Shuqi Zhang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Nan Chi
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China.
| | - Qionghai Dai
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China.
| |
24
Li H, Sun X, Cui W, Xu M, Dong J, Ekundayo BE, Ni D, Rao Z, Guo L, Stahlberg H, Yuan S, Vogel H. Computational drug development for membrane protein targets. Nat Biotechnol 2024; 42:229-242. [PMID: 38361054 DOI: 10.1038/s41587-023-01987-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 09/13/2023] [Indexed: 02/17/2024]
Abstract
The application of computational biology in drug development for membrane protein targets has experienced a boost from recent developments in deep learning-driven structure prediction, increased speed and resolution of structure elucidation, machine learning structure-based design and the evaluation of big data. Recent protein structure predictions based on machine learning tools have delivered surprisingly reliable results for water-soluble and membrane proteins but have limitations for development of drugs that target membrane proteins. Structural transitions of membrane proteins have a central role during transmembrane signaling and are often influenced by therapeutic compounds. Resolving the structural and functional basis of dynamic transmembrane signaling networks, especially within the native membrane or cellular environment, remains a central challenge for drug development. Tackling this challenge will require an interplay between experimental and computational tools, such as super-resolution optical microscopy for quantification of the molecular interactions of cellular signaling networks and their modulation by potential drugs, cryo-electron microscopy for determination of the structural transitions of proteins in native cell membranes and entire cells, and computational tools for data analysis and prediction of the structure and function of cellular signaling networks, as well as generation of promising drug candidates.
Affiliation(s)
- Haijian Li
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
| | - Xiaolin Sun
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
| | - Wenqiang Cui
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Marc Xu
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Junlin Dong
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Babatunde Edukpe Ekundayo
- Laboratory of Biological Electron Microscopy, IPHYS, SB, EPFL and Department of Fundamental Microbiology, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
| | - Dongchun Ni
- Laboratory of Biological Electron Microscopy, IPHYS, SB, EPFL and Department of Fundamental Microbiology, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
| | - Zhili Rao
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
| | - Liwei Guo
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China
| | - Henning Stahlberg
- Laboratory of Biological Electron Microscopy, IPHYS, SB, EPFL and Department of Fundamental Microbiology, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland.
| | - Shuguang Yuan
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China.
| | - Horst Vogel
- Center for Computer-Aided Drug Discovery, Faculty of Pharmaceutical Sciences, Shenzhen Institute of Advanced Technology/Chinese Academy of Sciences (SIAT/CAS), Shenzhen, China.
- Institut des Sciences et Ingénierie Chimiques (ISIC), Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| |
25
Ortiz-Perez A, Zhang M, Fitzpatrick LW, Izquierdo-Lozano C, Albertazzi L. Advanced optical imaging for the rational design of nanomedicines. Adv Drug Deliv Rev 2024; 204:115138. [PMID: 37980951 DOI: 10.1016/j.addr.2023.115138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 11/06/2023] [Accepted: 11/08/2023] [Indexed: 11/21/2023]
Abstract
Despite the enormous potential of nanomedicines to shape the future of medicine, their clinical translation remains suboptimal. Translational challenges are present in every step of the development pipeline, from a lack of understanding of patient heterogeneity to insufficient insights on nanoparticle properties and their impact on material-cell interactions. Here, we discuss how the adoption of advanced optical microscopy techniques, such as super-resolution optical microscopies, correlative techniques, and high-content modalities, could aid the rational design of nanocarriers, by characterizing the cell, the nanomaterial, and their interaction with unprecedented spatial and/or temporal detail. In this nanomedicine arena, we will discuss how the implementation of these techniques, with their versatility and specificity, can yield high volumes of multi-parametric data; and how machine learning can aid the rapid advances in microscopy: from image acquisition to data interpretation.
Affiliation(s)
- Ana Ortiz-Perez
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Miao Zhang
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Laurence W Fitzpatrick
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Cristina Izquierdo-Lozano
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Lorenzo Albertazzi
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands.
| |
26
Yu F, Du K, Ju X, Wang F, Li K, Chen C, Du G, Deng B, Xie H, Xiao T. Dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval based on deep learning. IUCrJ 2024; 11:73-81. [PMID: 38096037 PMCID: PMC10833393 DOI: 10.1107/S2052252523010114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 11/22/2023] [Indexed: 01/10/2024]
Abstract
Speckle-tracking X-ray imaging is an attractive candidate for dynamic X-ray imaging owing to its flexible setup and simultaneous yields of phase, transmission and scattering images. However, traditional speckle-tracking imaging methods suffer from phase distortion at locations with abrupt changes in density, which is always the case for real samples, limiting the applications of the speckle-tracking X-ray imaging method. In this paper, we report a deep-learning based method which can achieve dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval. The calibration results of a phantom show that the profile of the retrieved phase is highly consistent with the theoretical one. Experiments of polyurethane foaming demonstrated that the proposed method revealed the evolution of the complicated microstructure of the bubbles accurately. The proposed method is a promising solution for dynamic X-ray imaging with high-accuracy phase retrieval, and has extensive applications in metrology and quantitative analysis of dynamics in material science, physics, chemistry and biomedicine.
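The core operation in speckle tracking, recovering a local displacement by maximising the cross-correlation between a reference speckle pattern and the pattern seen through the sample, can be sketched in 1D (a deliberately simplified stand-in for the paper's deep-learning pipeline):

```python
import random

def rotate(signal, s):
    """Circularly shift a 1D speckle pattern by s samples."""
    s %= len(signal)
    return signal[-s:] + signal[:-s] if s else list(signal)

def best_shift(ref, moved, max_shift=20):
    """Return the displacement that maximises the cross-correlation between
    the reference pattern and the measured one. Speckle-tracking methods
    build a map of such local shifts and integrate it to get the phase
    gradient, which is where abrupt density changes cause distortion."""
    def corr(s):
        shifted = rotate(ref, s)
        return sum(a * b for a, b in zip(shifted, moved))
    return max(range(-max_shift, max_shift + 1), key=corr)

random.seed(0)
speckle = [random.random() for _ in range(256)]
displaced = rotate(speckle, 7)          # the sample displaced the pattern by 7 px
print(best_shift(speckle, displaced))   # recovers 7
```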
Affiliation(s)
- Fucheng Yu
- Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201204, People’s Republic of China
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
| | - Kang Du
- Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201204, People’s Republic of China
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
| | - Xiaolu Ju
- Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201204, People’s Republic of China
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
| | - Feixiang Wang
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
| | - Ke Li
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
| | - Can Chen
- Zhejiang Institute of Metrology, Hangzhou 310063, People’s Republic of China
| | - Guohao Du
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
| | - Biao Deng
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
| | - Honglan Xie
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
| | - Tiqiao Xiao
- Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201204, People’s Republic of China
- Shanghai Synchrotron Radiation Facility/Zhang Jiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201800, People’s Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
| |
27
Bi X, Lin L, Chen Z, Ye J. Artificial Intelligence for Surface-Enhanced Raman Spectroscopy. Small Methods 2024; 8:e2301243. [PMID: 37888799 DOI: 10.1002/smtd.202301243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Revised: 10/11/2023] [Indexed: 10/28/2023]
Abstract
Surface-enhanced Raman spectroscopy (SERS), widely acknowledged as a fingerprinting and highly sensitive analytical technique, has proven valuable in a broad range of fields including biomedicine, environmental protection, and food safety. In the continuing pursuit of more sensitive, robust, and comprehensive sensing and imaging, advances keep emerging across the whole SERS pipeline, from the design of substrates and reporter molecules, synthetic route planning, and instrument refinement to data preprocessing and analysis. Artificial intelligence (AI), created to imitate and eventually exceed human performance, has exhibited its power in learning high-level representations and recognizing complicated patterns with exceptional automaticity. Faced with intertwined influential factors and explosive data sizes, AI has therefore been increasingly leveraged in all of the above aspects of SERS, accelerating systematic optimization and deepening understanding of the underlying physics and spectral data far beyond what manual effort and conventional computation achieve. In this review, recent progress in SERS through the integration of AI is summarized, and new insights into the challenges and perspectives are provided with the aim of putting SERS on a faster track.
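As a baseline for the spectral pattern recognition that the reviewed AI methods improve upon, here is a hypothetical nearest-reference classifier on synthetic spectra; the Lorentzian band positions, widths, and heights are invented purely for illustration:

```python
import math
import random

def spectrum(peaks, n=200, noise=0.02):
    """Synthetic SERS-like spectrum: Lorentzian bands plus Gaussian noise.
    `peaks` is a list of (position, width, height) tuples."""
    return [sum(h / (1.0 + ((i - p) / w) ** 2) for p, w, h in peaks)
            + (random.gauss(0, noise) if noise else 0.0) for i in range(n)]

def cosine(a, b):
    """Cosine similarity between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))

random.seed(1)
# Two hypothetical analytes with non-overlapping fingerprint bands.
ref_a = spectrum([(50, 4, 1.0), (120, 3, 0.6)], noise=0)
ref_b = spectrum([(80, 4, 1.0), (160, 3, 0.6)], noise=0)
unknown = spectrum([(50, 4, 1.0), (120, 3, 0.6)])  # a noisy measurement of analyte A

label = "A" if cosine(unknown, ref_a) > cosine(unknown, ref_b) else "B"
print(label)  # → A
```

Deep models replace this fixed similarity measure with learned representations that cope with baseline drift, overlapping bands, and substrate variability.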
Affiliation(s)
- Xinyuan Bi
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Li Lin
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Zhou Chen
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
| | - Jian Ye
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
| |
28
Rames M, Kenison JP, Heineck D, Civitci F, Szczepaniak M, Zheng T, Shangguan J, Zhang Y, Tao K, Esener S, Nan X. Multiplexed and Millimeter-Scale Fluorescence Nanoscopy of Cells and Tissue Sections via Prism-Illumination and Microfluidics-Enhanced DNA-PAINT. Chem Biomed Imaging 2023; 1:817-830. [PMID: 38155726 PMCID: PMC10751790 DOI: 10.1021/cbmi.3c00060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Revised: 07/24/2023] [Accepted: 08/18/2023] [Indexed: 12/30/2023]
Abstract
Fluorescence nanoscopy has become increasingly powerful for biomedical research, but it has historically afforded a small field-of-view (FOV) of around 50 μm × 50 μm at once and more recently up to ∼200 μm × 200 μm. Efforts to further increase the FOV in fluorescence nanoscopy have thus far relied on the use of fabricated waveguide substrates, adding cost and sample constraints to the applications. Here we report PRism-Illumination and Microfluidics-Enhanced DNA-PAINT (PRIME-PAINT) for multiplexed fluorescence nanoscopy across millimeter-scale FOVs. Built upon the well-established prism-type total internal reflection microscopy, PRIME-PAINT achieves robust single-molecule localization with up to ∼520 μm × 520 μm single FOVs and 25-40 nm lateral resolutions. Through stitching, nanoscopic imaging over mm2 sample areas can be completed in as little as 40 min per target. An on-stage microfluidics chamber facilitates probe exchange for multiplexing and enhances image quality, particularly for formalin-fixed paraffin-embedded (FFPE) tissue sections. We demonstrate the utility of PRIME-PAINT by analyzing ∼106 caveolae structures in ∼1,000 cells and imaging entire pancreatic cancer lesions from patient tissue biopsies. By imaging from nanometers to millimeters with multiplexity and broad sample compatibility, PRIME-PAINT will be useful for building multiscale, Google-Earth-like views of biological systems.
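The single-molecule localization underlying DNA-PAINT can be illustrated with the simplest estimator, an intensity-weighted centroid on a synthetic spot; production SMLM pipelines (presumably including PRIME-PAINT's) fit a 2D Gaussian or similar model instead:

```python
import math

def centroid(img):
    """Intensity-weighted centroid of a 2D spot: the simplest single-molecule
    localizer, achieving sub-pixel precision from a pixelated image."""
    total = sum(v for row in img for v in row)
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    return cy, cx

# Synthetic PSF: Gaussian blob centred at (5.3, 4.7) px on an 11x11 ROI.
cy0, cx0, sigma = 5.3, 4.7, 1.5
img = [[math.exp(-((y - cy0) ** 2 + (x - cx0) ** 2) / (2 * sigma ** 2))
        for x in range(11)] for y in range(11)]

cy, cx = centroid(img)
print(round(cy, 2), round(cx, 2))  # close to (5.3, 4.7)
```

Repeating this over millions of blinking events, then stitching fields of view, is what turns diffraction-limited frames into a millimeter-scale nanoscopic map.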
Affiliation(s)
- Matthew J. Rames
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - John P. Kenison
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
| | - Daniel Heineck
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Fehmi Civitci
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
| | - Malwina Szczepaniak
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Ting Zheng
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
| | - Julia Shangguan
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Yujia Zhang
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Kai Tao
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Sadik Esener
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| | - Xiaolin Nan
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, 2720 South Moody Avenue, Portland, Oregon 97201, United States
- Program in Quantitative and Systems Biology, Department of Biomedical Engineering, Oregon Health & Science University, 2730 South Moody Avenue, Portland, Oregon 97201, United States
| |
29
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. Biophys Rep 2023; 3:100133. [PMID: 38026685 PMCID: PMC10663640 DOI: 10.1016/j.bpr.2023.100133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
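The ensemble idea, using the spread of member predictions as an uncertainty proxy that grows for out-of-distribution inputs, can be sketched with bootstrap-trained linear models (a toy stand-in for the image-translation networks in the paper):

```python
import random
import statistics

def train_member(data, rng):
    """Fit y = w * x by least squares on a bootstrap resample of the data;
    each resample yields one ensemble member."""
    boot = [rng.choice(data) for _ in data]
    return sum(x * y for x, y in boot) / sum(x * x for x, _ in boot)

def ensemble_predict(members, x):
    """Ensemble mean plus spread; the spread is the uncertainty proxy."""
    preds = [w * x for w in members]
    return statistics.mean(preds), statistics.stdev(preds)

random.seed(3)
true_w = 2.0
xs = [random.uniform(0.0, 1.0) for _ in range(40)]
data = [(x, true_w * x + random.gauss(0.0, 0.5)) for x in xs]

rng = random.Random(7)
members = [train_member(data, rng) for _ in range(25)]

m_near, u_near = ensemble_predict(members, 0.5)  # inside the training range
m_far, u_far = ensemble_predict(members, 5.0)    # far outside it
print(round(m_near, 1), u_far > u_near)  # prediction near 1.0; spread grows OOD
```

The correlation between ensemble spread and actual prediction error, demonstrated on real microscopy data in the paper, is what makes the spread usable as a trust signal.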
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Marie C. Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Neil Y.C. Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, California
- Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, California
- Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
| |
30
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. Laser Photonics Rev 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects, without the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state of the art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information-science problem, on novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms and to open the way for the series of articles in this field.
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence, France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics, Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
Collapse
|
31
|
Laine RF, Heil HS, Coelho S, Nixon-Abell J, Jimenez A, Wiesner T, Martínez D, Galgani T, Régnier L, Stubb A, Follain G, Webster S, Goyette J, Dauphin A, Salles A, Culley S, Jacquemet G, Hajj B, Leterrier C, Henriques R. High-fidelity 3D live-cell nanoscopy through data-driven enhanced super-resolution radial fluctuation. Nat Methods 2023; 20:1949-1956. [PMID: 37957430 PMCID: PMC10703683 DOI: 10.1038/s41592-023-02057-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/29/2023] [Indexed: 11/15/2023]
Abstract
Live-cell super-resolution microscopy enables the imaging of biological structure dynamics below the diffraction limit. Here we present enhanced super-resolution radial fluctuations (eSRRF), substantially improving image fidelity and resolution compared to the original SRRF method. eSRRF incorporates automated parameter optimization based on the data itself, giving insight into the trade-off between resolution and fidelity. We demonstrate eSRRF across a range of imaging modalities and biological systems. Notably, we extend eSRRF to three dimensions by combining it with multifocus microscopy. This realizes live-cell volumetric super-resolution imaging with an acquisition speed of ~1 volume per second. eSRRF provides an accessible super-resolution approach, maximizing information extraction across varied experimental conditions while minimizing artifacts. Its optimal parameter prediction strategy is generalizable, moving toward unbiased and optimized analyses in super-resolution microscopy.
Collapse
Affiliation(s)
- Romain F Laine
- Laboratory for Molecular Cell Biology, University College London, London, UK
- The Francis Crick Institute, London, UK
- Micrographia Bio, Translation and Innovation Hub, London, UK
| | - Hannah S Heil
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
| | - Simao Coelho
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
| | - Jonathon Nixon-Abell
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Cambridge Institute for Medical Research, Cambridge University, Cambridge, UK
| | - Angélique Jimenez
- Aix-Marseille Université, CNRS, INP UMR7051, NeuroCyto, Marseille, France
| | - Theresa Wiesner
- Aix-Marseille Université, CNRS, INP UMR7051, NeuroCyto, Marseille, France
| | - Damián Martínez
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
| | - Tommaso Galgani
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France
- Revvity Signals, Tres Cantos, Madrid, Spain
| | - Louise Régnier
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France
| | - Aki Stubb
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Department of Cell and Tissue Dynamics, Max Planck Institute for Molecular Biomedicine, Münster, Germany
| | - Gautier Follain
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
| | - Samantha Webster
- EMBL Australia Node in Single Molecule Science, School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, Australia
| | - Jesse Goyette
- EMBL Australia Node in Single Molecule Science, School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, Australia
| | - Aurelien Dauphin
- Unite Genetique et Biologie du Développement U934, PICT-IBiSA, Institut Curie, INSERM, CNRS, PSL Research University, Paris, France
| | - Audrey Salles
- Institut Pasteur, Université Paris Cité, Unit of Technology and Service Photonic BioImaging (UTechS PBI), C2RT, Paris, France
| | - Siân Culley
- Laboratory for Molecular Cell Biology, University College London, London, UK
- Randall Centre for Cell and Molecular Biophysics, King's College London, Guy's Campus, London, UK
| | - Guillaume Jacquemet
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku, Finland
| | - Bassam Hajj
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France.
| | | | - Ricardo Henriques
- Laboratory for Molecular Cell Biology, University College London, London, UK.
- The Francis Crick Institute, London, UK.
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal.
| |
Collapse
|
32
|
Guo X, Zhao F, Zhu J, Zhu D, Zhao Y, Fei P. Rapid 3D isotropic imaging of whole organ with double-ring light-sheet microscopy and self-learning side-lobe elimination. Biomed Opt Express 2023; 14:6206-6221. [PMID: 38420327 PMCID: PMC10898557 DOI: 10.1364/boe.505217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 10/26/2023] [Accepted: 10/30/2023] [Indexed: 03/02/2024]
Abstract
Bessel-like plane illumination forms a new type of light-sheet microscopy with an ultra-long optical sectioning distance that enables rapid 3D imaging of fine cellular structures across an entire large tissue. However, the side-lobe excitation of conventional Bessel light sheets severely impairs the quality of the reconstructed 3D image. Here, we propose a self-supervised deep learning (DL) approach that can completely eliminate the residual side lobes for a double-ring-modulated non-diffraction light-sheet microscope, thereby substantially improving the axial resolution of the 3D image. This lightweight DL model utilizes the microscope's own point spread function (PSF) as prior information, without the need for external high-resolution microscopy data. After a quick training process on a small number of datasets, the trained model can restore side-lobe-free 3D images with near-isotropic resolution for diverse samples. Using an advanced double-ring light-sheet microscope in conjunction with this efficient restoration approach, we demonstrate 5-minute rapid imaging of an entire mouse brain with a size of ∼12 mm × 8 mm × 6 mm and achieve a uniform isotropic resolution of ∼4 µm (1.6-µm voxel), capable of discerning single neurons and vessels across the whole brain.
Collapse
Affiliation(s)
- Xinyi Guo
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Fang Zhao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jingtan Zhu
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Dan Zhu
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Yuxuan Zhao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
| |
Collapse
|
33
|
Saguy A, Alalouf O, Opatovski N, Jang S, Heilemann M, Shechtman Y. DBlink: dynamic localization microscopy in super spatiotemporal resolution via deep learning. Nat Methods 2023; 20:1939-1948. [PMID: 37500760 DOI: 10.1038/s41592-023-01966-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 06/26/2023] [Indexed: 07/29/2023]
Abstract
Single-molecule localization microscopy (SMLM) has revolutionized biological imaging, improving the spatial resolution of traditional microscopes by an order of magnitude. However, SMLM techniques require long acquisition times, typically a few minutes, to yield a single super-resolved image, because they depend on accumulation of many localizations over thousands of recorded frames. Hence, the capability of SMLM to observe dynamics at high temporal resolution has always been limited. In this work, we present DBlink, a deep-learning-based method for super spatiotemporal resolution reconstruction from SMLM data. The input to DBlink is a recorded video of SMLM data and the output is a super spatiotemporal resolution video reconstruction. We use a convolutional neural network combined with a bidirectional long short-term memory network architecture, designed for capturing long-term dependencies between different input frames. We demonstrate DBlink performance on simulated filaments and mitochondria-like structures, on experimental SMLM data under controlled motion conditions and on live-cell dynamic SMLM. DBlink's spatiotemporal interpolation constitutes an important advance in super-resolution imaging of dynamic processes in live cells.
Collapse
Affiliation(s)
- Alon Saguy
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
| | - Onit Alalouf
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
| | - Nadav Opatovski
- Russell Berrie Nanotechnology Institute, Technion-Israel Institute of Technology, Haifa, Israel
| | - Soohyen Jang
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Goethe-University Frankfurt, Frankfurt, Germany
| | - Mike Heilemann
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Goethe-University Frankfurt, Frankfurt, Germany
| | - Yoav Shechtman
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel.
| |
Collapse
|
34
|
Ibrahim KA, Grußmayer KS, Riguet N, Feletti L, Lashuel HA, Radenovic A. Label-free identification of protein aggregates using deep learning. Nat Commun 2023; 14:7816. [PMID: 38016971 PMCID: PMC10684545 DOI: 10.1038/s41467-023-43440-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Accepted: 11/09/2023] [Indexed: 11/30/2023] Open
Abstract
Protein misfolding and aggregation play central roles in the pathogenesis of various neurodegenerative diseases (NDDs), including Huntington's disease, which is caused by a genetic mutation in exon 1 of the Huntingtin protein (Httex1). The fluorescent labels commonly used to visualize and monitor the dynamics of protein expression have been shown to alter the biophysical properties of proteins and the final ultrastructure, composition, and toxic properties of the formed aggregates. To overcome this limitation, we present a method for label-free identification of NDD-associated aggregates (LINA). Our approach utilizes deep learning to detect unlabeled and unaltered Httex1 aggregates in living cells from transmitted-light images, without the need for fluorescent labeling. Our models are robust across imaging conditions and on aggregates formed by different constructs of Httex1. LINA enables the dynamic identification of label-free aggregates and measurement of their dry mass and area changes during their growth process, offering high speed, specificity, and simplicity to analyze protein aggregation dynamics and obtain high-fidelity information.
Collapse
Affiliation(s)
- Khalid A Ibrahim
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Kristin S Grußmayer
- Department of Bionanoscience and Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, Netherlands.
| | - Nathan Riguet
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Lely Feletti
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Hilal A Lashuel
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Aleksandra Radenovic
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| |
Collapse
|
35
|
Xu L, Kan S, Yu X, Liu Y, Fu Y, Peng Y, Liang Y, Cen Y, Zhu C, Jiang W. Deep learning enables stochastic optical reconstruction microscopy-like superresolution image reconstruction from conventional microscopy. iScience 2023; 26:108145. [PMID: 37867953 PMCID: PMC10587619 DOI: 10.1016/j.isci.2023.108145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 08/05/2023] [Accepted: 10/02/2023] [Indexed: 10/24/2023] Open
Abstract
Despite its remarkable potential for transforming low-resolution images, deep learning faces significant challenges in achieving high-quality superresolution microscopy imaging from wide-field (conventional) microscopy. Here, we present X-Microscopy, a computational tool comprising two deep learning subnets, UR-Net-8 and X-Net, which enables STORM-like superresolution microscopy image reconstruction from wide-field images with input-size flexibility. X-Microscopy was trained using samples of various subcellular structures, including cytoskeletal filaments, dot-like, beehive-like, and nanocluster-like structures, to generate prediction models capable of producing images of comparable quality to STORM-like images. In addition to enabling multicolour superresolution image reconstructions, X-Microscopy also facilitates superresolution image reconstruction from different conventional microscopic systems. The capabilities of X-Microscopy offer promising prospects for making superresolution microscopy accessible to a broader range of users, going beyond the confines of well-equipped laboratories.
Collapse
Affiliation(s)
- Lei Xu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
| | - Shichao Kan
- School of Computer Science and Engineering, Central South University, Changsha, Hunan 410083, China
| | - Xiying Yu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ye Liu
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yuxia Fu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Yiqiang Peng
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yanhui Liang
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yigang Cen
- Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
| | - Changjun Zhu
- Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
| | - Wei Jiang
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| |
Collapse
|
36
|
Barentine AES, Lin Y, Courvan EM, Kidd P, Liu M, Balduf L, Phan T, Rivera-Molina F, Grace MR, Marin Z, Lessard M, Rios Chen J, Wang S, Neugebauer KM, Bewersdorf J, Baddeley D. An integrated platform for high-throughput nanoscopy. Nat Biotechnol 2023; 41:1549-1556. [PMID: 36914886 PMCID: PMC10497732 DOI: 10.1038/s41587-023-01702-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 02/02/2023] [Indexed: 03/16/2023]
Abstract
Single-molecule localization microscopy enables three-dimensional fluorescence imaging at tens-of-nanometer resolution, but requires many camera frames to reconstruct a super-resolved image. This limits the typical throughput to tens of cells per day. While frame rates can now be increased by over an order of magnitude, the large data volumes become limiting in existing workflows. Here we present an integrated acquisition and analysis platform leveraging microscopy-specific data compression, distributed storage and distributed analysis to enable an acquisition and analysis throughput of 10,000 cells per day. The platform facilitates graphically reconfigurable analyses to be automatically initiated from the microscope during acquisition and remotely executed, and can even feed back and queue new acquisition tasks on the microscope. We demonstrate the utility of this framework by imaging hundreds of cells per well in multi-well sample formats. Our platform, implemented within the PYthon-Microscopy Environment (PYME), is easily configurable to control custom microscopes, and includes a plugin framework for user-defined extensions.
Collapse
Affiliation(s)
- Andrew E S Barentine
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Yu Lin
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Edward M Courvan
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Molecular Biophysics and Biochemistry, Yale School of Medicine, New Haven, CT, USA
| | - Phylicia Kidd
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
| | - Miao Liu
- Department of Genetics, Yale School of Medicine, New Haven, CT, USA
| | - Leonhard Balduf
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Computer Science and Mathematics, University of Applied Sciences, Munich, Germany
| | - Timy Phan
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Computer Science and Mathematics, University of Applied Sciences, Munich, Germany
| | | | - Michael R Grace
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
| | - Zach Marin
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Auckland Bioengineering Institute at University of Auckland, Auckland, New Zealand
| | - Mark Lessard
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
| | - Juliana Rios Chen
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
| | - Siyuan Wang
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Genetics, Yale School of Medicine, New Haven, CT, USA
| | - Karla M Neugebauer
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA
- Department of Molecular Biophysics and Biochemistry, Yale School of Medicine, New Haven, CT, USA
| | - Joerg Bewersdorf
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA.
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Department of Physics, Yale University, New Haven, CT, USA.
- Nanobiology Institute, Yale University, West Haven, CT, USA.
| | - David Baddeley
- Department of Cell Biology, Yale School of Medicine, New Haven, CT, USA.
- Auckland Bioengineering Institute at University of Auckland, Auckland, New Zealand.
- Nanobiology Institute, Yale University, West Haven, CT, USA.
| |
Collapse
|
37
|
Yu X, Luan S, Lei S, Huang J, Liu Z, Xue X, Ma T, Ding Y, Zhu B. Deep learning for fast denoising filtering in ultrasound localization microscopy. Phys Med Biol 2023; 68:205002. [PMID: 37703894 DOI: 10.1088/1361-6560/acf98f] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 09/13/2023] [Indexed: 09/15/2023]
Abstract
Objective. Addition of a denoising filter step in ultrasound localization microscopy (ULM) has been shown to effectively reduce erroneous localizations of microbubbles (MBs) and achieve resolution improvement for super-resolution ultrasound (SR-US) imaging. However, previous image-denoising methods (e.g. block-matching 3D, BM3D) require long data processing times, limiting ULM to offline processing. This work introduces a new way to reduce data processing time through deep learning. Approach. In this study, we propose deep learning (DL) denoising based on a contrastive semi-supervised network (CS-Net). The neural network is trained mainly on simulated MB data to extract MB signals from noise. The performance of CS-Net denoising is evaluated in both an in vitro flow phantom experiment and an in vivo experiment on a New Zealand rabbit tumor. Main results. For the in vitro flow phantom experiment, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of a single microbubble image were 26.91 dB and 4.01 dB, respectively. For the in vivo animal experiment, the SNR and CNR were 12.29 dB and 6.06 dB. In addition, a single microvessel of 24 µm and two microvessels separated by 46 µm could be clearly displayed. Most importantly, the CS-Net denoising speeds for the in vitro and in vivo experiments were 0.041 s and 0.062 s per frame, respectively. Significance. DL denoising based on CS-Net can improve the resolution of SR-US while reducing denoising time, thereby contributing further to real-time clinical imaging with ULM.
Collapse
Affiliation(s)
- Xiangyang Yu
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | - Shunyao Luan
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | - Shuang Lei
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | - Jing Huang
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | - Zeqing Liu
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | - Xudong Xue
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
| | - Teng Ma
- The Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
| | - Yi Ding
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
| | - Benpeng Zhu
- School of Integrated Circuits, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| |
Collapse
|
38
|
You Q, Lowerison MR, Shin Y, Chen X, Sekaran NVC, Dong Z, Llano DA, Anastasio MA, Song P. Contrast-Free Super-Resolution Power Doppler (CS-PD) Based on Deep Neural Networks. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; 70:1355-1368. [PMID: 37566494 PMCID: PMC10619974 DOI: 10.1109/tuffc.2023.3304527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/13/2023]
Abstract
Super-resolution ultrasound microvessel imaging based on ultrasound localization microscopy (ULM) is an emerging imaging modality that is capable of resolving micrometer-scale vessels deep in tissue. In practice, ULM is limited by the need for contrast injection, long data acquisition, and computationally expensive postprocessing. In this study, we present a contrast-free super-resolution power Doppler (CS-PD) technique that uses deep networks to achieve super-resolution with short data acquisition. The training dataset comprises spatiotemporal ultrafast ultrasound signals acquired from in vivo mouse brains, while the testing dataset includes in vivo mouse brain, chicken embryo chorioallantoic membrane (CAM), and healthy human subjects. The in vivo mouse imaging studies demonstrate that CS-PD could achieve an approximately twofold improvement in spatial resolution when compared with conventional power Doppler. In addition, the microvascular images generated by CS-PD showed good agreement with the corresponding ULM images, as indicated by a structural similarity index of 0.7837 and a peak signal-to-noise ratio (PSNR) of 25.52. Moreover, CS-PD was able to preserve a temporal profile of the blood flow (e.g., pulsatility) similar to that of conventional power Doppler. Finally, the generalizability of CS-PD was demonstrated on testing data from different tissues acquired with different imaging settings. The fast inference time of the proposed deep neural network also allows CS-PD to be implemented for real-time imaging. These features make CS-PD a practical, fast, and robust microvascular imaging solution for many preclinical and clinical applications of Doppler ultrasound.
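For reference on the image-agreement metric reported above, a minimal sketch of a PSNR computation between a reference and a test image, assuming images normalized to a [0, 1] data range (an illustration only, not the paper's exact evaluation pipeline):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

# Toy example: a random image corrupted with mild Gaussian noise
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
print(f"PSNR = {psnr(ref, noisy):.2f} dB")
```

Higher PSNR indicates closer agreement; values in the mid-20s dB, as in the study, correspond to a moderate pixelwise match.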
Collapse
|
39
|
Wang S, Zhang Z, Yao M, Deng Z, Peng J, Zhong J. Contrast-enhanced, single-shot LED array microscopy based on Fourier ptychographic algorithm and deep learning. J Microsc 2023; 292:19-26. [PMID: 37606467 DOI: 10.1111/jmi.13218] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Revised: 08/05/2023] [Accepted: 08/16/2023] [Indexed: 08/23/2023]
Abstract
LED array microscopes have the advantages of miniaturisation and low cost. It has been demonstrated that LED array microscopes outperform Köhler illumination microscopes in some applications. An LED array allows for a large numerical aperture of illumination; a larger illumination numerical aperture brings higher spatial resolution but lower image contrast. There is therefore a tradeoff between resolution and contrast for LED array microscopes. The Fourier ptychographic algorithm can overcome this tradeoff by increasing image contrast without sacrificing spatial resolution. However, the Fourier ptychographic algorithm requires the acquisition of multiple images, which is time-consuming and makes live-sample imaging challenging. To solve this problem, we develop contrast-enhanced, single-shot LED array microscopy based on the Fourier ptychographic algorithm and deep learning. The sample to be imaged is illuminated by all LEDs of the array simultaneously. The captured image is fed to several trained convolutional neural networks to generate the same number of images as required by the Fourier ptychographic algorithm. We experimentally demonstrate that the image contrast of the final reconstruction is remarkably improved in comparison with the captured image. The proposed method can also produce chromatic-aberration-free results, even when an objective without aberration correction is used. We believe the method might provide a low-cost approach to live-sample imaging.
Collapse
Affiliation(s)
- Shengping Wang
- Department of Optoelectronic Engineering, Jinan University, Guangzhou, China
| | - Zibang Zhang
- Department of Optoelectronic Engineering, Jinan University, Guangzhou, China
| | - Manhong Yao
- School of Optoelectronic Engineering, Guangdong Polytechnic Normal University, Guangzhou, China
| | - Zihao Deng
- Department of Optoelectronic Engineering, Jinan University, Guangzhou, China
| | - Junzheng Peng
- Department of Optoelectronic Engineering, Jinan University, Guangzhou, China
| | - Jingang Zhong
- Department of Optoelectronic Engineering, Jinan University, Guangzhou, China
| |
Collapse
|
40
|
Petkidis A, Andriasyan V, Greber UF. Machine learning for cross-scale microscopy of viruses. CELL REPORTS METHODS 2023; 3:100557. [PMID: 37751685 PMCID: PMC10545915 DOI: 10.1016/j.crmeth.2023.100557] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 06/05/2023] [Accepted: 07/20/2023] [Indexed: 09/28/2023]
Abstract
Despite advances in virological sciences and antiviral research, viruses continue to emerge, circulate, and threaten public health. We still lack a comprehensive understanding of how cells and individuals remain susceptible to infectious agents. This deficiency is in part due to the complexity of viruses, including the cell states controlling virus-host interactions. Microscopy samples distinct cellular infection stages in a multi-parametric, time-resolved manner at molecular resolution and is increasingly enhanced by machine learning and deep learning. Here we discuss how state-of-the-art artificial intelligence (AI) augments light and electron microscopy and advances virological research of cells. We describe current procedures for image denoising, object segmentation, tracking, classification, and super-resolution and showcase examples of how AI has improved the acquisition and analyses of microscopy data. The power of AI-enhanced microscopy will continue to help unravel virus infection mechanisms, develop antiviral agents, and improve viral vectors.
Collapse
Affiliation(s)
- Anthony Petkidis
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
| | - Vardan Andriasyan
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
| | - Urs F Greber
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
| |
Collapse
|
41
|
Jang S, Narayanasamy KK, Rahm JV, Saguy A, Kompa J, Dietz MS, Johnsson K, Shechtman Y, Heilemann M. Neural network-assisted single-molecule localization microscopy with a weak-affinity protein tag. BIOPHYSICAL REPORTS 2023; 3:100123. [PMID: 37680382 PMCID: PMC10480660 DOI: 10.1016/j.bpr.2023.100123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 08/16/2023] [Indexed: 09/09/2023]
Abstract
Single-molecule localization microscopy achieves nanometer spatial resolution by localizing single fluorophores separated in space and time. A major challenge of single-molecule localization microscopy is the long acquisition time, leading to low throughput, as well as to a poor temporal resolution that limits its use for visualizing the dynamics of cellular structures in live cells. Another challenge is photobleaching, which reduces information density over time and limits throughput and the available observation time in live-cell applications. To address both challenges, we combine two concepts: first, we integrate the neural network DeepSTORM to predict super-resolution images from high-density imaging data, which increases acquisition speed. Second, we employ a direct protein label, HaloTag7, in combination with exchangeable ligands (xHTLs), for fluorescence labeling. This labeling method bypasses photobleaching by providing a constant signal over time and is compatible with live-cell imaging. The combination of a neural network and a weak-affinity protein label reduced the acquisition time by up to ∼25-fold. Furthermore, we demonstrate live-cell imaging with increased temporal resolution, and capture the dynamics of the endoplasmic reticulum over extended periods without signal loss.
Collapse
Affiliation(s)
- Soohyen Jang
- Institute of Physical and Theoretical Chemistry, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
| | - Kaarjel K. Narayanasamy
- Institute of Physical and Theoretical Chemistry, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Department of Functional Neuroanatomy, Institute for Anatomy and Cell Biology, Heidelberg University, Heidelberg, Germany
| | - Johanna V. Rahm
- Institute of Physical and Theoretical Chemistry, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
| | - Alon Saguy
- Department of Biomedical Engineering, Technion – Israel Institute of Technology, Haifa, Israel
| | - Julian Kompa
- Department of Chemical Biology, Max Planck Institute for Medical Research, Heidelberg, Germany
| | - Marina S. Dietz
- Institute of Physical and Theoretical Chemistry, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
| | - Kai Johnsson
- Department of Chemical Biology, Max Planck Institute for Medical Research, Heidelberg, Germany
| | - Yoav Shechtman
- Department of Biomedical Engineering, Technion – Israel Institute of Technology, Haifa, Israel
| | - Mike Heilemann
- Institute of Physical and Theoretical Chemistry, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
| |
Collapse
|
42
|
Yang B, Liu W, Chen X, Chen G, Zhu X. A novel multi-frame wavelet generative adversarial network for scattering reconstruction of structured illumination microscopy. Phys Med Biol 2023; 68:185016. [PMID: 37619594 DOI: 10.1088/1361-6560/acf3cb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 08/24/2023] [Indexed: 08/26/2023]
Abstract
Objective. Structured illumination microscopy (SIM) is widely used in various fields of life science research. In clinical practice, it has low phototoxicity, fast imaging speed, and requires no special fluorescent markers. However, SIM is still affected by the scattering media of biological tissues, resulting in insufficient resolution of the obtained images, which limits the development of the life sciences. A novel multi-frame wavelet generative adversarial network (MWGAN) is proposed to improve the scattering reconstruction capability of SIM. Approach. MWGAN is based on two components derived from the original image. A generative adversarial network constructed with wavelet transforms is trained to reconstruct complex details in the cell structure. A multi-frame adversarial network is used to obtain inter-frame information and to exploit the complementary information of preceding and subsequent frames to improve the quality of the reconstruction. Results. To demonstrate the robustness of MWGAN, multiple low-quality SIM image datasets are tested. Compared with state-of-the-art methods, the proposed method achieves superior performance in both subjective and objective evaluations. Conclusion. MWGAN is effective for improving the clarity of SIM images. Meanwhile, multi-frame reconstruction of SIM images improves the reconstruction quality of complex regions and allows clearer, dynamic observation of cellular functions.
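As background on the wavelet component of MWGAN, a minimal sketch of a single-level 2D Haar decomposition — the kind of subband split a wavelet-based generator can operate on (an illustration only; the paper's network architecture and wavelet choice may differ):

```python
import numpy as np

def haar2d(img: np.ndarray):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # approximation (low-pass in both axes)
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
```

Processing the detail subbands separately lets a network sharpen high-frequency structure (fine cellular detail) without disturbing the low-frequency approximation.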
Collapse
Affiliation(s)
- Bin Yang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
| |
Collapse
|
43
|
Colson L, Kwon Y, Nam S, Bhandari A, Maya NM, Lu Y, Cho Y. Trends in Single-Molecule Total Internal Reflection Fluorescence Imaging and Their Biological Applications with Lab-on-a-Chip Technology. SENSORS (BASEL, SWITZERLAND) 2023; 23:7691. [PMID: 37765748 PMCID: PMC10537725 DOI: 10.3390/s23187691] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 09/01/2023] [Accepted: 09/03/2023] [Indexed: 09/29/2023]
Abstract
Single-molecule imaging technologies, especially those based on fluorescence, have been developed to probe both the equilibrium and dynamic properties of biomolecules at the single-molecule and quantitative levels. In this review, we provide an overview of the state-of-the-art advancements in single-molecule fluorescence imaging techniques. We systematically explore the advanced implementations of in vitro single-molecule imaging techniques using total internal reflection fluorescence (TIRF) microscopy, which is widely accessible. This includes discussions on sample preparation, passivation techniques, data collection and analysis, and biological applications. Furthermore, we delve into the compatibility of microfluidic technology with single-molecule fluorescence imaging, highlighting its potential benefits and challenges. Finally, we summarize the current challenges and prospects of fluorescence-based single-molecule imaging techniques, paving the way for further advancements in this rapidly evolving field.
Collapse
Affiliation(s)
- Louis Colson
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA; (L.C.); (A.B.); (N.M.M.); (Y.L.)
| | - Youngeun Kwon
- Department of Chemical Engineering, Myongji University, Yongin 17058, Republic of Korea; (Y.K.); (S.N.)
| | - Soobin Nam
- Department of Chemical Engineering, Myongji University, Yongin 17058, Republic of Korea; (Y.K.); (S.N.)
| | - Avinashi Bhandari
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA; (L.C.); (A.B.); (N.M.M.); (Y.L.)
| | - Nolberto Martinez Maya
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA; (L.C.); (A.B.); (N.M.M.); (Y.L.)
| | - Ying Lu
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA; (L.C.); (A.B.); (N.M.M.); (Y.L.)
| | - Yongmin Cho
- Department of Chemical Engineering, Myongji University, Yongin 17058, Republic of Korea; (Y.K.); (S.N.)
| |
Collapse
|
44
|
Li X, Wu Y, Su Y, Rey-Suarez I, Matthaeus C, Updegrove TB, Wei Z, Zhang L, Sasaki H, Li Y, Guo M, Giannini JP, Vishwasrao HD, Chen J, Lee SJJ, Shao L, Liu H, Ramamurthi KS, Taraska JW, Upadhyaya A, La Riviere P, Shroff H. Three-dimensional structured illumination microscopy with enhanced axial resolution. Nat Biotechnol 2023; 41:1307-1319. [PMID: 36702897 PMCID: PMC10497409 DOI: 10.1038/s41587-022-01651-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 12/16/2022] [Indexed: 01/27/2023]
Abstract
The axial resolution of three-dimensional structured illumination microscopy (3D SIM) is limited to ∼300 nm. Here we present two distinct, complementary methods to improve axial resolution in 3D SIM with minimal or no modification to the optical system. We show that placing a mirror directly opposite the sample enables four-beam interference with higher spatial frequency content than 3D SIM illumination, offering near-isotropic imaging with ∼120-nm lateral and 160-nm axial resolution. We also developed a deep learning method achieving ∼120-nm isotropic resolution. This method can be combined with denoising to facilitate volumetric imaging spanning dozens of timepoints. We demonstrate the potential of these advances by imaging a variety of cellular samples, delineating the nanoscale distribution of vimentin and microtubule filaments, observing the relative positions of caveolar coat proteins and lysosomal markers and visualizing cytoskeletal dynamics within T cells in the early stages of immune synapse formation.
Collapse
Affiliation(s)
- Xuesong Li
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA.
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA.
| | - Yicong Wu
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA.
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA.
| | - Yijun Su
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Ivan Rey-Suarez
- Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA
| | - Claudia Matthaeus
- Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
| | - Taylor B Updegrove
- Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Zhuang Wei
- Section on Biophotonics, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
| | - Lixia Zhang
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Hideki Sasaki
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
| | - Yue Li
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Min Guo
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - John P Giannini
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
| | - Harshad D Vishwasrao
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Jiji Chen
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Shih-Jong J Lee
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
| | - Lin Shao
- Department of Neuroscience and Department of Cell Biology, Yale University School of Medicine, New Haven, CT, USA
| | - Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Kumaran S Ramamurthi
- Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Justin W Taraska
- Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
| | - Arpita Upadhyaya
- Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA
- Department of Physics, University of Maryland, College Park, MD, USA
| | - Patrick La Riviere
- Department of Radiology, University of Chicago, Chicago, IL, USA
- MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA
| | - Hari Shroff
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| |
Collapse
|
45
|
Soha SA, Santhireswaran A, Huq S, Casimir-Powell J, Jenkins N, Hodgson GK, Sugiyama M, Antonescu CN, Impellizzeri S, Botelho RJ. Improved imaging and preservation of lysosome dynamics using silver nanoparticle-enhanced fluorescence. Mol Biol Cell 2023; 34:ar96. [PMID: 37405751 PMCID: PMC10551705 DOI: 10.1091/mbc.e22-06-0200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 06/26/2023] [Accepted: 06/27/2023] [Indexed: 07/06/2023] Open
Abstract
The dynamics of living cells can be studied by live-cell fluorescence microscopy. However, this requires the use of excessive light energy to obtain a good signal-to-noise ratio, which can photobleach fluorochromes and, more worrisomely, lead to phototoxicity. Upon light excitation, noble metal nanoparticles such as silver nanoparticles (AgNPs) generate plasmons, which can then amplify excitation in direct proximity of the nanoparticle's surface and couple to the oscillating dipole of nearby radiating fluorophores, modifying their rate of emission and thus enhancing their fluorescence. Here, we show that AgNPs fed to cells to accumulate within lysosomes enhanced the fluorescence of lysosome-targeted Alexa488-conjugated dextran, BODIPY-cholesterol, and DQ-BSA. Moreover, AgNPs increased the fluorescence of GFP fused to the cytosolic tail of LAMP1, showing that metal-enhanced fluorescence can occur across the lysosomal membrane. The inclusion of AgNPs in lysosomes did not disturb lysosomal properties such as lysosomal pH, degradative capacity, autophagy and autophagic flux, and membrane integrity, though AgNPs seemed to increase basal lysosome tubulation. Importantly, by using AgNPs, we could track lysosome motility with reduced laser power without damaging lysosomes or altering their dynamics. Overall, AgNP-enhanced fluorescence may be a useful tool to study the dynamics of the endo-lysosomal pathway while minimizing phototoxicity.
Collapse
Affiliation(s)
- Sumaiya A. Soha
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Araniy Santhireswaran
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Saaimatul Huq
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Jayde Casimir-Powell
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Nicala Jenkins
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Gregory K. Hodgson
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Michael Sugiyama
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Costin N. Antonescu
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Stefania Impellizzeri
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| | - Roberto J. Botelho
- Molecular Science Graduate Program, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Ontario, Canada, M5B 2K3
| |
Collapse
|
46
|
Liao J, Zhang C, Xu X, Zhou L, Yu B, Lin D, Li J, Qu J. Deep-MSIM: Fast Image Reconstruction with Deep Learning in Multifocal Structured Illumination Microscopy. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2300947. [PMID: 37424045 PMCID: PMC10520669 DOI: 10.1002/advs.202300947] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 06/02/2023] [Indexed: 07/11/2023]
Abstract
A fast and precise reconstruction algorithm is desired for multifocal structured illumination microscopy (MSIM) to obtain super-resolution images. This work proposes a deep convolutional neural network (CNN) to learn a direct mapping from raw MSIM images to the super-resolution image, taking advantage of computational advances in deep learning to accelerate reconstruction. The method is validated on diverse biological structures and on in vivo imaging of zebrafish at a depth of 100 µm. The results show that high-quality super-resolution images can be reconstructed in one-third of the runtime consumed by the conventional MSIM method, without compromising spatial resolution. Last but not least, a fourfold reduction in the number of raw images required for reconstruction is achieved using the same network architecture, yet with different training data.
Collapse
Affiliation(s)
- Jianhui Liao
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Chenshuang Zhang
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Xiangcong Xu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Liangliang Zhou
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Bin Yu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Danying Lin
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Jia Li
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Junle Qu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| |
Collapse
|
47
|
Zhu M, Zhang L, Jin L, Chen Y, Yang H, Ji B, Xu Y. Deep learning-enabled fast DNA-PAINT imaging in cells. BIOPHYSICS REPORTS 2023; 9:177-187. [PMID: 38516619 PMCID: PMC10951475 DOI: 10.52601/bpr.2023.230014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2023] [Accepted: 10/07/2023] [Indexed: 03/23/2024] Open
Abstract
DNA-based point accumulation in nanoscale topography (DNA-PAINT) is a well-established technique for single-molecule localization microscopy (SMLM), enabling resolution down to a few nanometers. Traditionally, DNA-PAINT requires tens of thousands of single-molecule fluorescent images to generate a single super-resolution image. This process can be time-consuming, which makes it unfeasible for many researchers. Here, we propose a simplified DNA-PAINT labeling method and a deep learning-enabled fast DNA-PAINT imaging strategy for subcellular structures, such as microtubules. By employing our method, super-resolution reconstruction can be achieved with only one-tenth of the raw data previously needed, along with the option of acquiring the widefield image. As a result, DNA-PAINT imaging is significantly accelerated, making it accessible to a wider range of biological researchers.
Collapse
Affiliation(s)
- Min Zhu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
| | - Luhao Zhang
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
- Binjiang Institute of Zhejiang University, Hangzhou 310053, China
| | - Luhong Jin
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
| | - Yunyue Chen
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
| | - Haixu Yang
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
| | - Baohua Ji
- Department of Engineering Mechanics, Biomechanics and Biomaterials Laboratory, Zhejiang University, Hangzhou 310027, China
| | - Yingke Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou 310027, China
- Binjiang Institute of Zhejiang University, Hangzhou 310053, China
- Department of Endocrinology, Children’s Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children’s Health, Hangzhou 310051, China
| |
Collapse
|
48
|
Ning K, Lu B, Wang X, Zhang X, Nie S, Jiang T, Li A, Fan G, Wang X, Luo Q, Gong H, Yuan J. Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy. Light Sci Appl 2023; 12:204. [PMID: 37640721 PMCID: PMC10462670 DOI: 10.1038/s41377-023-01230-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 07/04/2023] [Accepted: 07/12/2023] [Indexed: 08/31/2023]
Abstract
One intrinsic yet critical issue that has troubled fluorescence microscopy since its introduction is the mismatch between lateral and axial resolution (i.e., resolution anisotropy), which severely degrades the quality, reconstruction, and analysis of 3D volume images. By leveraging this natural anisotropy, we present a deep self-learning method, termed Self-Net, that significantly improves the resolution of axial images by using the lateral images from the same raw dataset as rational targets. By combining unsupervised learning of realistic anisotropic degradation with supervised learning for high-fidelity isotropic recovery, our method effectively suppresses hallucination and substantially enhances image quality compared with previously reported methods. In experiments, we show that Self-Net can reconstruct high-fidelity isotropic 3D images, from the organelle to the tissue level, from raw images acquired on various microscopy platforms, e.g., wide-field, laser-scanning, or super-resolution microscopy. For the first time, Self-Net enables isotropic whole-brain imaging at a voxel resolution of 0.2 × 0.2 × 0.2 μm³, addressing the last-mile problem of data quality in single-neuron morphology visualization and reconstruction with minimal effort and cost. Overall, Self-Net is a promising approach to overcoming the inherent resolution anisotropy of all classes of 3D fluorescence microscopy.
Collapse
Affiliation(s)
- Kefu Ning
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Bolin Lu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Xiaojun Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- School of Biomedical Engineering, Hainan University, Haikou, China
| | - Xiaoyu Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shuo Nie
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tao Jiang
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Guoqing Fan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Xiaofeng Wang
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- School of Biomedical Engineering, Hainan University, Haikou, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| |
Collapse
|
49
|
Hembrow J, Deeks MJ, Richards DM. Automatic extraction of actin networks in plants. PLoS Comput Biol 2023; 19:e1011407. [PMID: 37647341 PMCID: PMC10497154 DOI: 10.1371/journal.pcbi.1011407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 09/12/2023] [Accepted: 08/01/2023] [Indexed: 09/01/2023] Open
Abstract
The actin cytoskeleton is essential in eukaryotes, not least in the plant kingdom, where it plays key roles in cell expansion, cell division, environmental responses, and pathogen defence. Yet the precise structure-function relationships of the actin network in plants remain to be unravelled, including how the network configuration depends on cell type, tissue type, and developmental stage. Part of the problem lies in the difficulty of extracting high-quality, quantitative measures of actin network features from microscopy data. To address this, we have developed DRAGoN, a novel image-analysis algorithm that can automatically extract the actin network across a range of cell types, providing seventeen quantitative measures that describe the network at a local level. Using this algorithm, we studied a number of cases in Arabidopsis thaliana, including several different tissues, a variety of actin-affected mutants, and cells responding to powdery mildew. In many cases we found statistically significant differences in actin network properties. Beyond these results, our algorithm is designed to be easily adaptable to other tissues, mutants, and plants, and so will be a valuable asset for the study and future biological engineering of the actin cytoskeleton in globally important crops.
Collapse
Affiliation(s)
- Jordan Hembrow
- Living Systems Institute and Department of Physics and Astronomy, University of Exeter, Exeter, United Kingdom
| | - Michael J. Deeks
- Department of Biosciences, University of Exeter, Exeter, United Kingdom
| | - David M. Richards
- Living Systems Institute and Department of Physics and Astronomy, University of Exeter, Exeter, United Kingdom
| |
Collapse
|
50
|
Chen X, Xu S, Shabani S, Zhao Y, Fu M, Millis AJ, Fogler MM, Pasupathy AN, Liu M, Basov DN. Machine Learning for Optical Scanning Probe Nanoscopy. Adv Mater 2023; 35:e2109171. [PMID: 36333118 DOI: 10.1002/adma.202109171] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 07/09/2022] [Indexed: 06/16/2023]
Abstract
The ability to perform nanometer-scale optical imaging and spectroscopy is key to deciphering low-energy effects in quantum materials, as well as vibrational fingerprints in planetary and extraterrestrial particles, catalytic substances, and aqueous biological samples. These tasks can be accomplished by scattering-type scanning near-field optical microscopy (s-SNOM), a technique that has recently spread to many research fields and enabled notable discoveries. Herein, it is shown that s-SNOM, together with scanning-probe research in general, can benefit in many ways from artificial-intelligence (AI) and machine-learning (ML) algorithms. Augmented with AI- and ML-enhanced data acquisition and analysis, scanning-probe optical nanoscopy is poised to become more efficient, accurate, and intelligent.
Collapse
Affiliation(s)
- Xinzhong Chen
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Suheng Xu
- Department of Physics, Columbia University, New York, NY, 10027, USA
| | - Sara Shabani
- Department of Physics, Columbia University, New York, NY, 10027, USA
| | - Yueqi Zhao
- Department of Physics, University of California at San Diego, La Jolla, CA, 92093-0319, USA
| | - Matthew Fu
- Department of Physics, Columbia University, New York, NY, 10027, USA
| | - Andrew J Millis
- Department of Physics, Columbia University, New York, NY, 10027, USA
| | - Michael M Fogler
- Department of Physics, University of California at San Diego, La Jolla, CA, 92093-0319, USA
| | - Abhay N Pasupathy
- Department of Physics, Columbia University, New York, NY, 10027, USA
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, NY, 11973, USA
| | - D N Basov
- Department of Physics, Columbia University, New York, NY, 10027, USA
| |
Collapse
|