1
Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2024; 34:955-967. PMID: 38030542. DOI: 10.1016/j.tcb.2023.10.010.
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK
2
Wang Z, Ma W, Yang Z, Kiesewetter DO, Wu Y, Lang L, Zhang G, Nakuchima S, Chen J, Su Y, Han S, Wu LG, Jin AJ, Huang W. A Type I Photosensitizer-Polymersome Boosts Reactive Oxygen Species Generation by Forcing H-Aggregation for Amplifying STING Immunotherapy. J Am Chem Soc 2024; 146:28973-28984. PMID: 39383053. PMCID: PMC11505375. DOI: 10.1021/jacs.4c09831.
Abstract
Activation of the innate immune Stimulator of Interferon Genes (STING) pathway potentiates antitumor immunity. However, delivering STING agonists systemically to tumors presents a formidable challenge, and resistance to STING monotherapy has emerged in clinical trials alongside diminishing natural killer (NK) cell proliferation. Here, we encapsulated the STING agonist diABZI within polymersomes containing a Type I photosensitizer (NBS), creating a nanoagonist (PNBS/diABZI) for highly responsive tumor immunotherapy. This structure promoted H-aggregation and intersystem crossing of NBS, resulting in a ~3-fold amplification of superoxide anion and singlet oxygen generation. The photodynamic therapy directly damaged hypoxic tumor cells and stimulated the proliferation of NK cells and cytotoxic T lymphocytes, thereby sensitizing tumors to STING immunotherapy. A single systemic intravenous administration of PNBS/diABZI eradicated orthotopic mammary tumors in murine models, establishing long-term antitumor immune memory that inhibited tumor recurrence and metastasis and significantly improved long-term tumor-free survival. This work provides a design rule for boosting reactive oxygen species production by promoting intersystem crossing, highlighting the potential of Type I photosensitizer-polymer vehicles for augmenting STING immunotherapy.
Affiliation(s)
- Zhixiong Wang
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Wen Ma
- Strait Laboratory of Flexible Electronics (SLoFE), Fujian Key Laboratory of Flexible Electronics, Strait Institute of Flexible Electronics (Future Technologies), Fujian Normal University, Fuzhou 350117, China
- Zhen Yang
- Strait Laboratory of Flexible Electronics (SLoFE), Fujian Key Laboratory of Flexible Electronics, Strait Institute of Flexible Electronics (Future Technologies), Fujian Normal University, Fuzhou 350117, China
- Dale O Kiesewetter
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Yicong Wu
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Lixin Lang
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Guofeng Zhang
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Sofia Nakuchima
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Jiji Chen
- Advanced Imaging and Microscopy (AIM) Resource, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Yijun Su
- Advanced Imaging and Microscopy (AIM) Resource, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Sue Han
- Synaptic Transmission Section, National Institute of Neurological Disorders and Stroke, Bethesda, Maryland 20892, United States
- Ling-Gang Wu
- Synaptic Transmission Section, National Institute of Neurological Disorders and Stroke, Bethesda, Maryland 20892, United States
- Albert J Jin
- Intramural Research Program, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, Maryland 20892, United States
- Wei Huang
- Strait Laboratory of Flexible Electronics (SLoFE), Fujian Key Laboratory of Flexible Electronics, Strait Institute of Flexible Electronics (Future Technologies), Fujian Normal University, Fuzhou 350117, China
- Frontiers Science Center for Flexible Electronics, Xi'an Institute of Flexible Electronics (IFE), Northwestern Polytechnical University, Xi'an 710072, China
3
Shen B, Lu Y, Guo F, Lin F, Hu R, Rao F, Qu J, Liu L. Overcoming photon and spatiotemporal sparsity in fluorescence lifetime imaging with SparseFLIM. Commun Biol 2024; 7:1359. PMID: 39433929. PMCID: PMC11494201. DOI: 10.1038/s42003-024-07080-x.
Abstract
Fluorescence lifetime imaging microscopy (FLIM) provides quantitative readouts of biochemical microenvironments, holding great promise for biomedical imaging. However, conventional FLIM relies on slow photon counting routines to accumulate sufficient photon statistics, restricting acquisition speed. Here we demonstrate SparseFLIM, an intelligent paradigm for achieving high-fidelity FLIM reconstruction from sparse photon measurements. We develop a coupled bidirectional propagation network that enriches photon counts and recovers hidden spatiotemporal information. Quantitative analysis shows over tenfold photon enrichment, dramatically improving signal-to-noise ratio, lifetime accuracy, and correlation compared to the original sparse data. SparseFLIM enables reconstruction of spatially and temporally undersampled FLIM data at full resolution and channel count. The model exhibits strong generalization across experimental modalities, including multispectral FLIM and in vivo endoscopic FLIM. This work establishes deep learning as a promising approach to enhance fluorescence lifetime imaging and transcend limitations imposed by the inherent codependence between measurement duration and information content.
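The codependence between photon count and lifetime accuracy that SparseFLIM targets can be illustrated with a minimal mono-exponential decay estimator. The simulation below is a hypothetical sketch (not the authors' code, and the 2.5 ns lifetime is an invented value): a photon-starved pixel yields a far noisier lifetime estimate than a photon-rich one.

```python
import numpy as np

def estimate_lifetime(arrival_times):
    # For a mono-exponential decay, the maximum-likelihood lifetime estimate
    # is simply the mean photon arrival time (IRF and window truncation are
    # ignored in this sketch).
    return float(np.mean(arrival_times))

rng = np.random.default_rng(0)
true_tau = 2.5  # ns, hypothetical ground-truth lifetime

dense = rng.exponential(true_tau, size=100_000)  # ample photon budget
sparse = rng.exponential(true_tau, size=50)      # photon-starved pixel

tau_dense = estimate_lifetime(dense)
tau_sparse = estimate_lifetime(sparse)

# The dense estimate clusters tightly around 2.5 ns, while the sparse one
# fluctuates strongly from pixel to pixel; this photon/accuracy codependence
# is the limitation a learned reconstruction aims to relax.
print(f"dense: {tau_dense:.3f} ns, sparse: {tau_sparse:.3f} ns")
```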
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Yuan Lu
- The Sixth People's Hospital of Shenzhen, Shenzhen, China
- Fangyin Guo
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Fangrui Lin
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Feng Rao
- College of Material Science and Engineering, Shenzhen University, Shenzhen, China
- Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
4
Choudhury P, Boruah BR. Neural network-assisted localization of clustered point spread functions in single-molecule localization microscopy. J Microsc 2024. PMID: 39367610. DOI: 10.1111/jmi.13362.
Abstract
Single-molecule localization microscopy (SMLM), which has revolutionized nanoscale imaging, faces challenges in densely labelled samples due to fluorophore clustering, leading to compromised localization accuracy. In this paper, we propose a novel convolutional neural network (CNN)-assisted approach to the problem of localizing clustered fluorophores. Our CNN is trained on a diverse dataset of simulated SMLM images, where it learns to predict point spread function (PSF) locations by generating Gaussian blobs as output. Through rigorous evaluation, we demonstrate significant improvements in PSF localization accuracy, especially in densely labelled samples where traditional methods struggle. In addition, we employ blob detection as a post-processing technique to refine the predicted PSF locations and enhance localization precision. Our study underscores the efficacy of CNNs in addressing clustering challenges in SMLM, thereby advancing spatial resolution and enabling deeper insights into complex biological structures.
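The post-processing idea described above, detecting blobs in a CNN-predicted Gaussian heatmap to recover PSF locations, can be sketched with standard tools. In this illustrative stand-in, a synthetic two-emitter heatmap replaces the CNN output, and the detector is a simple thresholded local-maximum filter with centroid refinement (not the authors' implementation; all coordinates and sizes are invented).

```python
import numpy as np
from scipy.ndimage import maximum_filter, label, center_of_mass

def gaussian_blob(shape, center, sigma):
    # One isotropic Gaussian "blob", the per-fluorophore target a
    # heatmap-regressing CNN would be trained to emit.
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma**2))

# Stand-in for the CNN output: one Gaussian per emitter.
heatmap = gaussian_blob((64, 64), (20, 15), 2.0) + gaussian_blob((64, 64), (40, 45), 2.0)

def detect_blobs(hm, threshold=0.5, size=5):
    # Blob-detection post-processing: keep local maxima above a threshold,
    # then report each surviving peak's position.
    peaks = (hm == maximum_filter(hm, size=size)) & (hm > threshold)
    labeled, n = label(peaks)
    return [tuple(int(round(v)) for v in c)
            for c in center_of_mass(hm * peaks, labeled, range(1, n + 1))]

print(sorted(detect_blobs(heatmap)))  # recovers both emitter positions
```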
Affiliation(s)
- Pranjal Choudhury
- Department of Physics, Indian Institute of Technology Guwahati, Guwahati, Assam, India
- Bosanta R Boruah
- Department of Physics, Indian Institute of Technology Guwahati, Guwahati, Assam, India
5
Landoni JC, Kleele T, Winter J, Stepp W, Manley S. Mitochondrial Structure, Dynamics, and Physiology: Light Microscopy to Disentangle the Network. Annu Rev Cell Dev Biol 2024; 40:219-240. PMID: 38976811. DOI: 10.1146/annurev-cellbio-111822-114733.
Abstract
Mitochondria serve as energetic and signaling hubs of the cell: This function results from the complex interplay between their structure, function, dynamics, interactions, and molecular organization. The ability to observe and quantify these properties often represents the puzzle piece critical for deciphering the mechanisms behind mitochondrial function and dysfunction. Fluorescence microscopy addresses this critical need and has become increasingly powerful with the advent of superresolution methods and context-sensitive fluorescent probes. In this review, we delve into advanced light microscopy methods and analyses for studying mitochondrial ultrastructure, dynamics, and physiology, and highlight notable discoveries they enabled.
Affiliation(s)
- Juan C Landoni
- Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Tatjana Kleele
- Institute of Biochemistry, Swiss Federal Institute of Technology Zürich (ETH), Zürich, Switzerland
- Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Julius Winter
- Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Willi Stepp
- Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Suliana Manley
- Institute of Physics, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
6
Qu L, Zhao S, Huang Y, Ye X, Wang K, Liu Y, Liu X, Mao H, Hu G, Chen W, Guo C, He J, Tan J, Li H, Chen L, Zhao W. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 2024; 21:1895-1908. PMID: 39261639. DOI: 10.1038/s41592-024-02400-9.
Abstract
Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods and circumvents the need for a large training set and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate SN2N will enable improved live-SR imaging and inspire further advances.
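The core self-supervised trick, deriving a Noise2Noise training pair from a single noisy frame, can be illustrated by interleaved sub-sampling: the two halves share (here, by construction, exactly) the same signal but carry independent noise, so one can serve as the training target for the other. This is a generic sketch of the principle with invented data, not SN2N's actual data-generation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
# Smooth "structure" that is constant down columns, plus independent noise:
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # the single noisy frame

inp = noisy[0::2, :]  # even rows -> network input
tgt = noisy[1::2, :]  # odd rows  -> training target

# The two halves see the same underlying signal...
signal_gap = float(np.abs(clean[0::2, :] - clean[1::2, :]).max())
# ...but independent noise realizations, the Noise2Noise condition that lets
# a denoiser be trained without clean ground truth.
noise_corr = float(np.corrcoef((inp - clean[0::2, :]).ravel(),
                               (tgt - clean[1::2, :]).ravel())[0, 1])
print(signal_gap, round(noise_corr, 3))
```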
Affiliation(s)
- Liying Qu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shiqun Zhao
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuanyuan Huang
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianxin Ye
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Kunhao Wang
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuzhen Liu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianming Liu
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Heng Mao
- School of Mathematical Sciences, Peking University, Beijing, China
- Guangwei Hu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Wei Chen
- School of Mechanical Science and Engineering, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, China
- Changliang Guo
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Jiaye He
- National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiubin Tan
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Haoyu Li
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
- Liangyi Chen
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Beijing, China
- Beijing Academy of Artificial Intelligence, Beijing, China
- Weisong Zhao
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
7
Zhu E, Li YR, Margolis S, Wang J, Wang K, Zhang Y, Wang S, Park J, Zheng C, Yang L, Chu A, Zhang Y, Gao L, Hsiai TK. Frontiers in artificial intelligence-directed light-sheet microscopy for uncovering biological phenomena and multi-organ imaging. VIEW 2024; 5:20230087. PMID: 39478956. PMCID: PMC11521201. DOI: 10.1002/viw.20230087.
Abstract
Light-sheet fluorescence microscopy (LSFM) enables fast scanning of biological phenomena with deep photon penetration and minimal phototoxicity. This advancement represents a significant shift in 3-D imaging of large-scale biological tissues and 4-D (space + time) imaging of small live animals. The large data volumes associated with LSFM require efficient image acquisition and analysis using artificial intelligence (AI)/machine learning (ML) algorithms. To this end, AI/ML-directed LSFM is an emerging area for multi-organ imaging and tumor diagnostics. This review presents the development of LSFM and highlights various LSFM configurations and designs for multi-scale imaging. Optical clearing techniques are compared for effective reduction in light scattering and optimal deep-tissue imaging. The review further depicts a diverse range of research and translational applications, from small live organisms to multi-organ imaging to tumor diagnosis. In addition, it addresses AI/ML-directed image reconstruction, including the application of convolutional neural networks (CNNs) and generative adversarial networks (GANs). In summary, the advancements of LSFM have enabled effective and efficient post-imaging reconstruction and data analysis, underscoring LSFM's contribution to advancing fundamental and translational research.
Affiliation(s)
- Enbo Zhu
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Yan-Ruide Li
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Samuel Margolis
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Jing Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Kaidong Wang
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Yaran Zhang
- Department of Bioengineering, UCLA, California, 90095, USA
- Shaolei Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Jongchan Park
- Department of Bioengineering, UCLA, California, 90095, USA
- Charlie Zheng
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Lili Yang
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research, UCLA, California, 90095, USA
- Jonsson Comprehensive Cancer Center, David Geffen School of Medicine, UCLA, California, 90095, USA
- Molecular Biology Institute, UCLA, California, 90095, USA
- Alison Chu
- Division of Neonatology and Developmental Biology, Department of Pediatrics, David Geffen School of Medicine, UCLA, California, 90095, USA
- Yuhua Zhang
- Doheny Eye Institute, Department of Ophthalmology, UCLA, California, 90095, USA
- Liang Gao
- Department of Bioengineering, UCLA, California, 90095, USA
- Tzung K. Hsiai
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
8
Kaderuppan SS, Sharma A, Saifuddin MR, Wong WLE, Woo WL. Θ-Net: A Deep Neural Network Architecture for the Resolution Enhancement of Phase-Modulated Optical Micrographs In Silico. Sensors (Basel) 2024; 24:6248. PMID: 39409287. PMCID: PMC11478931. DOI: 10.3390/s24196248.
Abstract
Optical microscopy is widely regarded as an indispensable tool in healthcare and in manufacturing quality control, although its inability to resolve structures separated by a lateral distance below ~200 nm has spurred the emergence of fluorescence nanoscopy, which in turn carries several caveats (namely phototoxicity, interference from exogenous probes, and cost). In this regard, we present a triplet string of concatenated O-Net ('bead') architectures (termed 'Θ-Net' in the present study) as a cost-efficient and non-invasive approach to enhancing the resolution of non-fluorescent phase-modulated optical microscopy images in silico. The quality of these enhanced resolution (ER) images was compared with that obtained via other popular frameworks (such as ANNA-PALM, BSRGAN, and 3D RCAN), with the Θ-Net-generated ER images depicting an increased level of detail, unlike previous DNNs. In addition, the use of cross-domain (transfer) learning to enhance models trained on differential interference contrast (DIC) datasets, where phasic variations manifest less prominently as amplitude/intensity differences in individual pixels than in phase-contrast microscopy (PCM), resulted in Θ-Net-generated images closely approximating the expected (ground truth) images for both the DIC and PCM datasets. This demonstrates the viability of the current Θ-Net architecture in attaining highly resolved images under poor signal-to-noise ratios while eliminating the need for a priori PSF and OTF information, thereby potentially impacting several engineering fronts (particularly biomedical imaging and sensing, precision engineering, and optical metrology).
Affiliation(s)
- Shiraz S. Kaderuppan
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Anurag Sharma
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Muhammad Ramadan Saifuddin
- Faculty of Science, Agriculture & Engineering (SAgE), Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Wai Leong Eugene Wong
- Engineering Cluster, Singapore Institute of Technology, 10 Dover Drive, Singapore 138683, Singapore
- Wai Lok Woo
- Computer and Information Sciences, Sutherland Building, Northumbria University, Northumberland Road, Newcastle upon Tyne NE1 8ST, UK
9
Azzari L, Vippola M, Nymark S, Ihalainen TO, Mäntylä E. Iterative Immunostaining and NEDD Denoising for Improved Signal-To-Noise Ratio in ExM-LSCM. Bio Protoc 2024; 14:e5072. PMID: 39346757. PMCID: PMC11427331. DOI: 10.21769/bioprotoc.5072.
Abstract
Expansion microscopy (ExM) has significantly transformed the field of super-resolution imaging, emerging as a powerful tool for visualizing complex cellular structures with nanoscale precision. Despite its capabilities, epitope accessibility, labeling density, and the precision of individual molecule detection pose challenges. We recently developed an iterative indirect immunofluorescence (IT-IF) method to increase epitope labeling density, improving both the signal and the total intensity. In our protocol, we apply immunostaining steps iteratively before expansion and exploit signal processing through noise estimation, denoising, and deblurring (NEDD) to aid quantitative image analyses. Herein, we describe the steps of the iterative staining procedure and provide instructions on how to perform NEDD-based signal processing. Overall, IT-IF in ExM-laser scanning confocal microscopy (LSCM) represents a significant advancement in cellular imaging, offering researchers a versatile tool for unraveling the structural complexity of biological systems at the molecular level with an increased signal-to-noise ratio and fluorescence intensity.
Key features:
- Builds upon the method developed by Mäntylä et al. [1] and introduces the IT-IF method and signal-processing platform for several nanoscopy imaging applications.
- Retains the signal-to-noise ratio and significantly enhances the fluorescence intensity of ExM-LSCM data.
- Automatic estimation of noise, signal reconstruction, denoising, and deblurring for increased reliability in image quantification.
- Requires at least seven days to complete.
Affiliation(s)
- Lucio Azzari
- Tampere Microscopy Center (TMC), Tampere University, Tampere, Finland
- Minnamari Vippola
- Tampere Microscopy Center (TMC), Tampere University, Tampere, Finland
- Soile Nymark
- BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Teemu O Ihalainen
- BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Elina Mäntylä
- BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
10
Zhang C, Wang P, He J, Wu Q, Xie S, Li B, Hao X, Wang S, Zhang H, Hao Z, Gao W, Liu Y, Guo J, Hu M, Gao Y. Super-resolution reconstruction improves multishell diffusion: using radiomics to predict adult-type diffuse glioma IDH and grade. Front Oncol 2024; 14:1435204. PMID: 39296980. PMCID: PMC11408129. DOI: 10.3389/fonc.2024.1435204.
Abstract
Objectives: Multishell diffusion scanning is limited by low spatial resolution. We sought to improve the resolution of multishell diffusion images through deep learning-based super-resolution reconstruction (SR) and subsequently develop and validate a prediction model for adult-type diffuse glioma isocitrate dehydrogenase (IDH) status and grade 2/3 tumors.
Materials and methods: A simple diffusion model (DTI) and three advanced diffusion models (DKI, MAP, and NODDI) were constructed based on multishell diffusion scanning. Migration was performed with a generative adversarial network based on deep residual channel attention networks, after which images with 2x and 4x resolution improvements were generated. Radiomic features were used as inputs, and diagnostic models were subsequently constructed via multiple pipelines.
Results: This prospective study included 90 cases (median age, 54.5 years; 39 men) diagnosed with adult-type diffuse glioma. Images with both 2x- and 4x-improved resolution were visually superior to the original images, and the 2x-improved images allowed better predictions than the 4x-improved images (P<.001). A comparison of the areas under the curve among the models constructed via multiple pipelines revealed that the advanced diffusion models did not have greater diagnostic performance than the simple diffusion model (P>.05). The NODDI model constructed with 2x-improved images performed best in predicting IDH status (AUC_validation=0.877; Brier score=0.132). The MAP model constructed with the original images performed best in classifying grade 2 and grade 3 tumors (AUC_validation=0.806; Brier score=0.168).
Conclusion: SR improves the resolution of multishell diffusion images and offers different advantages depending on the goal and the target diffusion model.
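The Brier scores quoted above summarize how well predicted probabilities match observed outcomes. As a reminder of the metric, the snippet below computes it for a handful of hypothetical IDH-status predictions (the numbers are illustrative only, not data from the study).

```python
import numpy as np

def brier_score(probs, labels):
    # Mean squared gap between predicted probability and the 0/1 outcome;
    # 0 is perfect, 0.25 matches an uninformative constant-0.5 predictor.
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.mean((probs - labels) ** 2))

# Hypothetical predicted probabilities of IDH mutation vs. true labels:
probs = [0.9, 0.8, 0.3, 0.2, 0.6]
labels = [1, 1, 0, 0, 1]
print(round(brier_score(probs, labels), 3))  # 0.068
```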
Affiliation(s)
- Chi Zhang
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Peng Wang
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Jinlong He
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Qiong Wu
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Shenghui Xie
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Bo Li
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Xiangcheng Hao
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Shaoyu Wang
- MR Research Collaboration, Siemens Healthineers, Shanghai, China
- Huapeng Zhang
- MR Research Collaboration, Siemens Healthineers, Shanghai, China
- Zhiyue Hao
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Weilin Gao
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Yanhao Liu
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Jiahui Guo
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Mingxue Hu
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
- Yang Gao
- Department of Radiology, Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
Collapse
|
11
|
Zhong L, Li L, Yang G. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images. BMC Bioinformatics 2024; 25:269. [PMID: 39164632 PMCID: PMC11334404 DOI: 10.1186/s12859-024-05894-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2024] [Accepted: 08/07/2024] [Indexed: 08/22/2024] Open
Abstract
BACKGROUND Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood. RESULTS: To address this deficiency, we have developed an assay that benchmarks robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked robustness of ten representative models such as DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without using simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay. CONCLUSIONS Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on the findings, we have made specific recommendations on selection and design of robust models for FM image segmentation.
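A corruption-robustness benchmark of the kind described above can be sketched in a few lines: apply a precisely controlled degradation (here, Gaussian noise of increasing strength, which is only one of the corruption types such an assay would use) and record how a segmenter's intersection-over-union (IoU) degrades. `segment` stands in for any DNN model; this is an illustrative sketch, not the paper's assay:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def corruption_robustness(segment, image, gt, sigmas, seed=0):
    """IoU of `segment`'s output as Gaussian noise of increasing sigma is added."""
    rng = np.random.default_rng(seed)
    scores = []
    for s in sigmas:
        noisy = image + rng.normal(0.0, s, image.shape)
        scores.append(iou(segment(noisy), gt))
    return scores
```

A robust model shows a flat score curve across severities; a brittle one collapses early.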
Collapse
Affiliation(s)
- Liqun Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
| | - Lingrui Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
| | - Ge Yang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
| |
Collapse
|
12
|
Rehman A, Zhovmer A, Sato R, Mukouyama YS, Chen J, Rissone A, Puertollano R, Liu J, Vishwasrao HD, Shroff H, Combs CA, Xue H. Convolutional neural network transformer (CNNT) for fluorescence microscopy image denoising with improved generalization and fast adaptation. Sci Rep 2024; 14:18184. [PMID: 39107416 PMCID: PMC11303381 DOI: 10.1038/s41598-024-68918-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2024] [Accepted: 07/30/2024] [Indexed: 08/10/2024] Open
Abstract
Deep neural networks can improve the quality of fluorescence microscopy images. Previous methods, based on Convolutional Neural Networks (CNNs), require time-consuming training of individual models for each experiment, impairing their applicability and generalization. In this study, we propose a novel imaging-transformer based model, Convolutional Neural Network Transformer (CNNT), that outperforms CNN based networks for image denoising. We train a general CNNT based backbone model from paired high/low signal-to-noise ratio (SNR) image volumes, gathered from a single type of fluorescence microscope, an instant Structured Illumination Microscope. Fast adaptation to new microscopes is achieved by fine-tuning the backbone on only 5-10 image volume pairs per new experiment. Results show that the CNNT backbone and fine-tuning scheme significantly reduce training time and improve image quality, outperforming models trained using only CNNs such as 3D-RCAN and Noise2Fast. We show three examples of the efficacy of this approach in wide-field, two-photon, and confocal fluorescence microscopy.
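Image-quality gains from denoising models such as the one above are typically quantified against the high-SNR member of each image pair; peak signal-to-noise ratio (PSNR) is the standard measure. A minimal sketch (my own illustration, not code from the paper):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB of `test` against a high-SNR `reference`.

    `data_range` defaults to the reference's dynamic range; higher PSNR
    means the denoised output is closer to the clean acquisition.
    """
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Comparing PSNR before and after fine-tuning on a handful of new volume pairs is one way to verify that a backbone has adapted to a new microscope.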
Collapse
Affiliation(s)
- Azaan Rehman
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
| | - Alexander Zhovmer
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration (FDA), Silver Spring, MD, 20903, USA
| | - Ryo Sato
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
| | - Yoh-Suke Mukouyama
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
| | - Jiji Chen
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
| | - Alberto Rissone
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
| | - Rosa Puertollano
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
| | - Jiamin Liu
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
| | | | - Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Christian A Combs
- Light Microscopy Core, National Heart, Lung, and Blood Institute, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD, 20892, USA.
| | - Hui Xue
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
- Health Futures, Microsoft Research, Redmond, Washington, 98052, USA
| |
Collapse
|
13
|
Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024; 21:1558-1567. [PMID: 38609490 DOI: 10.1038/s41592-024-02244-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 03/13/2024] [Indexed: 04/14/2024]
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and has led to significant progress, benefiting from deep learning technology. However, most current task-specific methods have limited generalizability to different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model to address different restoration problems, and show that UniFMIR offers higher image restoration precision, better generalization and increased versatility. Evaluations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples demonstrate that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures and facilitate high-quality imaging. This work has the potential to inspire and trigger new research highlights for fluorescence microscopy-based image restoration.
Collapse
Affiliation(s)
- Chenxi Ma
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Ruian He
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
| | - Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
| |
Collapse
|
14
|
Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. [PMID: 38997593 DOI: 10.1038/s41592-024-02327-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 05/28/2024] [Indexed: 07/14/2024]
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Collapse
Affiliation(s)
- Ali Ertürk
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany.
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany.
- School of Medicine, Koç University, İstanbul, Turkey.
- Deep Piction GmbH, Munich, Germany.
| |
Collapse
|
15
|
Zhang H, Xu Z, Chen N, Ma F, Zheng W, Liu C, Meng J. Simultaneous removal of noise and correction of motion warping in neuron calcium imaging using a pipeline structure of self-supervised deep learning models. BIOMEDICAL OPTICS EXPRESS 2024; 15:4300-4317. [PMID: 39022541 PMCID: PMC11249678 DOI: 10.1364/boe.527919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2024] [Revised: 06/07/2024] [Accepted: 06/10/2024] [Indexed: 07/20/2024]
Abstract
Calcium imaging is susceptible to motion distortions and background noise, particularly when monitoring active animals under low-dose laser irradiation, which unavoidably hinders critical analysis of neural functions. Current research efforts tend to focus on either denoising or dewarping and do not provide effective methods for videos distorted by both noise and motion artifacts simultaneously. We found that when the self-supervised denoising model DeepCAD [Nat. Methods 18, 1359 (2021); doi:10.1038/s41592-021-01225-0] is applied to calcium imaging contaminated by noise and motion warping, it removes the motion artifacts effectively but regenerates noise in the process. To address this issue, we develop a two-level deep-learning (DL) pipeline that dewarps and denoises calcium imaging video sequentially. The pipeline consists of two 3D self-supervised DL models that do not require warp-free, high signal-to-noise ratio (SNR) observations for network optimization. Specifically, a high-frequency enhancement block in the denoising network restores more structural information during denoising, while a hierarchical perception module and a multi-scale attention module in the dewarping network tackle distortions of various sizes. Experiments on seven videos from two-photon and confocal imaging systems demonstrate that our two-level DL pipeline can restore high-clarity neuron images distorted by both motion warping and background noise. Compared with DeepCAD, our denoising model improves image resolution by approximately 30% and signal-to-noise ratio by up to 28%; compared with traditional dewarping and denoising methods, our pipeline recovers more neurons, enhancing signal fidelity and improving data correlation among frames by 35% and 60%, respectively. This work may provide an attractive method for long-term neural activity monitoring in awake animals and may also facilitate functional analysis of neural circuits.
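As a non-learned stand-in for the two self-supervised networks above, the sequential dewarp-then-denoise structure can be illustrated classically: phase-correlation registration for dewarping, followed by a temporal median for denoising. This is purely illustrative and assumes integer-pixel, circular (wrap-around) shifts:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Return the circular shift (dy, dx) such that np.roll(img, (dy, dx),
    axis=(0, 1)) re-aligns `img` with `ref` (integer-pixel accuracy)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # map wrapped peak coordinates to signed shifts
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def dewarp_then_denoise(frames):
    """Register every frame to the first (dewarp), then take a temporal
    median across the aligned stack (denoise)."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.median(np.stack(aligned), axis=0)
```

The paper's DL pipeline replaces both stages with 3D self-supervised networks, which additionally handle the non-rigid, multi-scale distortions this rigid sketch cannot.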
Collapse
Affiliation(s)
- Hongdong Zhang
- School of Computer, Qufu Normal University, Rizhao 276826, China
| | - Zhiqiang Xu
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Ningbo Chen
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Fei Ma
- School of Computer, Qufu Normal University, Rizhao 276826, China
| | - Wei Zheng
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Chengbo Liu
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Jing Meng
- School of Computer, Qufu Normal University, Rizhao 276826, China
| |
Collapse
|
16
|
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. [PMID: 38378991 DOI: 10.1038/s41580-024-00702-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/10/2024] [Indexed: 02/22/2024]
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intra-cellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what is possible to capture in living systems. In this Review, we discuss these computational methods focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Collapse
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
| | - Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland.
| |
Collapse
|
17
|
Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. [PMID: 38755148 PMCID: PMC11099110 DOI: 10.1038/s41467-024-48575-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Accepted: 05/07/2024] [Indexed: 05/18/2024] Open
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and sometimes impractical to acquire owing to the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, which enables multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
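The abstract contrasts learned methods with conventional analytical algorithms; Richardson-Lucy deconvolution is the classic example of the latter. A minimal FFT-based sketch (my own illustration, with a Gaussian PSF and circular boundary conditions as assumptions):

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    """Normalized 2D Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def richardson_lucy(image, psf, n_iter=30):
    """Classical Richardson-Lucy deconvolution (FFT-based, circular boundaries)."""
    # pad the PSF to image size and centre it at the origin for FFT convolution
    kernel = np.zeros_like(image)
    s = psf.shape[0]
    kernel[:s, :s] = psf
    kernel = np.roll(kernel, (-(s // 2), -(s // 2)), axis=(0, 1))
    K = np.fft.fft2(kernel)
    Kc = np.conj(K)
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = np.fft.ifft2(np.fft.fft2(est) * K).real
        ratio = image / np.maximum(blurred, 1e-12)
        # multiplicative update: correlate the ratio with the flipped PSF
        est = est * np.fft.ifft2(np.fft.fft2(ratio) * Kc).real
    return est
```

Unlike this iterative baseline, which needs an explicit PSF and many iterations, ZS-DeconvNet is reported to perform the enhancement instantly at inference and without ground-truth pairs.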
Collapse
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
| | - Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
| | - Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
| | - Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
| | - Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
| | - Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China.
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China.
| | - Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China.
| |
Collapse
|
18
|
Das V, Zhang F, Bower AJ, Li J, Liu T, Aguilera N, Alvisio B, Liu Z, Hammer DX, Tam J. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. COMMUNICATIONS MEDICINE 2024; 4:68. [PMID: 38600290 PMCID: PMC11006674 DOI: 10.1038/s43856-024-00483-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Accepted: 03/13/2024] [Indexed: 04/12/2024] Open
Abstract
BACKGROUND In vivo imaging of the human retina using adaptive optics optical coherence tomography (AO-OCT) has transformed medical imaging by enabling visualization of 3D retinal structures at cellular-scale resolution, including the retinal pigment epithelial (RPE) cells, which are essential for maintaining visual function. However, because noise inherent to the imaging process (e.g., speckle) makes it difficult to visualize RPE cells from a single volume acquisition, a large number of 3D volumes are typically averaged to improve contrast, substantially increasing the acquisition duration and reducing the overall imaging throughput. METHODS Here, we introduce parallel discriminator generative adversarial network (P-GAN), an artificial intelligence (AI) method designed to recover speckle-obscured cellular features from a single AO-OCT volume, circumventing the need to acquire a large number of volumes for averaging. The combination of two parallel discriminators in P-GAN provides additional feedback to the generator to more faithfully recover both local and global cellular structures. Imaging data from 8 eyes of 7 participants were used in this study. RESULTS We show that P-GAN not only improves RPE cell contrast by 3.5-fold, but also shortens the end-to-end time required to visualize RPE cells by 99-fold, thereby enabling large-scale imaging of cells in the living human eye. RPE cell spacing measured across a large set of AI-recovered images from 3 participants was in agreement with expected normative ranges. CONCLUSIONS The results demonstrate the potential of AI-assisted imaging in overcoming a key limitation of RPE imaging and making it more accessible in a routine clinical setting.
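The abstract notes that many volumes are conventionally averaged to suppress speckle, which is exactly the step P-GAN circumvents. The cost of averaging follows from the 1/sqrt(N) law for independent noise, as this small sketch (illustrative only, not the authors' code) shows:

```python
import numpy as np

def average_volumes(stack):
    """Average N repeated acquisitions along the first axis.

    For independent zero-mean noise, the residual noise standard deviation
    falls as 1/sqrt(N), so a 10x contrast gain costs ~100 acquisitions.
    """
    return np.mean(np.asarray(stack, dtype=float), axis=0)
```

This scaling is why classical RPE visualization needs long acquisition sessions, and why recovering cells from a single volume yields the reported 99-fold end-to-end speedup.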
Collapse
Affiliation(s)
- Vineeta Das
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Furu Zhang
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Andrew J Bower
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Joanne Li
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Bruno Alvisio
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
| | - Zhuolin Liu
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Daniel X Hammer
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA.
| |
Collapse
|
19
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
20
|
He H, Cao M, Gao Y, Zheng P, Yan S, Zhong JH, Wang L, Jin D, Ren B. Noise learning of instruments for high-contrast, high-resolution and fast hyperspectral microscopy and nanoscopy. Nat Commun 2024; 15:754. [PMID: 38272927 PMCID: PMC10810791 DOI: 10.1038/s41467-024-44864-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 01/05/2024] [Indexed: 01/27/2024] Open
Abstract
The low scattering efficiency of Raman scattering makes it challenging to simultaneously achieve good signal-to-noise ratio (SNR), high imaging speed, and adequate spatial and spectral resolutions. Here, we report a noise learning (NL) approach that estimates the intrinsic noise distribution of each instrument by statistically learning the noise in the pixel-spatial frequency domain. The estimated noise is then removed from the noisy spectra. This enhances the SNR by ca. 10-fold and suppresses the mean-square error by almost 150-fold. NL allows us to improve the positioning accuracy and spatial resolution and largely eliminates the impact of thermal drift on tip-enhanced Raman spectroscopic nanoimaging. NL is also applicable to enhance SNR in fluorescence and photoluminescence imaging. Our method manages the ground truth spectra and the instrumental noise simultaneously within the training dataset, which bypasses the tedious labelling of huge datasets required in conventional deep learning, potentially shifting deep learning from sample-dependent to instrument-dependent.
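As a crude stand-in for the learned noise model above, the idea of separating smooth spectral bands from a broadband noise floor in the frequency domain can be sketched as a simple Fourier low-pass. This is illustrative only: the NL method learns each instrument's actual noise distribution, whereas this sketch assumes white noise and smooth bands:

```python
import numpy as np

def denoise_spectrum(spectrum, keep_fraction=0.05):
    """Suppress broadband noise in a 1-D spectrum.

    Keeps only the low-frequency Fourier components that carry the smooth
    spectral bands and zeros the high-frequency noise floor.
    """
    F = np.fft.rfft(spectrum)
    cutoff = max(1, int(keep_fraction * len(F)))
    F[cutoff:] = 0.0  # discard the flat white-noise tail
    return np.fft.irfft(F, n=len(spectrum))
```

A fixed cutoff inevitably trades band sharpness against noise rejection; learning the instrument's noise statistics, as NL does, avoids having to pick that threshold by hand.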
Collapse
Affiliation(s)
- Hao He
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
| | - Maofeng Cao
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
| | - Yun Gao
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
| | - Peng Zheng
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China
| | - Sen Yan
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
| | - Jin-Hui Zhong
- Department of Materials Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China.
| | - Lei Wang
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, 361005, China.
| | - Dayong Jin
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Institute for Biomedical Materials & Devices (IBMD), University of Technology Sydney, Sydney, NSW, 2007, Australia
| | - Bin Ren
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), The MOE Key Laboratory of Spectrochemical Analysis and Instrumentation, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China.
- Tan Kah Kee Innovation Laboratory, Xiamen, 361104, China.
| |
Collapse
|
21
|
Wang J, Zhao X, Wang Y, Li D. Quantitative real-time phase microscopy for extended depth-of-field imaging based on the 3D single-shot differential phase contrast (ssDPC) imaging method. OPTICS EXPRESS 2024; 32:2081-2096. [PMID: 38297745 DOI: 10.1364/oe.512285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 12/21/2023] [Indexed: 02/02/2024]
Abstract
Optical diffraction tomography (ODT) is a promising label-free imaging method capable of quantitatively measuring the three-dimensional (3D) refractive index distribution of transparent samples. In recent years, partially coherent ODT (PC-ODT) has attracted increasing attention due to its system simplicity and absence of laser speckle noise. Quantitative phase imaging (QPI) technologies, represented by Fourier ptychographic microscopy (FPM), differential phase contrast (DPC) imaging, and intensity diffraction tomography (IDT), need to collect several or even hundreds of intensity images, which usually introduces motion artifacts when imaging fast-moving targets and degrades image quality. Hence, a quantitative real-time phase microscopy (qRPM) method for extended depth-of-field (DOF) imaging, based on a 3D single-shot differential phase contrast (ssDPC) imaging scheme, is proposed in this study. qRPM incorporates a microlens array (MLA) to simultaneously collect spatial and angular information. In the subsequent optical information processing, a deconvolution method retrieves intensity stacks under different illumination angles from a raw light-field image; importing the obtained intensity stack into the 3D DPC imaging model finally yields the 3D refractive index distribution. The captured four-dimensional light-field information enables the reconstruction of 3D information in a single snapshot and extends the DOF of qRPM. The imaging capability of the proposed qRPM system is experimentally verified on different samples, achieving single-exposure 3D label-free imaging with a DOF extended to 160 µm, nearly 30 times that of a conventional microscope system.
Collapse
|
22
|
Sonneck J, Zhou Y, Chen J. MMV_Im2Im: an open-source microscopy machine vision toolbox for image-to-image transformation. Gigascience 2024; 13:giad120. [PMID: 38280188 PMCID: PMC10821710 DOI: 10.1093/gigascience/giad120] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 09/30/2023] [Accepted: 12/28/2023] [Indexed: 01/29/2024] Open
Abstract
Over the past decade, deep learning (DL) research in computer vision has been growing rapidly, with many advances in DL-based image analysis methods for biomedical problems. In this work, we introduce MMV_Im2Im, a new open-source Python package for image-to-image transformation in bioimaging applications. MMV_Im2Im is designed with a generic image-to-image transformation framework that can be used for a wide range of tasks, including semantic segmentation, instance segmentation, image restoration, and image generation. Our implementation takes advantage of state-of-the-art machine learning engineering techniques, allowing researchers to focus on their research without worrying about engineering details. We demonstrate the effectiveness of MMV_Im2Im on more than 10 different biomedical problems, showcasing its generality and broad applicability. For computational biomedical researchers, MMV_Im2Im provides a starting point for developing new biomedical image analysis or machine learning algorithms: they can either reuse the code in this package or fork and extend it to facilitate the development of new methods. Experimental biomedical researchers can benefit from this work by gaining a comprehensive view of the image-to-image transformation concept through diversified examples and use cases. We hope this work inspires the community to explore how DL-based image-to-image transformation can be integrated into the assay development process, enabling new biomedical studies that cannot be done with traditional experimental assays alone. To help researchers get started, we have provided source code, documentation, and tutorials for MMV_Im2Im at [https://github.com/MMV-Lab/mmv_im2im] under the MIT license.
Collapse
Affiliation(s)
- Justin Sonneck
- Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
- Faculty of Computer Science, Ruhr-University Bochum, Universitätsstraße 150, Bochum 44801, Germany
| | - Yu Zhou
- Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
| | - Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
| |
Collapse
|
23
|
Xypakis E, de Turris V, Gala F, Ruocco G, Leonetti M. Physics-informed deep neural network for image denoising. OPTICS EXPRESS 2023; 31:43838-43849. [PMID: 38178470 DOI: 10.1364/oe.504606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/14/2023] [Indexed: 01/06/2024]
Abstract
Image-enhancement deep neural networks (DNNs) can improve the signal-to-noise ratio or resolution of optically collected visual information. The literature reports a variety of approaches with varying effectiveness. All of these algorithms rely on an arbitrary normalization of the data (the pixels' count rate), making their performance strongly affected by dataset- or user-specific data pre-manipulation. We developed a DNN algorithm capable of enhancing image signal-to-noise ratio beyond previous algorithms. Our model stems from the nature of the photon-detection process, which is characterized by inherently Poissonian statistics. Our algorithm is thus driven by distances between probability distributions rather than by the count rate alone, producing high-performance results, especially on high-dynamic-range images. Moreover, it does not require any arbitrary image renormalization other than the transformation of the camera's count rate into photon numbers.
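A training objective grounded in Poissonian photon statistics, of the kind the abstract describes, can be sketched as a Poisson negative log-likelihood between predicted photon rates and observed counts. This is a generic illustration with names of my own choosing, not the authors' exact loss.

```python
import numpy as np

def poisson_nll(predicted_rate, observed_counts, eps=1e-8):
    """Negative log-likelihood of photon counts k under a Poisson model
    with rate r, dropping the constant log(k!) term: sum(r - k*log(r))."""
    rate = np.clip(predicted_rate, eps, None)
    return float(np.sum(rate - observed_counts * np.log(rate)))

# The loss is minimised when the predicted rate matches the observed counts,
# so it penalises deviations relative to the photon statistics rather than
# relying on an arbitrary count-rate normalization.
counts = np.array([3.0, 7.0, 1.0])
loss_at_truth = poisson_nll(counts, counts)
loss_off = poisson_nll(counts + 1.0, counts)
```

Since d/dr (r - k*log r) = 1 - k/r vanishes at r = k, `loss_at_truth` is strictly smaller than `loss_off`.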
Collapse
|
24
|
Yi C, Zhu L, Sun J, Wang Z, Zhang M, Zhong F, Yan L, Tang J, Huang L, Zhang YH, Li D, Fei P. Video-rate 3D imaging of living cells using Fourier view-channel-depth light field microscopy. Commun Biol 2023; 6:1259. [PMID: 38086994 PMCID: PMC10716377 DOI: 10.1038/s42003-023-05636-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 11/27/2023] [Indexed: 12/18/2023] Open
Abstract
Interrogation of subcellular biological dynamics in a living cell often requires noninvasive imaging of the fragile cell with high spatiotemporal resolution across all three dimensions. This poses major challenges to modern fluorescence microscopy, because the limited photon budget of a live-cell imaging task forces conventional approaches to compromise among spatial resolution, volumetric imaging speed, and phototoxicity. Here, we incorporate a two-stage view-channel-depth (VCD) deep-learning reconstruction strategy into a Fourier light-field microscope based on a diffractive optical element to realize fast 3D super-resolution reconstructions of intracellular dynamics from single diffraction-limited 2D light-field measurements. This VCD-enabled Fourier light-field imaging approach (F-VCD) achieves video-rate (50 volumes per second) 3D imaging of intracellular dynamics at a high spatiotemporal resolution of ~180 nm × 180 nm × 400 nm with strong noise resistance, allowing light-field images with a signal-to-noise ratio (SNR) as low as -1.62 dB to be well reconstructed. With this approach, we successfully demonstrate 4D imaging of intracellular organelle dynamics, e.g., mitochondrial fission and fusion, across ~5000 time points of observation.
Collapse
Affiliation(s)
- Chengqiang Yi
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Lanxin Zhu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jiahao Sun
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Zhaofei Wang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Meng Zhang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Fenghe Zhong
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Luxin Yan
- State Education Commission Key Laboratory for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Jiang Tang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Liang Huang
- Department of Hematology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430030, China
| | - Yu-Hui Zhang
- Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Dongyu Li
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| |
Collapse
|
25
|
Li X, Hu X, Chen X, Fan J, Zhao Z, Wu J, Wang H, Dai Q. Spatial redundancy transformer for self-supervised fluorescence image denoising. NATURE COMPUTATIONAL SCIENCE 2023; 3:1067-1080. [PMID: 38177722 PMCID: PMC10766531 DOI: 10.1038/s43588-023-00568-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 11/07/2023] [Indexed: 01/06/2024]
Abstract
Fluorescence imaging with high signal-to-noise ratios has become the foundation of accurate visualization and analysis of biological phenomena. However, the inevitable noise poses a formidable challenge to imaging sensitivity. Here we provide the spatial redundancy denoising transformer (SRDTrans) to remove noise from fluorescence images in a self-supervised manner. First, a sampling strategy based on spatial redundancy is proposed to extract adjacent orthogonal training pairs, which eliminates the dependence on high imaging speed. Second, we designed a lightweight spatiotemporal transformer architecture to capture long-range dependencies and high-resolution features at low computational cost. SRDTrans can restore high-frequency information without producing oversmoothed structures and distorted fluorescence traces. Finally, we demonstrate the state-of-the-art denoising performance of SRDTrans on single-molecule localization microscopy and two-photon volumetric calcium imaging. SRDTrans makes no assumptions about the imaging process or the sample and can thus be easily extended to various imaging modalities and biological applications.
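The spatial-redundancy idea behind such self-supervised training pairs can be illustrated with a toy NumPy sketch: because signal is locally redundant while noise is independent per pixel, two sub-images built from neighbouring pixels of one noisy frame can supervise each other. This is in the spirit of neighbour-subsampling schemes, not the authors' exact sampler.

```python
import numpy as np

def adjacent_pairs(frame):
    """Split one noisy frame into two half-resolution sub-images whose
    pixels are spatial neighbours; one sub-image can then serve as a
    (noisy) training target for the other."""
    a = frame[0::2, 0::2]  # even rows, even columns
    b = frame[0::2, 1::2]  # horizontal neighbours of `a`
    return a, b

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
inp, tgt = adjacent_pairs(noisy)  # two 32x32 sub-images, noise-independent
```

A denoiser trained to map `inp` to `tgt` cannot reproduce the independent noise in the target, so in expectation it learns the shared (clean) signal, without any ground-truth images.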
Collapse
Affiliation(s)
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Xiaowan Hu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
| | - Xingye Chen
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Research Institute for Frontier Science, Beihang University, Beijing, China
| | - Jiaqi Fan
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Department of Electronic Engineering, Tsinghua University, Beijing, China
| | - Zhifeng Zhao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| | - Haoqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
- The Shenzhen Institute of Future Media Technology, Shenzhen, China.
| | - Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| |
Collapse
|
26
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool that enables scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (e.g., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past few years, the development of bioimage analysis tools, including deep learning, has been changing how we perform live imaging. Here we briefly cover important computational methods that aid live imaging and carry out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time-series analysis. We also cover recent advances in self-driving microscopy.
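Drift correction, the first task the review lists, is classically done by estimating the rigid translation between consecutive frames via phase correlation. A minimal NumPy sketch of that standard technique (my own implementation, not tied to any tool covered in the review):

```python
import numpy as np

def drift_shift(ref, moved):
    """Estimate the integer (dy, dx) shift that, applied with np.roll to
    `moved`, re-aligns it with `ref`, via phase correlation (peak of the
    inverse FFT of the normalized cross-power spectrum)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
frame = rng.random((64, 64))
drifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
correction = drift_shift(frame, drifted)  # (-3, 5) undoes the (3, -5) drift
```

Applying `np.roll(drifted, correction, axis=(0, 1))` recovers the reference frame; in practice sub-pixel refinements of this estimate are common.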
Collapse
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi, University, 20520 Turku, Finland
| | | | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi, University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520, Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI- 20520 Turku, Finland.
| |
Collapse
|
27
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. LASER & PHOTONICS REVIEWS 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state of the art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, in which diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information-science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms and to create an opening for the series of articles in this field.
Collapse
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
Collapse
|
28
|
Zhou H, Li Y, Chen B, Yang H, Zou M, Wen W, Ma Y, Chen M. Registration-free 3D super-resolution generative deep-learning network for fluorescence microscopy imaging. OPTICS LETTERS 2023; 48:6300-6303. [PMID: 38039252 DOI: 10.1364/ol.503238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Accepted: 11/02/2023] [Indexed: 12/03/2023]
Abstract
Volumetric fluorescence microscopy demands high-resolution (HR) imaging, which comes at the cost of sophisticated imaging hardware. Image super-resolution (SR) methods offer an effective way to recover HR images from low-resolution (LR) images. Nevertheless, these methods require pixel-level registered LR and HR images, posing a challenge for accurate image registration. To address these issues, we propose a novel registration-free image SR method that conducts SR training and prediction directly on unregistered LR and HR volumetric neuronal images. The network is built on the CycleGAN framework and an attention-based 3D U-Net. We evaluated our method on LR (5×/0.16-NA) and HR (20×/1.0-NA) fluorescence volumetric neuronal images collected by light-sheet microscopy. Compared with other super-resolution methods, our approach achieved the best reconstruction results. Our method shows promise for wide application in neuronal image super-resolution.
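The CycleGAN framework the abstract builds on trains two unpaired translators under a cycle-consistency constraint, which is what removes the need for registered LR/HR pairs. A toy NumPy sketch of that constraint, with placeholder "generators" of my own invention rather than the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss for unpaired translation: mapping LR->HR->LR (and
    HR->LR->HR) should reconstruct the original volume, so no pixel-level
    registered pairs are needed during training."""
    return float(np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y)))

# Toy stand-in generators: LR->HR doubles intensity, HR->LR halves it,
# so the two mappings are exact inverses and the cycle loss vanishes.
G = lambda lr: 2.0 * lr
F = lambda hr: 0.5 * hr
x = np.full((4, 4), 3.0)  # stand-in low-resolution patch
y = np.full((4, 4), 6.0)  # stand-in high-resolution patch
loss = cycle_consistency_loss(G, F, x, y)
```

In the full method this loss is combined with adversarial losses on each domain; the cycle term alone is what ties the two unregistered image domains together.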
Collapse
|
29
|
Vora N, Polleys CM, Sakellariou F, Georgalis G, Thieu HT, Genega EM, Jahanseir N, Patra A, Miller E, Georgakoudi I. Restoration of metabolic functional metrics from label-free, two-photon human tissue images using multiscale deep-learning-based denoising algorithms. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:126006. [PMID: 38144697 PMCID: PMC10742979 DOI: 10.1117/1.jbo.28.12.126006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 10/23/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Significance: Label-free, two-photon excited fluorescence (TPEF) imaging captures morphological and functional metabolic tissue changes and enables enhanced understanding of numerous diseases. However, noise and other artifacts present in these images severely complicate the extraction of biologically useful information.
Aim: We aim to employ deep neural architectures in the synthesis of a multiscale denoising algorithm optimized for restoring metrics of metabolic activity from low-signal-to-noise-ratio (SNR) TPEF images.
Approach: TPEF images of reduced nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) and flavoproteins (FAD) from freshly excised human cervical tissues are used to assess the impact of various denoising models, preprocessing methods, and data on metrics of image quality and on the recovery of six metrics of metabolic function relative to ground-truth images.
Results: Optimized recovery of the redox ratio and mitochondrial organization is achieved using a novel algorithm based on deep denoising in the wavelet transform domain. This algorithm also leads to significant improvements in peak SNR (PSNR) and structural similarity index measure (SSIM) for all images. Interestingly, other models yield even higher PSNR and SSIM improvements, but they are not optimal for recovery of metabolic function metrics.
Conclusions: Denoising algorithms can recover diagnostically useful information from low-SNR label-free TPEF images and will be useful for the clinical translation of such imaging.
Affiliation(s)
- Nilay Vora: Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
- Christopher M. Polleys: Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
- Georgios Georgalis: Tufts University, Data Intensive Studies Center, Medford, Massachusetts, United States
- Hong-Thao Thieu: Tufts University School of Medicine, Tufts Medical Center, Department of Obstetrics and Gynecology, Boston, Massachusetts, United States
- Elizabeth M. Genega: Tufts University School of Medicine, Tufts Medical Center, Department of Pathology and Laboratory Medicine, Boston, Massachusetts, United States
- Narges Jahanseir: Tufts University School of Medicine, Tufts Medical Center, Department of Pathology and Laboratory Medicine, Boston, Massachusetts, United States
- Abani Patra: Tufts University, Data Intensive Studies Center, Medford, Massachusetts, United States; Tufts University, Department of Mathematics, Medford, Massachusetts, United States
- Eric Miller: Tufts University, Department of Electrical and Computer Engineering, Medford, Massachusetts, United States; Tufts University, Tufts Institute for Artificial Intelligence, Medford, Massachusetts, United States
- Irene Georgakoudi: Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
30
Balasubramanian H, Hobson CM, Chew TL, Aaron JS. Imagining the future of optical microscopy: everything, everywhere, all at once. Commun Biol 2023; 6:1096. [PMID: 37898673] [PMCID: PMC10613274] [DOI: 10.1038/s42003-023-05468-9]
Abstract
The optical microscope has revolutionized biology since at least the 17th Century. Since then, it has progressed from a largely observational tool to a powerful bioanalytical platform. However, realizing its full potential to study live specimens is hindered by a daunting array of technical challenges. Here, we delve into the current state of live imaging to explore the barriers that must be overcome and the possibilities that lie ahead. We venture to envision a future where we can visualize and study everything, everywhere, all at once - from the intricate inner workings of a single cell to the dynamic interplay across entire organisms, and a world where scientists could access the necessary microscopy technologies anywhere.
Affiliation(s)
- Chad M Hobson: Advanced Imaging Center, Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA 20147, USA
- Teng-Leong Chew: Advanced Imaging Center, Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA 20147, USA
- Jesse S Aaron: Advanced Imaging Center, Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA 20147, USA
31
Shi M, Li X, Li M, Si Y. Attention-based generative adversarial networks improve prognostic outcome prediction of cancer from multimodal data. Brief Bioinform 2023; 24:bbad329. [PMID: 37756592] [DOI: 10.1093/bib/bbad329]
Abstract
The prediction of prognostic outcome is critical for the development of efficient cancer therapeutics and potential personalized medicine. However, owing to the heterogeneity and diversity of multimodal cancer data, data integration and feature selection remain challenging for prognostic outcome prediction. We propose CSAM-GAN, a deep learning method that couples a generative adversarial network with sequential channel-spatial attention modules for multimodal data integration and feature selection in cancer prognostic stratification tasks. Sequential channel-spatial attention modules equipped with an encoder-decoder are applied to the input features of multimodal data to accurately refine the selected features, and a discriminator network trains the generator adversarially so that the model accurately describes the complex heterogeneous information of multimodal data. We conducted extensive experiments with various feature selection and classification methods and confirmed that CSAM-GAN with a multilayer deep neural network (DNN) classifier outperformed these baseline methods on two multimodal data sets comprising miRNA expression, mRNA expression, and histopathological image data: lower-grade glioma and kidney renal clear cell carcinoma. CSAM-GAN with the multilayer DNN classifier thus bridges the gap between heterogeneous multimodal data and prognostic outcome prediction.
Affiliation(s)
- Mingguang Shi: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
- Xuefeng Li: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
- Mingna Li: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
- Yichong Si: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
32
Li X, Wu Y, Su Y, Rey-Suarez I, Matthaeus C, Updegrove TB, Wei Z, Zhang L, Sasaki H, Li Y, Guo M, Giannini JP, Vishwasrao HD, Chen J, Lee SJJ, Shao L, Liu H, Ramamurthi KS, Taraska JW, Upadhyaya A, La Riviere P, Shroff H. Three-dimensional structured illumination microscopy with enhanced axial resolution. Nat Biotechnol 2023; 41:1307-1319. [PMID: 36702897] [PMCID: PMC10497409] [DOI: 10.1038/s41587-022-01651-1]
Abstract
The axial resolution of three-dimensional structured illumination microscopy (3D SIM) is limited to ∼300 nm. Here we present two distinct, complementary methods to improve axial resolution in 3D SIM with minimal or no modification to the optical system. We show that placing a mirror directly opposite the sample enables four-beam interference with higher spatial frequency content than 3D SIM illumination, offering near-isotropic imaging with ∼120-nm lateral and 160-nm axial resolution. We also developed a deep learning method achieving ∼120-nm isotropic resolution. This method can be combined with denoising to facilitate volumetric imaging spanning dozens of timepoints. We demonstrate the potential of these advances by imaging a variety of cellular samples, delineating the nanoscale distribution of vimentin and microtubule filaments, observing the relative positions of caveolar coat proteins and lysosomal markers and visualizing cytoskeletal dynamics within T cells in the early stages of immune synapse formation.
Affiliation(s)
- Xuesong Li: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA; Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Yicong Wu: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA; Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Yijun Su: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA; Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA; Leica Microsystems, Inc., Deerfield, IL, USA; SVision, LLC, Bellevue, WA, USA; Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ivan Rey-Suarez: Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA
- Claudia Matthaeus: Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
- Taylor B Updegrove: Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Zhuang Wei: Section on Biophotonics, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Lixia Zhang: Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Hideki Sasaki: Leica Microsystems, Inc., Deerfield, IL, USA; SVision, LLC, Bellevue, WA, USA
- Yue Li: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Min Guo: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA; State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- John P Giannini: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Harshad D Vishwasrao: Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Jiji Chen: Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Shih-Jong J Lee: Leica Microsystems, Inc., Deerfield, IL, USA; SVision, LLC, Bellevue, WA, USA
- Lin Shao: Department of Neuroscience and Department of Cell Biology, Yale University School of Medicine, New Haven, CT, USA
- Huafeng Liu: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Kumaran S Ramamurthi: Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Justin W Taraska: Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
- Arpita Upadhyaya: Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA; Department of Physics, University of Maryland, College Park, MD, USA
- Patrick La Riviere: Department of Radiology, University of Chicago, Chicago, IL, USA; MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA
- Hari Shroff: Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA; Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA; MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA; Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
33
Mandracchia B, Liu W, Hua X, Forghani P, Lee S, Hou J, Nie S, Xu C, Jia S. Optimal sparsity allows reliable system-aware restoration of fluorescence microscopy images. Sci Adv 2023; 9:eadg9245. [PMID: 37647399] [PMCID: PMC10468132] [DOI: 10.1126/sciadv.adg9245]
Abstract
Fluorescence microscopy is one of the most indispensable and informative driving forces for biological research, but the extent of observable biological phenomena is essentially determined by the content and quality of the acquired images. To address the different noise sources that can degrade these images, we introduce an algorithm for multiscale image restoration through optimally sparse representation (MIRO). MIRO is a deterministic framework that models the acquisition process and uses pixelwise noise correction to improve image quality. Our study demonstrates that this approach yields a remarkable restoration of the fluorescence signal for a wide range of microscopy systems, regardless of the detector used (e.g., electron-multiplying charge-coupled device, scientific complementary metal-oxide semiconductor, or photomultiplier tube). MIRO improves current imaging capabilities, enabling fast, low-light optical microscopy, accurate image analysis, and robust machine intelligence when integrated with deep neural networks. This expands the range of biological knowledge that can be obtained from fluorescence microscopy.
Affiliation(s)
- Biagio Mandracchia: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Scientific-Technical Central Units, Instituto de Salud Carlos III (ISCIII), Majadahonda, Spain; ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Wenhao Liu: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Xuanwen Hua: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parvin Forghani: Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Soojung Lee: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Jessica Hou: School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shuyi Nie: School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA; Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Chunhui Xu: Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA; Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
34
Ning K, Lu B, Wang X, Zhang X, Nie S, Jiang T, Li A, Fan G, Wang X, Luo Q, Gong H, Yuan J. Deep self-learning enables fast, high-fidelity isotropic resolution restoration for volumetric fluorescence microscopy. Light Sci Appl 2023; 12:204. [PMID: 37640721] [PMCID: PMC10462670] [DOI: 10.1038/s41377-023-01230-2]
Abstract
One intrinsic yet critical issue that has troubled fluorescence microscopy since its introduction is the mismatch between lateral and axial resolution (i.e., resolution anisotropy), which severely degrades the quality, reconstruction, and analysis of 3D volume images. By leveraging this natural anisotropy, we present a deep self-learning method, termed Self-Net, that significantly improves the resolution of axial images by using the lateral images from the same raw dataset as rational targets. By incorporating unsupervised learning for realistic anisotropic degradation and supervised learning for high-fidelity isotropic recovery, our method effectively suppresses hallucinations, with substantially enhanced image quality compared with previously reported methods. In experiments, we show that Self-Net can reconstruct high-fidelity isotropic 3D images from the organelle to the tissue level from raw images acquired on various microscopy platforms, e.g., wide-field, laser-scanning, or super-resolution microscopy. For the first time, Self-Net enables isotropic whole-brain imaging at a voxel resolution of 0.2 × 0.2 × 0.2 μm3, addressing the last-mile problem of data quality in single-neuron morphology visualization and reconstruction with minimal effort and cost. Overall, Self-Net is a promising approach to overcoming the inherent resolution anisotropy of all classes of 3D fluorescence microscopy.
Affiliation(s)
- Kefu Ning: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Bolin Lu: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Xiaojun Wang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; School of Biomedical Engineering, Hainan University, Haikou, China
- Xiaoyu Zhang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shuo Nie: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tao Jiang: HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Guoqing Fan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiaofeng Wang: HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Qingming Luo: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China; School of Biomedical Engineering, Hainan University, Haikou, China
- Hui Gong: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China
- Jing Yuan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; HUST-Suzhou Institute for Brainsmatics, Suzhou, China
35
Komuro J, Kusumoto D, Hashimoto H, Yuasa S. Machine learning in cardiology: Clinical application and basic research. J Cardiol 2023; 82:128-133. [PMID: 37141938] [DOI: 10.1016/j.jjcc.2023.04.020]
Abstract
Machine learning is a subfield of artificial intelligence. Its quality and versatility have been rapidly improving, and it now plays a critical role in many aspects of social life, a trend also observed in the medical field. Generally, there are three main types of machine learning: supervised, unsupervised, and reinforcement learning; the appropriate type is selected according to the purpose and the nature of the data. In medicine, various types of information are collected and used, and research applying machine learning is becoming increasingly relevant. Many clinical studies, including in the cardiovascular area, are conducted using electronic health and medical records. Machine learning has also been applied in basic research, where it is widely used for data analysis tasks such as clustering in microarray and RNA sequencing analyses, and it is essential for genome and multi-omics analyses. This review summarizes recent advances in the use of machine learning in clinical applications and basic cardiovascular research.
Affiliation(s)
- Jin Komuro: Department of Cardiology, Keio University School of Medicine, Tokyo, Japan
- Dai Kusumoto: Department of Cardiology, Keio University School of Medicine, Tokyo, Japan
- Hisayuki Hashimoto: Department of Cardiology, Keio University School of Medicine, Tokyo, Japan
- Shinsuke Yuasa: Department of Cardiology, Keio University School of Medicine, Tokyo, Japan
36
Mäntylä E, Montonen T, Azzari L, Mattola S, Hannula M, Vihinen-Ranta M, Hyttinen J, Vippola M, Foi A, Nymark S, Ihalainen TO. Iterative immunostaining combined with expansion microscopy and image processing reveals nanoscopic network organization of nuclear lamina. Mol Biol Cell 2023; 34:br13. [PMID: 37342871] [PMCID: PMC10398900] [DOI: 10.1091/mbc.e22-09-0448]
Abstract
Investigation of nuclear lamina architecture relies on super-resolved microscopy. However, epitope accessibility, labeling density, and detection precision of individual molecules pose challenges within the molecularly crowded nucleus. We developed an iterative indirect immunofluorescence (IT-IF) staining approach combined with expansion microscopy (ExM) and structured illumination microscopy to improve super-resolution imaging of subnuclear nanostructures such as lamins. We prove that ExM is applicable to the analysis of highly compacted nuclear multiprotein complexes such as viral capsids and provide technical improvements to the ExM method, including three-dimensional-printed gel-casting equipment. We show that, compared with conventional immunostaining, IT-IF yields a higher signal-to-background ratio and higher mean fluorescence intensity by improving the labeling density. Moreover, we present a signal-processing pipeline for noise estimation, denoising, and deblurring to aid quantitative image analyses, and we provide this platform to the microscopy imaging community. Finally, we show the potential of signal-resolved IT-IF in quantitative super-resolution ExM imaging of the nuclear lamina and reveal nanoscopic details of lamin network organization, a prerequisite for studying intranuclear structural coregulation of cell function and fate.
Affiliation(s)
- Elina Mäntylä: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Toni Montonen: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Lucio Azzari: Tampere Microscopy Center (TMC), Tampere University, 33100 Tampere, Finland
- Salla Mattola: Department of Biological and Environmental Science and Nanoscience Center, University of Jyväskylä, 40014 Jyväskylä, Finland
- Markus Hannula: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Maija Vihinen-Ranta: Department of Biological and Environmental Science and Nanoscience Center, University of Jyväskylä, 40014 Jyväskylä, Finland
- Jari Hyttinen: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Minnamari Vippola: Tampere Microscopy Center (TMC), Tampere University, 33100 Tampere, Finland
- Alessandro Foi: Faculty of Information Technology and Communication Sciences, Computing Sciences, Tampere University, 33100 Tampere, Finland
- Soile Nymark: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland
- Teemu O. Ihalainen: BioMediTech, Faculty of Medicine and Health Technology, Tampere University, 33100 Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, 33100 Tampere, Finland
37
Bouchard C, Wiesner T, Deschênes A, Bilodeau A, Turcotte B, Gagné C, Lavoie-Cardinal F. Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition. Nat Mach Intell 2023; 5:830-844. [PMID: 37615032] [PMCID: PMC10442226] [DOI: 10.1038/s42256-023-00689-3]
Abstract
Super-resolution fluorescence microscopy methods enable the characterization of nanostructures in living and fixed biological tissues. However, they require the adjustment of multiple imaging parameters while attempting to satisfy conflicting objectives, such as maximizing spatial and temporal resolution while minimizing light exposure. To overcome the limitations imposed by these trade-offs, post-acquisition algorithmic approaches have been proposed for resolution enhancement and image-quality improvement. Here we introduce the task-assisted generative adversarial network (TA-GAN), which incorporates an auxiliary task (for example, segmentation, localization) closely related to the observed biological nanostructure characterization. We evaluate how the TA-GAN improves generative accuracy over unassisted methods, using images acquired with different modalities such as confocal, bright-field, stimulated emission depletion and structured illumination microscopy. The TA-GAN is incorporated directly into the acquisition pipeline of the microscope to predict the nanometric content of the field of view without requiring the acquisition of a super-resolved image. This information is used to automatically select the imaging modality and regions of interest, optimizing the acquisition sequence by reducing light exposure. Data-driven microscopy methods like the TA-GAN will enable the observation of dynamic molecular processes with spatial and temporal resolutions that surpass the limits currently imposed by the trade-offs constraining super-resolution microscopy.
Affiliation(s)
- Catherine Bouchard: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; CERVO Brain Research Center, Quebec City, Quebec, Canada
- Theresa Wiesner: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; CERVO Brain Research Center, Quebec City, Quebec, Canada
- Anthony Bilodeau: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; CERVO Brain Research Center, Quebec City, Quebec, Canada
- Benoît Turcotte: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; CERVO Brain Research Center, Quebec City, Quebec, Canada
- Christian Gagné: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; Département de génie électrique et de génie informatique, Université Laval, Quebec City, Quebec, Canada
- Flavie Lavoie-Cardinal: Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada; CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de psychiatrie et de neurosciences, Université Laval, Quebec City, Quebec, Canada
38
Ebrahimi V, Stephan T, Kim J, Carravilla P, Eggeling C, Jakobs S, Han KY. Deep learning enables fast, gentle STED microscopy. Commun Biol 2023; 6:674. [PMID: 37369761] [PMCID: PMC10300082] [DOI: 10.1038/s42003-023-05054-z]
Abstract
STED microscopy is widely used to image subcellular structures with super-resolution. Here, we report that restoring STED images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. Our method allows for efficient and robust restoration of noisy 2D and 3D STED images with multiple targets and facilitates long-term imaging of mitochondrial dynamics.
Affiliation(s)
- Vahid Ebrahimi
- CREOL, The College of Optics and Photonics, University of Central Florida, Orlando, FL, USA
- Till Stephan
- Department of NanoBiophotonics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany
- Department of Neurology, University Medical Center Göttingen, Göttingen, Germany
- Jiah Kim
- Department of Cell and Developmental Biology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Pablo Carravilla
- Leibniz Institute of Photonic Technology e.V., Jena, Germany, member of the Leibniz Centre for Photonics in Infection Research (LPI), Jena, Germany
- Faculty of Physics and Astronomy, Institute of Applied Optics and Biophysics, Friedrich Schiller University Jena, Jena, Germany
- Christian Eggeling
- Leibniz Institute of Photonic Technology e.V., Jena, Germany, member of the Leibniz Centre for Photonics in Infection Research (LPI), Jena, Germany
- Faculty of Physics and Astronomy, Institute of Applied Optics and Biophysics, Friedrich Schiller University Jena, Jena, Germany
- Jena School for Microbial Communication, Friedrich Schiller University Jena, Jena, Germany
- Medical Research Council Human Immunology Unit, Weatherall Institute of Molecular Medicine, University of Oxford, Oxford, UK
- Stefan Jakobs
- Department of NanoBiophotonics, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany
- Department of Neurology, University Medical Center Göttingen, Göttingen, Germany
- Translational Neuroinflammation and Automated Microscopy, Fraunhofer Institute for Translational Medicine and Pharmacology ITMP, Göttingen, Germany
- Kyu Young Han
- CREOL, The College of Optics and Photonics, University of Central Florida, Orlando, FL, USA
|
39
|
Vora N, Polleys CM, Sakellariou F, Georgalis G, Thieu HT, Genega EM, Jahanseir N, Patra A, Miller E, Georgakoudi I. Restoration of metabolic functional metrics from label-free, two-photon cervical tissue images using multiscale deep-learning-based denoising algorithms. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.06.07.544033. [PMID: 37333366 PMCID: PMC10274804 DOI: 10.1101/2023.06.07.544033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/20/2023]
Abstract
Label-free, two-photon imaging captures morphological and functional metabolic tissue changes and enables enhanced understanding of numerous diseases. However, this modality suffers from low signal arising from limitations imposed by the maximum permissible dose of illumination and the need for rapid image acquisition to avoid motion artifacts. Recently, deep learning methods have been developed to facilitate the extraction of quantitative information from such images. Here, we employ deep neural architectures to synthesize a multiscale denoising algorithm optimized for restoring metrics of metabolic activity from low-SNR, two-photon images. Two-photon excited fluorescence (TPEF) images of reduced nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) and flavoproteins (FAD) from freshly excised human cervical tissues are used. We assess the impact of the specific denoising model, loss function, data transformation, and training dataset on established metrics of image restoration, comparing denoised single-frame images with corresponding six-frame averages taken as the ground truth. We further assess the restoration accuracy of six metrics of metabolic function from the denoised images relative to the ground truth images. Using a novel algorithm based on deep denoising in the wavelet transform domain, we demonstrate optimal recovery of metabolic function metrics. Our results highlight the promise of denoising algorithms for recovering diagnostically useful information from low-SNR, label-free, two-photon images and their potential importance in the clinical translation of such imaging.
Affiliation(s)
- Nilay Vora
- Department of Biomedical Engineering, Tufts University, Medford, MA 02155, USA
- Hong-Thao Thieu
- Department of Obstetrics and Gynecology, Tufts University School of Medicine, Tufts Medical Center, Boston, MA 02111, USA
- Elizabeth M. Genega
- Department of Pathology and Laboratory Medicine, Tufts University School of Medicine, Tufts Medical Center, Boston, MA 02111, USA
- Narges Jahanseir
- Department of Pathology and Laboratory Medicine, Tufts University School of Medicine, Tufts Medical Center, Boston, MA 02111, USA
- Abani Patra
- Data Intensive Studies Center, Tufts University, Medford, MA 02155, USA
- Department of Mathematics, Tufts University, Medford, MA 02155, USA
- Eric Miller
- Department of Electrical and Computer Engineering, Tufts University, Medford, MA 02155, USA
- Tufts Institute for Artificial Intelligence, Tufts University, Medford, MA 02155, USA
- Irene Georgakoudi
- Department of Biomedical Engineering, Tufts University, Medford, MA 02155, USA
|
40
|
Tang Y, Wen G, Liang Y, Wang L, Zhang J, Li H. Keyframe-aided resolution enhancement network for dynamic super-resolution structured illumination microscopy. OPTICS LETTERS 2023; 48:2949-2952. [PMID: 37262251 DOI: 10.1364/ol.491899] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 05/01/2023] [Indexed: 06/03/2023]
Abstract
Deep learning has been used to reconstruct super-resolution structured illumination microscopy (SR-SIM) images from wide-field or fewer raw images, effectively reducing photobleaching and phototoxicity. However, the dependability of these methods when observing new structures or samples is still questioned. Here, we propose a dynamic SIM imaging strategy: the full raw images are recorded at the beginning to reconstruct the SR image as a keyframe; afterwards, only wide-field images are recorded. A deep-learning-based reconstruction algorithm, named KFA-RET, is developed to reconstruct the rest of the SR images for the whole dynamic process. With the structure at the keyframe as a reference and the temporal continuity of biological structures, KFA-RET greatly enhances the quality of reconstructed SR images while reducing photobleaching and phototoxicity. Moreover, KFA-RET has a strong transfer capability for observing new structures that were not included during network training.
|
41
|
Niu C, Li M, Fan F, Wu W, Guo X, Lyu Q, Wang G. Noise Suppression With Similarity-Based Self-Supervised Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1590-1602. [PMID: 37015446 PMCID: PMC10288330 DOI: 10.1109/tmi.2022.3231428] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Image denoising is a prerequisite for downstream tasks in many fields. Low-dose and photon-counting computed tomography (CT) denoising can optimize diagnostic performance at a minimized radiation dose. Supervised deep denoising methods are popular but require paired clean or noisy samples that are often unavailable in practice. Limited by the independent noise assumption, current self-supervised denoising methods cannot process the correlated noise found in CT images. Here we propose the first-of-its-kind similarity-based self-supervised deep denoising approach, referred to as Noise2Sim, that works in a nonlocal and nonlinear fashion to suppress not only independent but also correlated noise. Theoretically, Noise2Sim is asymptotically equivalent to supervised learning methods under mild conditions. Experimentally, Noise2Sim recovers intrinsic features from noisy low-dose CT and photon-counting CT images as effectively as or even better than supervised learning methods on practical datasets, visually, quantitatively and statistically. Noise2Sim is a general self-supervised denoising approach and has great potential in diverse applications.
|
42
|
Bharathan NK, Giang W, Hoffman CL, Aaron JS, Khuon S, Chew TL, Preibisch S, Trautman ET, Heinrich L, Bogovic J, Bennett D, Ackerman D, Park W, Petruncio A, Weigel AV, Saalfeld S, Wayne Vogl A, Stahley SN, Kowalczyk AP. Architecture and dynamics of a desmosome-endoplasmic reticulum complex. Nat Cell Biol 2023; 25:823-835. [PMID: 37291267 PMCID: PMC10960982 DOI: 10.1038/s41556-023-01154-4] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 04/24/2023] [Indexed: 06/10/2023]
Abstract
The endoplasmic reticulum (ER) forms a dynamic network that contacts other cellular membranes to regulate stress responses, calcium signalling and lipid transfer. Here, using high-resolution volume electron microscopy, we find that the ER forms a previously unknown association with keratin intermediate filaments and desmosomal cell-cell junctions. Peripheral ER assembles into mirror image-like arrangements at desmosomes and exhibits nanometre proximity to keratin filaments and the desmosome cytoplasmic plaque. ER tubules exhibit stable associations with desmosomes, and perturbation of desmosomes or keratin filaments alters ER organization, mobility and expression of ER stress transcripts. These findings indicate that desmosomes and the keratin cytoskeleton regulate the distribution, function and dynamics of the ER network. Overall, this study reveals a previously unknown subcellular architecture defined by the structural integration of ER tubules with an epithelial intercellular junction.
Affiliation(s)
- Navaneetha Krishnan Bharathan
- Departments of Dermatology and Cellular and Molecular Physiology, Pennsylvania State University College of Medicine, Hershey, PA, USA
- William Giang
- Departments of Dermatology and Cellular and Molecular Physiology, Pennsylvania State University College of Medicine, Hershey, PA, USA
- Coryn L Hoffman
- Departments of Dermatology and Cellular and Molecular Physiology, Pennsylvania State University College of Medicine, Hershey, PA, USA
- Jesse S Aaron
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Satya Khuon
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Teng-Leong Chew
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Stephan Preibisch
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Eric T Trautman
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Larissa Heinrich
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- John Bogovic
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Davis Bennett
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- David Ackerman
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Woohyun Park
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Alyson Petruncio
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Aubrey V Weigel
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Stephan Saalfeld
- Advanced Imaging Center, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- A Wayne Vogl
- Life Sciences Institute and the Department of Cellular and Physiological Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Sara N Stahley
- Departments of Dermatology and Cellular and Molecular Physiology, Pennsylvania State University College of Medicine, Hershey, PA, USA
- Andrew P Kowalczyk
- Departments of Dermatology and Cellular and Molecular Physiology, Pennsylvania State University College of Medicine, Hershey, PA, USA
|
43
|
Chen R, Tang X, Zhao Y, Shen Z, Zhang M, Shen Y, Li T, Chung CHY, Zhang L, Wang J, Cui B, Fei P, Guo Y, Du S, Yao S. Single-frame deep-learning super-resolution microscopy for intracellular dynamics imaging. Nat Commun 2023; 14:2854. [PMID: 37202407 DOI: 10.1038/s41467-023-38452-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 04/28/2023] [Indexed: 05/20/2023] Open
Abstract
Single-molecule localization microscopy (SMLM) can be used to resolve subcellular structures and achieve a tenfold improvement in spatial resolution compared to that obtained by conventional fluorescence microscopy. However, the separation of single-molecule fluorescence events, which requires thousands of frames, dramatically increases the image acquisition time and phototoxicity, impeding the observation of instantaneous intracellular dynamics. Here we develop a deep-learning-based single-frame super-resolution microscopy (SFSRM) method that utilizes a subpixel edge map and a multicomponent optimization strategy to guide the neural network in reconstructing a super-resolution image from a single frame of a diffraction-limited image. Under a tolerable signal density and an affordable signal-to-noise ratio, SFSRM enables high-fidelity live-cell imaging with spatiotemporal resolutions of 30 nm and 10 ms, allowing for prolonged monitoring of subcellular dynamics such as the interplay between mitochondria and the endoplasmic reticulum, vesicle transport along microtubules, and endosome fusion and fission. Moreover, its adaptability to different microscopes and spectra makes it a useful tool for various imaging systems.
Affiliation(s)
- Rong Chen
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiao Tang
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Yuxuan Zhao
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Zeyu Shen
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Meng Zhang
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Yusheng Shen
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Tiantian Li
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Casper Ho Yin Chung
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Lijuan Zhang
- School of Pharmaceutical Sciences, Guizhou University, 550025, Guizhou, China
- Ji Wang
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Binbin Cui
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Peng Fei
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 430074, Wuhan, China
- Yusong Guo
- Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
- Shengwang Du
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Physics, The University of Texas at Dallas, Richardson, TX, 75080, USA
- Shuhuai Yao
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
|
44
|
Lu P, Oetjen KA, Bender DE, Ruzinova MB, Fisher DAC, Shim KG, Pachynski RK, Brennen WN, Oh ST, Link DC, Thorek DLJ. IMC-Denoise: a content aware denoising pipeline to enhance Imaging Mass Cytometry. Nat Commun 2023; 14:1601. [PMID: 36959190 PMCID: PMC10036333 DOI: 10.1038/s41467-023-37123-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 03/02/2023] [Indexed: 03/25/2023] Open
Abstract
Imaging Mass Cytometry (IMC) is an emerging multiplexed imaging technology for analyzing complex microenvironments using more than 40 molecularly-specific channels. However, this modality has unique data processing requirements, particularly for patient tissue specimens where signal-to-noise ratios for markers can be low, despite optimization, and pixel intensity artifacts can deteriorate image quality and downstream analysis. Here we demonstrate an automated content-aware pipeline, IMC-Denoise, to restore IMC images, deploying a differential intensity map-based restoration (DIMR) algorithm for removing hot pixels and a self-supervised deep learning algorithm for shot noise image filtering (DeepSNiF). IMC-Denoise outperforms existing methods for adaptive hot pixel and background noise removal, with significant image quality improvement in modeled data and datasets from multiple pathologies. This includes technically challenging human bone marrow, where we achieve a noise level reduction of 87% for a 5.6-fold higher contrast-to-noise ratio, and more accurate background noise removal with an approximately 2× improved F1 score. Our approach enhances manual gating and automated phenotyping with cell-scale downstream analyses. Verified by manual annotations, spatial and density analyses for targeted cell groups reveal subtle but significant differences in cell populations in diseased bone marrow. We anticipate that IMC-Denoise will provide similar benefits across mass cytometric applications to more deeply characterize complex tissue microenvironments.
Affiliation(s)
- Peng Lu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, USA
- Department of Radiology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, USA
- Program in Quantitative Molecular Therapeutics, Washington University School of Medicine, St. Louis, USA
- Karolyn A Oetjen
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Diane E Bender
- The Bursky Center for Human Immunology and Immunotherapy Programs Immunomonitoring Laboratory, Washington University School of Medicine, St. Louis, USA
- Marianna B Ruzinova
- Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel A C Fisher
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Kevin G Shim
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Russell K Pachynski
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- W Nathaniel Brennen
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center (SKCCC), Johns Hopkins University, Baltimore, USA
- Department of Urology, James Buchanan Brady Urological Institute, Johns Hopkins University School of Medicine, Baltimore, USA
- Stephen T Oh
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- The Bursky Center for Human Immunology and Immunotherapy Programs Immunomonitoring Laboratory, Washington University School of Medicine, St. Louis, USA
- Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel C Link
- Department of Medicine, Washington University School of Medicine, St. Louis, USA
- Department of Pathology & Immunology, Washington University School of Medicine, St. Louis, USA
- Daniel L J Thorek
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, USA
- Department of Radiology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, USA
- Program in Quantitative Molecular Therapeutics, Washington University School of Medicine, St. Louis, USA
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, USA
- Oncologic Imaging Program, Siteman Cancer Center, Washington University School of Medicine, St. Louis, USA
|
45
|
Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. LAB ON A CHIP 2023; 23:1011-1033. [PMID: 36601812 DOI: 10.1039/d2lc00813k] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Propelled by the striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip has been dramatically transformed from a silo inspection tool to a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal windows (from microseconds to hours). On the other hand, the staggering diversity of DL algorithms has revolutionized image processing and analysis at a scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of their latest trends in the context of lab-on-a-chip imaging, also coined optofluidic imaging. More importantly, here we discuss the strengths and caveats of how to adopt, reinvent, and integrate these imaging techniques and DL algorithms in order to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergisms: image formation, image analytics, and intelligent image-guided autonomous lab-on-a-chip. Despite the ongoing challenges, we anticipate that they will represent the next frontiers in lab-on-a-chip imaging that will spearhead new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Kelvin C M Lee
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Bob M F Chung
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Justin S J Wong
- Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Guoan Zheng
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
- Kevin K Tsia
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
|
46
|
MMSRNet: Pathological image super-resolution by multi-task and multi-scale learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|
47
|
Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat Biotechnol 2023; 41:282-292. [PMID: 36163547 PMCID: PMC9931589 DOI: 10.1038/s41587-022-01450-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 39.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 07/29/2022] [Indexed: 11/09/2022]
Abstract
A fundamental challenge in fluorescence microscopy is the photon shot noise arising from the inevitable stochasticity of photon detection. Noise increases measurement uncertainty and limits imaging resolution, speed and sensitivity. To achieve high-sensitivity fluorescence imaging beyond the shot-noise limit, we present DeepCAD-RT, a self-supervised deep learning method for real-time noise suppression. Based on our previous framework DeepCAD, we reduced the number of network parameters by 94%, memory consumption by 27-fold and processing time by a factor of 20, allowing real-time processing on a two-photon microscope. A high imaging signal-to-noise ratio can be acquired with tenfold fewer photons than in standard imaging approaches. We demonstrate the utility of DeepCAD-RT in a series of photon-limited experiments, including in vivo calcium imaging of mice, zebrafish larva and fruit flies, recording of three-dimensional (3D) migration of neutrophils after acute brain injury and imaging of 3D dynamics of cortical ATP release. DeepCAD-RT will facilitate the morphological and functional interrogation of biological dynamics with a minimal photon budget.
|
48
|
Seong D, Lee E, Kim Y, Han S, Lee J, Jeon M, Kim J. Three-dimensional reconstructing undersampled photoacoustic microscopy images using deep learning. PHOTOACOUSTICS 2023; 29:100429. [PMID: 36544533 PMCID: PMC9761854 DOI: 10.1016/j.pacs.2022.100429] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 10/29/2022] [Accepted: 11/28/2022] [Indexed: 05/31/2023]
Abstract
Spatial sampling density and data size are important determinants of the imaging speed of photoacoustic microscopy (PAM). Therefore, undersampling methods that reduce the number of scanning points by increasing the scanning step size are typically adopted to enhance the imaging speed of PAM. Because undersampling sacrifices spatial sampling density, in this study we report a deep-learning-based method that fully reconstructs undersampled 3D PAM data, taking into account the number of data points, the data size, and the three-dimensional (3D) volume data that PAM characteristically provides. Quantitative analyses demonstrate that the proposed method is robust, outperforms interpolation-based reconstruction methods at various undersampling ratios, and enhances PAM system performance with 80-times faster imaging speed and 800-times smaller data size. The proposed method is demonstrated to be the model best suited for use under experimental conditions, effectively shortening the imaging time with a significantly reduced data size for processing.
Affiliation(s)
- Daewoon Seong
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Euimin Lee
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Yoonseok Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Sangyeob Han
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Institute of Biomedical Engineering, School of Medicine, Kyungpook National University, Daegu 41566, Republic of Korea
- Jaeyul Lee
- Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
- Mansik Jeon
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Jeehyun Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
|
49
|
Ebrahimi V, Stephan T, Kim J, Carravilla P, Eggeling C, Jakobs S, Han KY. Deep learning enables fast, gentle STED microscopy. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023. [PMID: 36747618 PMCID: PMC9900922 DOI: 10.1101/2023.01.26.525571] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
STED microscopy is widely used to image subcellular structures with super-resolution. Here, we report that denoising STED images with deep learning can mitigate photobleaching and photodamage by reducing the pixel dwell time by one or two orders of magnitude. Our method allows for efficient and robust restoration of noisy 2D and 3D STED images with multiple targets and facilitates long-term imaging of mitochondrial dynamics.
|
50
|
BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022; 8:99. [PMID: 36529755 PMCID: PMC9760640 DOI: 10.1038/s41522-022-00362-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 11/29/2022] [Indexed: 12/23/2022] Open
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
|