1
Qu L, Zhao S, Huang Y, Ye X, Wang K, Liu Y, Liu X, Mao H, Hu G, Chen W, Guo C, He J, Tan J, Li H, Chen L, Zhao W. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 2024. PMID: 39261639; DOI: 10.1038/s41592-024-02400-9.
Abstract
Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods and circumvents the need for a large training set and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate SN2N will enable improved live-SR imaging and inspire further advances.
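The abstract's core idea, training a denoiser from a single noisy frame by pairing self-generated noisy views, can be illustrated with a short sketch. This is a minimal, hypothetical rendering of the Noise2Noise-style self-supervision described above, not the published SN2N code: the diagonal pair-generation scheme, the tiny placeholder network and the consistency weight are all assumptions.

```python
# Hypothetical sketch of a Noise2Noise-style self-supervised denoiser trained
# from one noisy frame; pair generation and the consistency weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_noisy_pair(img):
    """Split one noisy frame (N,1,H,W) into two half-resolution noisy views
    by averaging complementary diagonal pixels of each 2x2 block."""
    a = (img[..., 0::2, 0::2] + img[..., 1::2, 1::2]) / 2   # one diagonal
    b = (img[..., 0::2, 1::2] + img[..., 1::2, 0::2]) / 2   # the other diagonal
    return a, b

denoiser = nn.Sequential(                     # tiny stand-in for the real network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

noisy = torch.rand(1, 1, 256, 256)            # the single noisy frame
for step in range(200):
    a, b = split_noisy_pair(noisy)
    loss_n2n = F.mse_loss(denoiser(a), b)     # noisy target supervises noisy input
    # self-constraint: the denoised full frame should agree with its own re-split
    fa, fb = split_noisy_pair(denoiser(noisy))
    loss = loss_n2n + 0.1 * F.mse_loss(fa, fb)
    opt.zero_grad(); loss.backward(); opt.step()
```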
Affiliation(s)
- Liying Qu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shiqun Zhao
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuanyuan Huang
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianxin Ye
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Kunhao Wang
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuzhen Liu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianming Liu
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Heng Mao
- School of Mathematical Sciences, Peking University, Beijing, China
- Guangwei Hu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Wei Chen
- School of Mechanical Science and Engineering, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, China
- Changliang Guo
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Jiaye He
- National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiubin Tan
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Haoyu Li
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
- Liangyi Chen
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Beijing, China
- Beijing Academy of Artificial Intelligence, Beijing, China
- Weisong Zhao
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China.
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China.
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China.
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China.
2
Rotem O, Schwartz T, Maor R, Tauber Y, Shapiro MT, Meseguer M, Gilboa D, Seidman DS, Zaritsky A. Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization. Nat Commun 2024; 15:7390. PMID: 39191720; DOI: 10.1038/s41467-024-51136-9.
Abstract
The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" that lacks humanly meaningful explanations for the model's decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret the classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decision of specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
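The counterfactual mechanism described above can be sketched generically: once a disentangled latent space is available, a single latent coordinate is exaggerated and decoded, and the classifier is re-queried. The encoder, decoder, classifier and chosen latent index below are hypothetical stand-ins, not the DISCOVER implementation.

```python
# Illustrative sketch (not the authors' code) of exaggerated counterfactuals
# from a disentangled latent space: one latent coordinate is amplified and decoded.
import torch
import torch.nn as nn

latent_dim = 16
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))

image = torch.rand(1, 1, 64, 64)               # placeholder embryo image
z = encoder(image)

feature_idx = 3                                 # latent assumed to drive the decision
for alpha in (-3.0, -1.0, 0.0, 1.0, 3.0):       # exaggerate in both directions
    z_cf = z.clone()
    z_cf[:, feature_idx] = z[:, feature_idx] + alpha
    counterfactual = decoder(z_cf)
    score = torch.sigmoid(classifier(counterfactual))
    print(f"alpha={alpha:+.1f}  classifier score={score.item():.3f}")
```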
Affiliation(s)
- Oded Rotem
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel
- Ron Maor
- AIVF Ltd., Tel Aviv, 69271, Israel
- Marcos Meseguer
- IVI Foundation, Instituto de Investigación Sanitaria La Fe, Valencia, 46026, Spain
- Department of Reproductive Medicine, IVIRMA Valencia, 46015, Valencia, Spain
- Daniel S Seidman
- AIVF Ltd., Tel Aviv, 69271, Israel
- The Faculty of Medicine, Tel Aviv University, Tel-Aviv, 69978, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel.
3
Rehman A, Zhovmer A, Sato R, Mukouyama YS, Chen J, Rissone A, Puertollano R, Liu J, Vishwasrao HD, Shroff H, Combs CA, Xue H. Convolutional neural network transformer (CNNT) for fluorescence microscopy image denoising with improved generalization and fast adaptation. Sci Rep 2024; 14:18184. PMID: 39107416; PMCID: PMC11303381; DOI: 10.1038/s41598-024-68918-2.
Abstract
Deep neural networks can improve the quality of fluorescence microscopy images. Previous methods, based on Convolutional Neural Networks (CNNs), require time-consuming training of individual models for each experiment, impairing their applicability and generalization. In this study, we propose a novel imaging-transformer-based model, the Convolutional Neural Network Transformer (CNNT), that outperforms CNN-based networks for image denoising. We train a general CNNT-based backbone model from paired high- and low-Signal-to-Noise-Ratio (SNR) image volumes, gathered from a single type of fluorescence microscope, an instant Structured Illumination Microscope. Fast adaptation to new microscopes is achieved by fine-tuning the backbone on only 5-10 image volume pairs per new experiment. Results show that the CNNT backbone and fine-tuning scheme significantly reduce training time and improve image quality, outperforming models trained using only CNNs such as 3D-RCAN and Noise2Fast. We show three examples of the efficacy of this approach in wide-field, two-photon, and confocal fluorescence microscopy.
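The backbone-plus-fine-tuning recipe is the transferable part of this abstract. A minimal sketch, assuming a placeholder 3D CNN in place of the actual CNNT backbone and a hypothetical checkpoint path, is shown below; only a handful of low/high-SNR volume pairs from the new microscope drive the adaptation.

```python
# Minimal sketch of fine-tuning a pretrained denoising backbone on a few volume
# pairs from a new microscope. The backbone and checkpoint path are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                      # stand-in for a pretrained model
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
# backbone.load_state_dict(torch.load("pretrained_backbone.pt"))  # hypothetical checkpoint

# 5-10 low/high SNR volume pairs from the new experiment (random placeholders)
pairs = [(torch.rand(1, 1, 16, 64, 64), torch.rand(1, 1, 16, 64, 64)) for _ in range(6)]

opt = torch.optim.Adam(backbone.parameters(), lr=5e-5)   # small LR for fine-tuning
for epoch in range(30):
    for low, high in pairs:
        loss = F.mse_loss(backbone(low), high)
        opt.zero_grad(); loss.backward(); opt.step()
```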
Affiliation(s)
- Azaan Rehman
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
- Alexander Zhovmer
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration (FDA), Silver Spring, MD, 20903, USA
- Ryo Sato
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
- Yoh-Suke Mukouyama
- Laboratory of Stem Cell and Neurovascular Research, NHLBI, NIH, Bethesda, MD, 20892, USA
- Jiji Chen
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
- Alberto Rissone
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
- Rosa Puertollano
- Laboratory of Protein Trafficking and Organelle Biology, NHLBI, NIH, Bethesda, MD, 20892, USA
- Jiamin Liu
- Advanced Imaging and Microscopy Resource, NIBIB, NIH, Bethesda, MD, 20892, USA
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Christian A Combs
- Light Microscopy Core, National Heart, Lung, and Blood Institute, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD, 20892, USA.
- Hui Xue
- Office of AI Research, National Heart, Lung and Blood Institute (NHLBI), National Institutes of Health (NIH), Bethesda, MD, 20892, USA
- Health Futures, Microsoft Research, Redmond, Washington, 98052, USA
4
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. PMID: 38838549; DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is computational cross-modality image translation where the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.
5
Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024; 21:1558-1567. PMID: 38609490; DOI: 10.1038/s41592-024-02244-3.
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and has led to significant progress, benefiting from deep learning technology. However, most current task-specific methods have limited generalizability to different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model to address different restoration problems, and show that UniFMIR offers higher image restoration precision, better generalization and increased versatility. Demonstrations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples demonstrate that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures and facilitate high-quality imaging. This work has the potential to inspire and trigger new research highlights for fluorescence microscopy-based image restoration.
Affiliation(s)
- Chenxi Ma
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Ruian He
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
6
Cam RM, Villa U, Anastasio MA. Learning a stable approximation of an existing but unknown inverse mapping: application to the half-time circular Radon transform. Inverse Problems 2024; 40:085002. PMID: 38933410; PMCID: PMC11197394; DOI: 10.1088/1361-6420/ad4f0a.
Abstract
Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
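The learned filtered backprojection idea can be sketched schematically: a small network stands in for the unknown filtering operation on the projection data, followed by a fixed backprojection. In the sketch below the backprojection is the adjoint of a random placeholder system matrix rather than the discrete circular Radon adjoint used in the paper, and all sizes are toy values.

```python
# Schematic learned filtered backprojection: a 1D CNN replaces the unknown filter,
# and a placeholder operator adjoint stands in for the backprojection step.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_views, n_bins, n_pix = 64, 64, 32            # toy problem sizes
A = torch.randn(n_views * n_bins, n_pix * n_pix) / n_bins   # placeholder forward operator

filter_cnn = nn.Sequential(                    # learned filtering along the radial bins
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),
)

def learned_fbp(sino):                         # sino: (n_views, n_bins)
    filtered = filter_cnn(sino.unsqueeze(1)).squeeze(1)          # filter each view
    return (A.t() @ filtered.reshape(-1)).reshape(n_pix, n_pix)  # fixed backprojection

# supervised training against reference images (random placeholders)
opt = torch.optim.Adam(filter_cnn.parameters(), lr=1e-3)
for step in range(100):
    target = torch.rand(n_pix, n_pix)
    sino = (A @ target.reshape(-1)).reshape(n_views, n_bins)
    loss = F.mse_loss(learned_fbp(sino), target)
    opt.zero_grad(); loss.backward(); opt.step()
```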
Affiliation(s)
- Refik Mert Cam
- Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Umberto Villa
- Oden Institute for Computational Engineering & Sciences, The University of Texas at Austin, Austin, TX 78712, United States of America
- Mark A Anastasio
- Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Department of Bioengineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
7
Liu J, Gao F, Zhang L, Yang H. A Saturation Artifacts Inpainting Method Based on Two-Stage GAN for Fluorescence Microscope Images. Micromachines 2024; 15:928. PMID: 39064439; PMCID: PMC11279111; DOI: 10.3390/mi15070928.
Abstract
Fluorescence microscopy images of cells contain a large number of morphological features that serve as an unbiased source of quantitative information about cell status, from which researchers can extract quantitative information and study cellular biological phenomena through statistical analysis. Because images are the primary object of phenotypic analysis, their quality strongly influences the research results. Saturation artifacts in an image cause a loss of grayscale information, so the affected pixels no longer reflect the true fluorescence intensity. From the perspective of data post-processing, we propose a two-stage cell image recovery model based on a generative adversarial network to solve the problem of phenotypic feature loss caused by saturation artifacts. The model is capable of restoring large areas of missing phenotypic features. In the experiments, we adopt a progressive restoration strategy to improve the robustness of training and add a contextual attention structure to enhance the stability of the restoration. We hope that deep learning methods can mitigate the effects of saturation artifacts and help reveal how chemical, genetic, and environmental factors affect cell state, providing an effective tool for studying biological variability and improving image quality in analysis.
Affiliation(s)
- Jihong Liu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; (F.G.); (L.Z.)
- Fei Gao
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; (F.G.); (L.Z.)
- Lvheng Zhang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; (F.G.); (L.Z.)
- Haixu Yang
- Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China;
8
Liu ML, Liu YP, Guo XX, Wu ZY, Zhang XT, Roe AW, Hu JM. Orientation selectivity mapping in the visual cortex. Prog Neurobiol 2024; 240:102656. PMID: 39009108; DOI: 10.1016/j.pneurobio.2024.102656.
Abstract
The orientation map is one of the most well-studied functional maps of the visual cortex. However, results in the literature vary in quality: some studies show clear boundaries among different orientation domains, whereas others show blurred, uncertain distinctions. Such unclear imaging results lead to inaccurate depictions of cortical structures, and insufficient consideration in experimental design likewise biases the depicted cortical features. How we accurately define orientation domains affects the entire field of research. In this study, we test how spatial frequency (SF), stimulus size, location, chromaticity, and data processing methods affect the orientation functional maps (including a large area of dorsal V4 and parts of dorsal V1) acquired by intrinsic signal optical imaging. Our results indicate that, for large imaging fields, large grating stimuli with mixed SF components should be considered to acquire the orientation map. A diffusion-model image enhancement based on the difference map could further improve the map quality. In addition, the similar outcomes of achromatic and chromatic gratings indicate two alternative types of afferents from the LGN, pooling in V1 to generate cue-invariant orientation selectivity.
Affiliation(s)
- Mei-Lan Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yi-Peng Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China
- Xin-Xia Guo
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China
- Zhi-Yi Wu
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310010, China
- Xiao-Tong Zhang
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; College of Electrical Engineering, Zhejiang University, Hangzhou 310000, China
- Anna Wang Roe
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; The State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310058, China.
- Jia-Ming Hu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China.
9
Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. PMID: 38997593; DOI: 10.1038/s41592-024-02327-1.
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Affiliation(s)
- Ali Ertürk
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany.
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany.
- School of Medicine, Koç University, İstanbul, Turkey.
- Deep Piction GmbH, Munich, Germany.
10
Cao Y, Xu B, Li B, Fu H. Advanced Design of Soft Robots with Artificial Intelligence. Nano-Micro Lett 2024; 16:214. PMID: 38869734; DOI: 10.1007/s40820-024-01423-3.
Affiliation(s)
- Ying Cao
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China
- Bingang Xu
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China.
- Bin Li
- Bioinspired Engineering and Biomechanics Center, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Hong Fu
- Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong, 999077, People's Republic of China.
11
Perez-Lopez R, Ghaffari Laleh N, Mahmood F, Kather JN. A guide to artificial intelligence for cancer researchers. Nat Rev Cancer 2024; 24:427-441. PMID: 38755439; DOI: 10.1038/s41568-024-00694-7.
Abstract
Artificial intelligence (AI) has been commoditized. It has evolved from a specialty resource to a readily accessible tool for cancer researchers. AI-based tools can boost research productivity in daily workflows, but can also extract hidden information from existing data, thereby enabling new scientific discoveries. Building a basic literacy in these tools is useful for every cancer researcher. Researchers with a traditional biological science focus can use AI-based tools through off-the-shelf software, whereas those who are more computationally inclined can develop their own AI-based software pipelines. In this article, we provide a practical guide for non-computational cancer researchers to understand how AI-based tools can benefit them. We convey general principles of AI for applications in image analysis, natural language processing and drug discovery. In addition, we give examples of how non-computational researchers can get started on the journey to productively use AI in their own work.
Affiliation(s)
- Raquel Perez-Lopez
- Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
- Narmin Ghaffari Laleh
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany.
- Department of Medicine I, University Hospital Dresden, Dresden, Germany.
- Medical Oncology, National Center for Tumour Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany.
12
Aghigh A, Jargot G, Zaouter C, Preston SEJ, Mohammadi MS, Ibrahim H, Del Rincón SV, Patten K, Légaré F. A comparative study of CARE 2D and N2V 2D for tissue-specific denoising in second harmonic generation imaging. J Biophotonics 2024; 17:e202300565. PMID: 38566461; DOI: 10.1002/jbio.202300565.
Abstract
This study explores the application of deep learning in second harmonic generation (SHG) microscopy, a rapidly growing area. It focuses on the impact of glycerol concentration on image noise in SHG microscopy and compares two image restoration techniques: Noise2Void 2D (N2V 2D, no-reference image restoration) and content-aware image restoration (CARE 2D, full-reference image restoration). We demonstrate that N2V 2D effectively restores images affected by high glycerol concentrations. To reduce sample exposure and damage, this study further addresses low-power SHG imaging by reducing the laser power by 70% using deep learning techniques. CARE 2D excels in preserving detailed structures, whereas N2V 2D maintains natural muscle structure. This study highlights the strengths and limitations of these models in specific SHG microscopy applications, offering valuable insights and potential advancements in the field.
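The no-reference (blind-spot) training that distinguishes N2V 2D from CARE 2D can be summarized in a few lines. This is a generic Noise2Void-style sketch with placeholder sizes and masking fraction, not the authors' pipeline.

```python
# Minimal Noise2Void-style blind-spot training sketch: random pixels are replaced
# by nearby values and the loss is evaluated only at those masked positions,
# so no clean reference image is required.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

noisy = torch.rand(8, 1, 128, 128)             # batch of noisy SHG patches
for step in range(100):
    masked = noisy.clone()
    mask = torch.rand_like(noisy) < 0.01       # ~1% blind-spot pixels
    shifted = torch.roll(noisy, shifts=(1, 2), dims=(2, 3))  # neighbour values
    masked[mask] = shifted[mask]               # hide the true pixel from the network
    pred = net(masked)
    loss = ((pred - noisy)[mask] ** 2).mean()  # loss only at the blind spots
    opt.zero_grad(); loss.backward(); opt.step()
```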
Affiliation(s)
- Arash Aghigh
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Gaëtan Jargot
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Charlotte Zaouter
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
- Samuel E J Preston
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
- Melika Saadat Mohammadi
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Heide Ibrahim
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
- Sonia V Del Rincón
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
- Kessen Patten
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
- François Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
13
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. PMID: 38378991; DOI: 10.1038/s41580-024-00702-6.
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intra-cellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what is possible to capture in living systems. In this Review, we discuss these computational methods focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland.
14
Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. PMID: 38755148; PMCID: PMC11099110; DOI: 10.1038/s41467-024-48575-9.
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance, however, demanding abundant high-quality training data, which are laborious and even impractical to acquire due to the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, which enables multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
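One way to make the unsupervised deconvolution objective concrete is sketched below: the network's output is re-blurred with an assumed-known PSF and compared against the noisy input, so no ground truth is needed. The PSF, network, and regularization weight are placeholders; this is not the published ZS-DeconvNet formulation.

```python
# Schematic unsupervised deconvolution objective: predict a sharper image, blur it
# with an assumed PSF, and match the noisy input. All components are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
psf = torch.ones(1, 1, 5, 5) / 25.0            # placeholder PSF (uniform blur)

noisy = torch.rand(1, 1, 256, 256)             # single noisy wide-field frame
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(200):
    estimate = net(noisy)                       # deconvolved estimate
    reblurred = F.conv2d(estimate, psf, padding=2)
    # data-consistency term plus a simple sparsity prior on the estimate
    loss = F.mse_loss(reblurred, noisy) + 1e-3 * estimate.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```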
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
- Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
- Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China.
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China.
15
Hauser SL, Brosig J, Murthy B, Attardo A, Kist AM. Implicit neural representations in light microscopy. Biomed Opt Express 2024; 15:2175-2186. PMID: 38633078; PMCID: PMC11019677; DOI: 10.1364/boe.515517.
Abstract
Three-dimensional stacks acquired with confocal or two-photon microscopy are crucial for studying neuroanatomy. However, high-resolution image stacks acquired at multiple depths are time-consuming and susceptible to photobleaching. In vivo microscopy is further prone to motion artifacts. In this work, we suggest that deep neural networks with sine activation functions encoding implicit neural representations (SIRENs) are suitable for predicting intermediate planes and correcting motion artifacts, addressing the aforementioned shortcomings. We show that we can accurately estimate intermediate planes across multiple micrometers and, fully automatically and without supervision, estimate a motion-corrected, denoised image. We show that noise statistics can be altered by SIRENs but recovered by a downstream denoising neural network, demonstrated exemplarily by the recovery of dendritic spines. We believe that the application of these technologies will facilitate more efficient acquisition and superior post-processing in the future.
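A compact sketch of the SIREN idea referenced above: an MLP with sine activations and the standard SIREN initialization fits a continuous mapping from (x, y, z) coordinates to intensity, which can then be queried at z planes that were never acquired. Network width, depth, and the toy data are illustrative assumptions.

```python
# Minimal SIREN (sine-activation implicit representation) fitted to a toy stack
# and queried at an intermediate z plane.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, w0=30.0, first=False):
        super().__init__()
        self.w0, self.linear = w0, nn.Linear(in_f, out_f)
        bound = 1 / in_f if first else (6 / in_f) ** 0.5 / w0   # SIREN init scheme
        nn.init.uniform_(self.linear.weight, -bound, bound)
    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

siren = nn.Sequential(
    SineLayer(3, 128, first=True), SineLayer(128, 128), SineLayer(128, 128),
    nn.Linear(128, 1),
)

coords = torch.rand(4096, 3) * 2 - 1           # (x, y, z) in [-1, 1] from a toy stack
values = torch.rand(4096, 1)                   # corresponding voxel intensities
opt = torch.optim.Adam(siren.parameters(), lr=1e-4)
for step in range(500):
    loss = ((siren(coords) - values) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# query an intermediate plane that was never imaged, e.g. z = 0.25
xy = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 64),
                                torch.linspace(-1, 1, 64), indexing="ij"), dim=-1)
plane = siren(torch.cat([xy.reshape(-1, 2), torch.full((64 * 64, 1), 0.25)], dim=1))
```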
Affiliation(s)
- Sophie Louise Hauser
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
- Andreas M. Kist
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
16
Chen R, Xu J, Wang B, Ding Y, Abdulla A, Li Y, Jiang L, Ding X. SpiDe-Sr: blind super-resolution network for precise cell segmentation and clustering in spatial proteomics imaging. Nat Commun 2024; 15:2708. PMID: 38548720; PMCID: PMC10978886; DOI: 10.1038/s41467-024-46989-z.
Abstract
Spatial proteomics elucidates cellular biochemical changes at an unprecedented topological level. Imaging mass cytometry (IMC) is a high-dimensional, single-cell-resolution platform for targeted spatial proteomics. However, the precision of subsequent clinical analysis is constrained by imaging noise and resolution. Here, we propose SpiDe-Sr, a super-resolution network embedded with a denoising module for IMC spatial resolution enhancement. SpiDe-Sr effectively resists noise and improves resolution by a factor of 4. We demonstrate SpiDe-Sr on cells, mouse tissues and human tissues, resulting in 18.95%/27.27%/21.16% increases in peak signal-to-noise ratio and 15.95%/31.63%/15.52% increases in cell extraction accuracy, respectively. We further apply SpiDe-Sr to study the tumor microenvironment of a 20-patient clinical breast cancer cohort with 269,556 single cells, and discover that the invasion of Gram-negative bacteria is positively correlated with carcinogenesis markers and negatively correlated with immunological markers. Additionally, SpiDe-Sr is also compatible with fluorescence microscopy imaging, suggesting SpiDe-Sr as an alternative tool for microscopy image super-resolution.
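The architectural pattern named in the abstract, a denoising stage feeding a 4x super-resolution stage, can be sketched with sub-pixel (pixel-shuffle) upsampling. Channel widths and layer counts below are placeholders rather than the published SpiDe-Sr architecture.

```python
# Minimal denoise-then-super-resolve sketch with 4x sub-pixel upsampling.
import torch
import torch.nn as nn

class DenoiseSR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.denoise = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.sr = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                 # rearranges channels into a 4x larger image
        )
    def forward(self, x):
        clean = self.denoise(x)
        return self.sr(clean), clean                # both outputs can be supervised jointly

model = DenoiseSR()
imc_patch = torch.rand(1, 1, 64, 64)                # noisy low-resolution IMC patch
upscaled, denoised = model(imc_patch)               # upscaled: (1, 1, 256, 256)
```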
Grants
- This work was supported by the National Key R&D Program of China (2022YFC2601700, 2022YFF0710202), NSFC Projects (T2122002, 22077079, 81871448), Shanghai Municipal Science and Technology Project (22Z510202478), Shanghai Municipal Education Commission Project (21SG10), Shanghai Jiao Tong University Projects (YG2021ZD19, Agri-X20200101, 2020 SJTU-HUJI) and Shanghai Municipal Health Commission Project (2019CXJQ03). The authors thank AEMD SJTU and the Shanghai Jiao Tong University Laboratory Animal Center for their support.
Affiliation(s)
- Rui Chen
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jiasu Xu
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Boqian Wang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yi Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Aynur Abdulla
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiyang Li
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
- Lai Jiang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xianting Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China.
17
Tabata K, Kawagoe H, Taylor JN, Mochizuki K, Kubo T, Clement JE, Kumamoto Y, Harada Y, Nakamura A, Fujita K, Komatsuzaki T. On-the-fly Raman microscopy guaranteeing the accuracy of discrimination. Proc Natl Acad Sci U S A 2024; 121:e2304866121. PMID: 38483992; PMCID: PMC10962959; DOI: 10.1073/pnas.2304866121.
Abstract
Accelerating measurements for the discrimination of samples, such as classification of cell phenotype, is crucial when faced with significant time and cost constraints. Spontaneous Raman microscopy offers label-free, rich chemical information but suffers from long acquisition times due to extremely small scattering cross-sections. One possible approach to accelerating the measurement is to measure only the necessary parts with a suitable number of illumination points. However, how to design these points during the measurement remains a challenge. To address this, we developed an imaging technique based on reinforcement learning, a branch of machine learning (ML). This ML approach adaptively feeds back an "optimal" illumination pattern during the measurement to detect the existence of specific characteristics of interest, allowing faster measurements while guaranteeing discrimination accuracy. Using a set of Raman images of human follicular thyroid and follicular thyroid carcinoma cells, we showed that our technique requires a 3,333- to 31,683-fold smaller number of illuminations for discriminating the phenotypes than raster scanning. To quantitatively evaluate the number of illuminations required for a requisite discrimination accuracy, we prepared a set of polymer bead mixture samples to model anomalous and normal tissues. We then applied a home-built programmable-illumination microscope equipped with our algorithm, and confirmed that the system can discriminate the sample conditions with a 104- to 4,350-fold smaller number of illuminations compared to standard point-illumination Raman microscopy. The proposed algorithm can be applied to other types of microscopy that can control measurement conditions on the fly, offering an approach for accelerating accurate measurements in various applications, including medical diagnosis.
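The measurement-saving principle can be illustrated with a toy sequential-decision loop: evidence from individually illuminated points is accumulated and acquisition stops once a confidence threshold is crossed, rather than scanning every pixel. The per-pixel log-likelihood ratios, threshold and random sampling order below are illustrative assumptions, not the paper's reinforcement-learning algorithm.

```python
# Toy sequential stopping rule: accumulate per-point evidence and stop early
# once it is decisive, instead of raster-scanning all illumination points.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 10_000
# toy per-pixel log-likelihood ratios (anomalous vs. normal) the spectra would yield
llr = rng.normal(loc=0.05, scale=1.0, size=n_pixels)

threshold = 5.0                                  # stricter threshold -> higher guaranteed accuracy
evidence, measured = 0.0, 0
order = rng.permutation(n_pixels)                # could instead be chosen adaptively
for idx in order:
    evidence += llr[idx]                         # "measure" one illumination point
    measured += 1
    if abs(evidence) > threshold:                # enough evidence for a decision
        break

decision = "anomalous" if evidence > 0 else "normal"
print(f"decided '{decision}' after {measured} of {n_pixels} illuminations")
```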
Affiliation(s)
- Koji Tabata
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Hiroyuki Kawagoe
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- J. Nicholas Taylor
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Kentaro Mochizuki
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
- Toshiki Kubo
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Jean-Emmanuel Clement
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Yasuaki Kumamoto
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Yoshinori Harada
- Department of Pathology and Cell Regulation, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Kyoto, Japan
- Atsuyoshi Nakamura
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Hokkaido, Japan
- Katsumasa Fujita
- Department of Applied Physics, Osaka University, Suita 565-0871, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Advanced Photonics and Biosensing Open Innovation Laboratory, AIST-Osaka University, Suita 565-0871, Osaka, Japan
- Tamiki Komatsuzaki
- Research Center of Mathematics for Social Creativity, Research Institute for Electronic Science, Hokkaido University, Sapporo 001-0020, Hokkaido, Japan
- Institute for Chemical Reaction Design and Discovery, Hokkaido University, Sapporo 001-0021, Hokkaido, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Osaka, Japan
- Graduate School of Chemical Sciences and Engineering, Materials Chemistry and Engineering Course, Hokkaido University, Sapporo 060-0812, Hokkaido, Japan
- The Institute of Scientific and Industrial Research, Osaka University, Ibaraki 567-0047, Osaka, Japan
18
Luo C, Pang W, Shen B, Zhao Z, Wang S, Hu R, Qu J, Gu B, Liu L. Data-driven coordinated attention deep learning for high-fidelity brain imaging denoising and inpainting. J Biophotonics 2024; 17:e202300390. PMID: 38168132; DOI: 10.1002/jbio.202300390.
Abstract
Deep learning offers promise for enhancing low-quality images by addressing weak fluorescence signals, especially in deep in vivo mouse brain imaging. However, current methods struggle with photon scarcity and noise deep within in vivo mouse brains, and often neglect the preservation of tissue structure. In this study, we propose an innovative in vivo cortical fluorescence image restoration approach combining signal enhancement, denoising, and inpainting. We curated a deep brain cortical image dataset and developed a novel deep brain coordinate attention restoration network (DeepCAR), integrating coordinate attention with optimized residual networks. Our method swiftly and accurately restores deep cortex images from depths exceeding 800 μm while preserving small-scale tissue structures. It boosts the peak signal-to-noise ratio (PSNR) by 6.94 dB for weak signals and 11.22 dB for large noisy images. Crucially, we validate its effectiveness on external datasets whose noise distributions and structural features differ from those in our training data, showcasing real-time, high-performance image restoration.
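The coordinate-attention building block that DeepCAR integrates into its residual networks can be sketched compactly: features are pooled separately along height and width, mixed, and converted into two direction-aware attention maps. Channel count and reduction ratio below are illustrative, and the block is a generic rendering rather than the authors' exact module.

```python
# Generic coordinate-attention block: directional pooling, shared mixing,
# and per-direction sigmoid attention maps that reweight the feature map.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)
    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                     # pool along width  -> (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).transpose(2, 3)     # pool along height -> (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                 # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w)).transpose(2, 3) # (n, c, 1, w)
        return x * a_h * a_w                                  # direction-aware reweighting

feat = torch.rand(2, 32, 64, 64)
out = CoordinateAttention(32)(feat)                           # same shape as the input
```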
Affiliation(s)
- Chenggui Luo
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Wen Pang
- Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Zewei Zhao
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Shiqi Wang
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Bobo Gu
- Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
| |
Collapse
|
19
|
Shen B, Li Z, Pan Y, Guo Y, Yin Z, Hu R, Qu J, Liu L. Noninvasive Nonlinear Optical Computational Histology. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2024; 11:e2308630. [PMID: 38095543 PMCID: PMC10916666 DOI: 10.1002/advs.202308630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Revised: 11/28/2023] [Indexed: 03/07/2024]
Abstract
Cancer remains a global health challenge, demanding early detection and accurate diagnosis for improved patient outcomes. An intelligent paradigm is introduced that elevates label-free nonlinear optical imaging with contrastive patch-wise learning, yielding stain-free nonlinear optical computational histology (NOCH). NOCH enables swift, precise diagnostic analysis of fresh tissues, reducing patient anxiety and healthcare costs. Nonlinear modalities are evaluated, including stimulated Raman scattering and multiphoton imaging, for their ability to enhance tumor microenvironment sensitivity, pathological analysis, and cancer examination. Quantitative analysis confirmed that NOCH images accurately reproduce nuclear morphometric features across different cancer stages. Key diagnostic features, such as nuclear morphology, size, and nuclear-cytoplasmic contrast, are well preserved. NOCH models also demonstrate promising generalization when applied to other pathological tissues. The study unites label-free nonlinear optical imaging with histopathology using contrastive learning to establish stain-free computational histology. NOCH provides a rapid, non-invasive, and precise approach to surgical pathology, holding immense potential for revolutionizing cancer diagnosis and surgical interventions.
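Contrastive patch-wise learning of the kind described above typically pulls the embedding of each generated patch toward the embedding of the corresponding input patch while pushing it away from other patches of the same image. A minimal sketch of such a patch-wise InfoNCE objective is shown below; the temperature, embedding size, and the assumption that patch features have already been extracted are placeholders, not the NOCH training objective.

```python
# Sketch of a patch-wise contrastive (InfoNCE) loss: features of a generated patch
# are pulled towards the feature of the corresponding input patch (positive) and
# pushed away from features of other patches in the same image (negatives).
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_gen, temperature=0.07):
    """feat_src, feat_gen: (num_patches, dim) patch embeddings from an encoder."""
    feat_src = F.normalize(feat_src, dim=1)
    feat_gen = F.normalize(feat_gen, dim=1)
    logits = feat_gen @ feat_src.t() / temperature   # (P, P) similarity matrix
    targets = torch.arange(logits.size(0))           # positive is the same patch index
    return F.cross_entropy(logits, targets)

# Example with random embeddings standing in for encoder outputs.
src = torch.randn(256, 128)   # patches from the label-free nonlinear optical input
gen = torch.randn(256, 128)   # corresponding patches from the generated stain-like image
print(patch_nce_loss(src, gen).item())
```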
Collapse
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Zhenglin Li
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Ying Pan
- China–Japan Union Hospital of Jilin University, Changchun 130033, China
| | - Yuan Guo
- Shaanxi Provincial Cancer Hospital, Xi'an 710065, China
| | - Zongyi Yin
- Shenzhen University General Hospital, Shenzhen 518055, China
| | - Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| |
Collapse
|
20
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
21
|
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
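As a rough illustration of post-acquisition temporal super-resolution, the sketch below interleaves predicted in-between frames into an acquired time series. The function `interpolate_midframe` is a hypothetical placeholder for a trained CAFI network (here replaced by simple linear blending so the example runs); it is not the Zooming SlowMo or Depth-Aware Video Frame Interpolation implementation.

```python
# Sketch: doubling the temporal sampling of an image series post-acquisition.
# `interpolate_midframe` is a placeholder for a learned, motion-aware interpolator;
# linear blending is used here only so the example is runnable.
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    # Placeholder for a trained frame-interpolation network (e.g., a CAFI model).
    return 0.5 * (frame_a + frame_b)

def double_temporal_resolution(stack):
    """stack: (T, H, W) acquired frames -> (2T-1, H, W) interleaved series."""
    out = [stack[0]]
    for a, b in zip(stack[:-1], stack[1:]):
        out.append(interpolate_midframe(a, b))   # predicted in-between frame
        out.append(b)                            # next acquired frame
    return np.stack(out)

movie = np.random.rand(10, 64, 64)
print(double_temporal_resolution(movie).shape)   # (19, 64, 64)
```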
Collapse
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
| |
Collapse
|
22
|
Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107991. [PMID: 38185040 DOI: 10.1016/j.cmpb.2023.107991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 12/10/2023] [Accepted: 12/19/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND OBJECTIVE Current methods for imaging reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures. METHODS We devised an innovative approach called the IsoGAN model, which utilizes a contrastive unsupervised generative adversarial network to sidestep these constraints. This model leverages multi-scale and isotropic neuron/protein/blood vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. The IsoGAN model introduces simplified structures with idealized morphologies as shape priors to ensure high consistency in the generated neuronal profiles across all points in space and scalability for arbitrarily large volumes. RESULTS The IsoGAN model accurately reconstructed complex neuronal structures, as assessed quantitatively by the consistency between the axial and lateral views and by a reduction in erroneous imaging artifacts, and can be further applied to various biological samples. CONCLUSION With its ability to generate detailed 3D neuron/protein/blood vessel structures using significantly fewer axial-view images, IsoGAN can streamline imaging reconstruction while maintaining the necessary detail, offering a transformative solution to the existing limitations in high-throughput morphology analysis across different structures.
Collapse
Affiliation(s)
- Gary Han Chang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC; Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC.
| | - Meng-Yun Wu
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ling-Hui Yen
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Da-Yu Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
| | - Ya-Hui Lin
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Yi-Ru Luo
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Ya-Ding Liu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Bin Xu
- Department of Psychiatry, Columbia University, New York, NY 10032, USA
| | - Kam W Leong
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
| | - Wen-Sung Lai
- Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
| | - Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC; Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
| | - Kuo-Chuan Wang
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Chin-Hsien Lin
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
| | - Shih-Luen Wang
- Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
| | - Li-An Chu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC.
| |
Collapse
|
23
|
Gritti N, Power RM, Graves A, Huisken J. Image restoration of degraded time-lapse microscopy data mediated by near-infrared imaging. Nat Methods 2024; 21:311-321. [PMID: 38177507 PMCID: PMC10864180 DOI: 10.1038/s41592-023-02127-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 11/10/2023] [Indexed: 01/06/2024]
Abstract
Time-lapse fluorescence microscopy is key to unraveling biological development and function; however, living systems, by their nature, permit only limited interrogation and contain untapped information that can only be captured by more invasive methods. Deep-tissue live imaging presents a particular challenge owing to the spectral range of live-cell imaging probes/fluorescent proteins, which offer only modest optical penetration into scattering tissues. Herein, we employ convolutional neural networks to augment live-imaging data with deep-tissue images taken on fixed samples. We demonstrate that convolutional neural networks may be used to restore deep-tissue contrast in GFP-based time-lapse imaging using paired final-state datasets acquired using near-infrared dyes, an approach termed InfraRed-mediated Image Restoration (IR2). Notably, the networks are remarkably robust over a wide range of developmental times. We employ IR2 to enhance the information content of green fluorescent protein time-lapse images of zebrafish and Drosophila embryo/larval development and demonstrate its quantitative potential in increasing the fidelity of cell tracking/lineaging in developing pescoids. Thus, IR2 is poised to extend live imaging to depths otherwise inaccessible.
Collapse
Affiliation(s)
- Nicola Gritti
- Morgridge Institute for Research, Madison, WI, USA
- Mesoscopic Imaging Facility, European Molecular Biology Laboratory Barcelona, Barcelona, Spain
| | - Rory M Power
- Morgridge Institute for Research, Madison, WI, USA
- EMBL Imaging Center, European Molecular Biology Laboratory Heidelberg, Heidelberg, Germany
| | | | - Jan Huisken
- Morgridge Institute for Research, Madison, WI, USA.
- Department of Integrative Biology, University of Wisconsin Madison, Madison, WI, USA.
- Department of Biology and Psychology, Georg-August-University Göttingen, Göttingen, Germany.
- Cluster of Excellence 'Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells' (MBExC), University of Göttingen, Göttingen, Germany.
| |
Collapse
|
24
|
Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024; 170:227-241. [PMID: 37992510 DOI: 10.1016/j.neunet.2023.11.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/06/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023]
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, the presence of optical component limitations, coupled with the maximum photon budget that the specimen can tolerate, inevitably leads to a decline in imaging quality and a lack of useful signals. Therefore, image restoration becomes essential for ensuring high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for the purpose of reducing noise in microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates wavelet transform and inverse-transform for multi-resolution image decomposition and reconstruction, resulting in an expanded receptive field for the network without compromising information integrity. Subsequently, multiple consecutive parallel CNN-Transformer modules are utilized to collaboratively model local and global dependencies, thus facilitating the extraction of more comprehensive and diversified deep features. In addition, the incorporation of generative adversarial networks (GANs) into WECT enhances its capacity to generate high perceptual quality microscopic images. Extensive experiments have demonstrated that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of quantitative and qualitative analysis.
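The wavelet front end described above is attractive because the 2D discrete wavelet transform is invertible: multi-resolution decomposition enlarges the effective receptive field without discarding information. A minimal sketch with PyWavelets follows; the choice of the 'haar' wavelet is an arbitrary assumption, and the CNN-Transformer and GAN stages of WECT are omitted.

```python
# Sketch: lossless multi-resolution decomposition/reconstruction with a 2D DWT,
# the kind of front end WECT places before its CNN-Transformer stages.
import numpy as np
import pywt

image = np.random.rand(256, 256).astype(np.float32)

# Forward transform: one low-pass band and three detail bands at half resolution.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape)                               # (128, 128)

# ... denoising / feature extraction would operate on the sub-bands here ...

# Inverse transform restores the original image (up to numerical precision).
recon = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
print(np.allclose(image, recon, atol=1e-6))   # True
```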
Collapse
Affiliation(s)
- Qinghua Wang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Ziwei Li
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China.
| | - Shuqi Zhang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Nan Chi
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China.
| | - Qionghai Dai
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China.
| |
Collapse
|
25
|
Jahangiri L. Predicting Neuroblastoma Patient Risk Groups, Outcomes, and Treatment Response Using Machine Learning Methods: A Review. Med Sci (Basel) 2024; 12:5. [PMID: 38249081 PMCID: PMC10801560 DOI: 10.3390/medsci12010005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 12/28/2023] [Accepted: 01/03/2024] [Indexed: 01/23/2024] Open
Abstract
Neuroblastoma (NB), a paediatric malignancy with high rates of cancer-related morbidity and mortality, is of significant interest to the field of paediatric cancers. High-risk NB tumours are usually metastatic and result in survival rates of less than 50%. Machine learning approaches have been applied to various neuroblastoma patient data to retrieve relevant clinical and biological information and develop predictive models. Given this background, this study will catalogue and summarise the literature that has used machine learning and statistical methods to analyse data such as multi-omics, histological sections, and medical images to make clinical predictions. Furthermore, the question will be turned on its head, and the use of machine learning to accurately stratify NB patients by risk group and to predict outcomes, including survival and treatment response, will be summarised. Overall, this study aims to catalogue and summarise the important work conducted to date on expression-based predictor models and machine learning in neuroblastoma for risk stratification and the prediction of patient outcomes, including survival and treatment response, which may assist and direct future diagnostic and therapeutic efforts.
Collapse
Affiliation(s)
- Leila Jahangiri
- School of Science and Technology, Nottingham Trent University, Clifton Site, Nottingham NG11 8NS, UK;
- Division of Cellular and Molecular Pathology, Addenbrookes Hospital, University of Cambridge, Cambridge CB2 0QQ, UK
| |
Collapse
|
26
|
Shah ZH, Müller M, Hübner W, Wang TC, Telman D, Huser T, Schenck W. Evaluation of Swin Transformer and knowledge transfer for denoising of super-resolution structured illumination microscopy data. Gigascience 2024; 13:giad109. [PMID: 38217407 PMCID: PMC10787368 DOI: 10.1093/gigascience/giad109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 07/13/2023] [Accepted: 12/05/2023] [Indexed: 01/15/2024] Open
Abstract
BACKGROUND Convolutional neural network (CNN)-based methods have shown excellent performance in denoising and reconstruction of super-resolved structured illumination microscopy (SR-SIM) data. Therefore, CNN-based architectures have been the focus of existing studies. However, Swin Transformer, an alternative and recently proposed deep learning-based image restoration architecture, has not been fully investigated for denoising SR-SIM images. Furthermore, it has not been fully explored how well transfer learning strategies work for denoising SR-SIM images with different noise characteristics and recorded cell structures for these different types of deep learning-based methods. Currently, the scarcity of publicly available SR-SIM datasets limits the exploration of the performance and generalization capabilities of deep learning methods. RESULTS In this work, we present SwinT-fairSIM, a novel method based on the Swin Transformer for restoring SR-SIM images with a low signal-to-noise ratio. The experimental results show that SwinT-fairSIM outperforms previous CNN-based denoising methods. Furthermore, as a second contribution, two types of transfer learning-namely, direct transfer and fine-tuning-were benchmarked in combination with SwinT-fairSIM and CNN-based methods for denoising SR-SIM data. Direct transfer did not prove to be a viable strategy, but fine-tuning produced results comparable to conventional training from scratch while saving computational time and potentially reducing the amount of training data required. As a third contribution, we publish four datasets of raw SIM images and already reconstructed SR-SIM images. These datasets cover two different types of cell structures, tubulin filaments and vesicle structures. Different noise levels are available for the tubulin filaments. CONCLUSION The SwinT-fairSIM method is well suited for denoising SR-SIM images. By fine-tuning, already trained models can be easily adapted to different noise characteristics and cell structures. Furthermore, the provided datasets are structured in a way that the research community can readily use them for research on denoising, super-resolution, and transfer learning strategies.
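The fine-tuning strategy that the authors found competitive with training from scratch can be expressed compactly in PyTorch: load pretrained weights, optionally freeze early layers, and continue training on the new data with a small learning rate. The toy network, learning rate, and checkpoint path below are illustrative assumptions, not the SwinT-fairSIM configuration.

```python
# Sketch: adapting an already trained denoiser to a new noise level / cell structure
# by fine-tuning instead of training from scratch.
import torch
import torch.nn as nn

# Stand-in denoiser; in practice this would be the pretrained restoration network.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
# model.load_state_dict(torch.load("pretrained_denoiser.pt"))  # hypothetical checkpoint

# Optionally freeze the earliest layer and retrain the rest with a small learning rate.
for p in model[0].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-5)
loss_fn = nn.MSELoss()

noisy = torch.randn(4, 1, 64, 64)     # new-domain low-SNR inputs
target = torch.randn(4, 1, 64, 64)    # corresponding higher-SNR references
for _ in range(10):                   # a short fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), target)
    loss.backward()
    optimizer.step()
```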
Collapse
Affiliation(s)
- Zafran Hussain Shah
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| | - Marcel Müller
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Wolfgang Hübner
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Tung-Cheng Wang
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
- Leica Microsystems CMS GmbH, 68165 Mannheim, Germany
| | - Daniel Telman
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| | - Thomas Huser
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Wolfram Schenck
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| |
Collapse
|
27
|
Hu X, Jia X, Zhang K, Lo TW, Fan Y, Liu D, Wen J, Yong H, Rahmani M, Zhang L, Lei D. Deep-learning-augmented microscopy for super-resolution imaging of nanoparticles. OPTICS EXPRESS 2024; 32:879-890. [PMID: 38175110 DOI: 10.1364/oe.505060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Accepted: 12/04/2023] [Indexed: 01/05/2024]
Abstract
Conventional optical microscopes generally provide blurry and indistinguishable images of subwavelength nanostructures. However, a wealth of intensity and phase information is hidden in the corresponding diffraction-limited optical patterns and can be used for the recognition of structural features, such as size, shape, and spatial arrangement. Here, we apply a deep-learning framework to improve the spatial resolution of optical imaging for metal nanostructures with regular shapes yet varied arrangement. A convolutional neural network (CNN) is constructed and pre-trained with the optical images of randomly distributed gold nanoparticles as input and the corresponding scanning-electron microscopy images as ground truth. The trained CNN then recovers non-diffracted super-resolution images of both regularly arranged nanoparticle dimers and randomly clustered nanoparticle multimers from their blurry optical images. The profiles and orientations of these structures can also be reconstructed accurately. Moreover, the same network is extended to deblur the optical images of randomly cross-linked silver nanowires. Most sections of these intricate nanowire nets are recovered well, with a slight discrepancy near their intersections. This deep-learning-augmented framework opens new opportunities for computational super-resolution optical microscopy, with many potential applications in the fields of bioimaging and nanoscale fabrication and characterization. It could also be applied to significantly enhance the resolving capability of low-magnification scanning-electron microscopy.
Collapse
|
28
|
García López de Haro C, Dallongeville S, Musset T, Gómez-de-Mariscal E, Sage D, Ouyang W, Muñoz-Barrutia A, Tinevez JY, Olivo-Marin JC. JDLL: a library to run deep learning models on Java bioimage informatics platforms. Nat Methods 2024; 21:7-8. [PMID: 38191929 DOI: 10.1038/s41592-023-02129-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2024]
Affiliation(s)
| | - Stéphane Dallongeville
- Bioimage Analysis Unit, Institut Pasteur, Université Paris Cité, Paris, France
- CNRS UMR 3691, Institut Pasteur, Paris, France
| | - Thomas Musset
- Bioimage Analysis Unit, Institut Pasteur, Université Paris Cité, Paris, France
- CNRS UMR 3691, Institut Pasteur, Paris, France
| | | | - Daniel Sage
- Biomedical Imaging Group and Center for Imaging, Ecole Polytechnique de Lausanne (EPFL), Lausanne, Switzerland
| | - Wei Ouyang
- Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Arrate Muñoz-Barrutia
- Biomedical Sciences and Engineering Laboratory, Universidad Carlos III de Madrid, Leganés, Spain
| | - Jean-Yves Tinevez
- Image Analysis Hub, Institut Pasteur, Université Paris Cité, Paris, France.
| | - Jean-Christophe Olivo-Marin
- Bioimage Analysis Unit, Institut Pasteur, Université Paris Cité, Paris, France.
- CNRS UMR 3691, Institut Pasteur, Paris, France.
| |
Collapse
|
29
|
Ahn C, Kim JH. AntiHalluciNet: A Potential Auditing Tool of the Behavior of Deep Learning Denoising Models in Low-Dose Computed Tomography. Diagnostics (Basel) 2023; 14:96. [PMID: 38201404 PMCID: PMC10795730 DOI: 10.3390/diagnostics14010096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Revised: 12/14/2023] [Accepted: 12/30/2023] [Indexed: 01/12/2024] Open
Abstract
Gaining the ability to audit the behavior of deep learning (DL) denoising models is of crucial importance to prevent potential hallucinations and adverse clinical consequences. We present a preliminary version of AntiHalluciNet, which is designed to predict spurious structural components embedded in the residual noise from DL denoising models in low-dose CT, and assess its feasibility for auditing the behavior of DL denoising models. We created a paired set of structure-embedded and pure noise images and trained AntiHalluciNet to predict spurious structures in the structure-embedded noise images. The performance of AntiHalluciNet was evaluated by using a newly devised residual structure index (RSI), which represents the prediction confidence based on the presence of structural components in the residual noise image. We also evaluated whether AntiHalluciNet could assess the image fidelity of a denoised image by using only a noise component instead of measuring the SSIM, which requires both reference and test images. Then, we explored the potential of AntiHalluciNet for auditing the behavior of DL denoising models. AntiHalluciNet was applied to three DL denoising models (two pre-trained models, RED-CNN and CTformer, and a commercial software, ClariCT.AI [version 1.2.3]), and whether AntiHalluciNet could discriminate between the noise purity performances of the DL denoising models was assessed. AntiHalluciNet demonstrated an excellent performance in predicting the presence of structural components. The RSI values for the structure-embedded and pure noise images measured using the 50% low-dose dataset were 0.57 ± 0.31 and 0.02 ± 0.02, respectively, showing a substantial difference with a p-value < 0.0001. The AntiHalluciNet-derived RSI could differentiate between the quality of the degraded denoised images, with measurement values of 0.27, 0.41, 0.48, and 0.52 for the 25%, 50%, 75%, and 100% mixing rates of the degradation component, which showed a higher differentiation potential compared with the SSIM values of 0.9603, 0.9579, 0.9490, and 0.9333. The RSI measurements from the residual images of the three DL denoising models showed a distinct distribution, being 0.28 ± 0.06, 0.21 ± 0.06, and 0.15 ± 0.03 for RED-CNN, CTformer, and ClariCT.AI, respectively. AntiHalluciNet has the potential to predict the structural components embedded in the residual noise from DL denoising models in low-dose CT. With AntiHalluciNet, it is feasible to audit the performance and behavior of DL denoising models in clinical environments where only residual noise images are available.
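The underlying auditing idea, that the residual removed by a well-behaved denoiser should contain no anatomical structure, can be sketched as follows. A small classifier stands in for AntiHalluciNet and reduces the residual image to a single structure score playing the role of an RSI; the architecture and the score definition are assumptions for illustration, not the published model or index.

```python
# Sketch: auditing a denoiser from its residual image alone. A small classifier
# (a stand-in for AntiHalluciNet) scores how much structure remains in the residual;
# the scalar score plays the role of a residual-structure index.
import torch
import torch.nn as nn

structure_detector = nn.Sequential(      # assumed architecture, for illustration only
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

def residual_structure_score(noisy, denoised):
    residual = noisy - denoised            # only the removed component is inspected
    return structure_detector(residual).mean().item()

noisy = torch.randn(1, 1, 128, 128)
denoised = noisy * 0.8                     # stand-in for a DL denoiser output
print(residual_structure_score(noisy, denoised))   # in [0, 1]; higher = more structure
```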
Collapse
Affiliation(s)
- Chulkyun Ahn
- Department of Transdisciplinary Studies, Program in Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea;
- ClariPi Research, ClariPi, Seoul 03088, Republic of Korea
| | - Jong Hyo Kim
- Department of Transdisciplinary Studies, Program in Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea;
- ClariPi Research, ClariPi, Seoul 03088, Republic of Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03080, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon-si 16229, Republic of Korea
| |
Collapse
|
30
|
Seifert R, Markert SM, Britz S, Perschin V, Erbacher C, Stigloher C, Kollmannsberger P. DeepCLEM: automated registration for correlative light and electron microscopy using deep learning. F1000Res 2023; 9:1275. [PMID: 37397873 PMCID: PMC10311120 DOI: 10.12688/f1000research.27158.2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 12/27/2023] [Indexed: 08/25/2023] Open
Abstract
In correlative light and electron microscopy (CLEM), the fluorescent images must be registered to the EM images with high precision. Due to the different contrast of EM and fluorescence images, automated correlation-based alignment is not directly possible, and registration is often done by hand using a fluorescent stain, or semi-automatically with fiducial markers. We introduce "DeepCLEM", a fully automated CLEM registration workflow. A convolutional neural network predicts the fluorescent signal from the EM images, which is then automatically registered to the experimentally measured chromatin signal from the sample using correlation-based alignment. The complete workflow is available as a Fiji plugin and could in principle be adapted for other imaging modalities as well as for 3D stacks.
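The correlation-based alignment step can be illustrated with classical phase correlation between the network-predicted fluorescence image and the experimentally measured chromatin channel. The FFT-based sketch below recovers only integer translations and ignores rotation and scaling, which a full CLEM registration pipeline (and the Fiji plugin) would also have to handle.

```python
# Sketch: correlation-based alignment of a predicted fluorescence image to the
# measured chromatin channel using phase correlation (translation only).
import numpy as np

def phase_correlation_shift(reference, moving):
    """Return the (dy, dx) integer shift that best aligns `moving` to `reference`."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(moving)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > reference.shape[0] // 2: dy -= reference.shape[0]
    if dx > reference.shape[1] // 2: dx -= reference.shape[1]
    return dy, dx

measured = np.zeros((128, 128)); measured[40:60, 50:80] = 1.0   # chromatin signal
predicted = np.roll(measured, (5, -7), axis=(0, 1))              # misaligned prediction
print(phase_correlation_shift(measured, predicted))              # (-5, 7)
```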
Collapse
Affiliation(s)
- Rick Seifert
- Center for Computational and Theoretical Biology, University of Würzburg, Würzburg, 97074, Germany
- Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
| | - Sebastian M. Markert
- Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
| | - Sebastian Britz
- Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
| | - Veronika Perschin
- Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
| | - Christoph Erbacher
- Department of Neurology, University of Würzburg, Würzburg, 97074, Germany
| | - Christian Stigloher
- Imaging Core Facility, Biocenter, University of Würzburg, Würzburg, 97074, Germany
| | - Philip Kollmannsberger
- Center for Computational and Theoretical Biology, University of Würzburg, Würzburg, 97074, Germany
| |
Collapse
|
31
|
Xypakis E, de Turris V, Gala F, Ruocco G, Leonetti M. Physics-informed deep neural network for image denoising. OPTICS EXPRESS 2023; 31:43838-43849. [PMID: 38178470 DOI: 10.1364/oe.504606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/14/2023] [Indexed: 01/06/2024]
Abstract
Image enhancement deep neural networks (DNNs) can improve the signal-to-noise ratio or resolution of optically collected visual information. The literature reports a variety of approaches with varying effectiveness. All these algorithms rely on arbitrary normalization of the data (the pixels' count-rate), making their performance strongly affected by dataset- or user-specific data pre-manipulation. We developed a DNN algorithm capable of enhancing image signal-to-noise ratio beyond previous algorithms. Our model stems from the nature of the photon detection process, which is characterized by inherently Poissonian statistics. Our algorithm is thus driven by the distance between probability functions rather than relying on the count-rate alone, producing high-performance results especially in high-dynamic-range images. Moreover, it does not require any arbitrary image renormalization other than the transformation of the camera's count-rate into photon number.
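One way to express a photon-statistics-driven objective of this kind is a Poisson negative log-likelihood loss, in which the network predicts the underlying photon rate and is scored against the observed counts. The sketch below uses PyTorch's built-in PoissonNLLLoss and is only an assumed illustration of such a physics-informed loss, not the authors' exact formulation.

```python
# Sketch: a Poisson likelihood-based objective. The network predicts the underlying
# photon rate and is scored against the observed photon counts under Poisson
# statistics, instead of an MSE on arbitrarily normalized pixel values.
import torch
import torch.nn as nn

loss_fn = nn.PoissonNLLLoss(log_input=False, full=False)  # expects predicted rates >= 0

rate_true = torch.rand(1, 1, 64, 64) * 20.0   # ground-truth photon rate (unknown in practice)
counts = torch.poisson(rate_true)              # the camera observation (photon counts)
rate_pred = rate_true * 0.9 + 1.0              # stand-in for a network's predicted rate

print(loss_fn(rate_pred, counts).item())       # lower when rate_pred matches the count statistics
```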
Collapse
|
32
|
Panconi L, Tansell A, Collins AJ, Makarova M, Owen DM. Three-dimensional topology-based analysis segments volumetric and spatiotemporal fluorescence microscopy. BIOLOGICAL IMAGING 2023; 4:e1. [PMID: 38516632 PMCID: PMC10951800 DOI: 10.1017/s2633903x23000260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 11/13/2023] [Accepted: 12/01/2023] [Indexed: 03/23/2024]
Abstract
Image analysis techniques provide objective and reproducible statistics for interpreting microscopy data. At higher dimensions, three-dimensional (3D) volumetric and spatiotemporal data highlight additional properties and behaviors beyond the static 2D focal plane. However, increased dimensionality carries increased complexity, and existing techniques for general segmentation of 3D data are either primitive, or highly specialized to specific biological structures. Borrowing from the principles of 2D topological data analysis (TDA), we formulate a 3D segmentation algorithm that implements persistent homology to identify variations in image intensity. From this, we derive two separate variants applicable to spatial and spatiotemporal data, respectively. We demonstrate that this analysis yields both sensitive and specific results on simulated data and can distinguish prominent biological structures in fluorescence microscopy images, regardless of their shape. Furthermore, we highlight the efficacy of temporal TDA in tracking cell lineage and the frequency of cell and organelle replication.
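To make the persistent-homology idea concrete, the sketch below computes 0-dimensional persistence of a 2D image under a superlevel-set filtration using a union-find pass over pixels sorted by intensity: bright blobs appear as components with large persistence, while noise peaks die quickly. This is a minimal 2D illustration of the principle, not the published 3D or spatiotemporal algorithm.

```python
# Sketch: 0-dimensional persistent homology on pixel intensities (superlevel-set
# filtration, 4-connectivity). Components with large persistence correspond to
# prominent structures; thresholding persistence separates them from noise.
import numpy as np

def peak_persistence(image):
    h, w = image.shape
    parent, birth, persistence = {}, {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    order = np.argsort(image.ravel())[::-1]          # brightest pixels first
    for flat in order:
        y, x = divmod(int(flat), w)
        v = image[y, x]
        neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
        roots = {find((ny, nx)) for ny, nx in neighbours
                 if 0 <= ny < h and 0 <= nx < w and (ny, nx) in parent}
        parent[(y, x)] = (y, x)
        birth[(y, x)] = v
        if roots:
            survivor = max(roots, key=lambda r: birth[r])
            for r in roots - {survivor}:             # weaker components die (merge)
                persistence[r] = birth[r] - v
                parent[r] = survivor
            parent[(y, x)] = survivor
    for r in {find(p) for p in parent}:              # components that never merged
        persistence[r] = birth[r] - image.min()
    return persistence                                # {peak_pixel: persistence value}

img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0                               # one strong blob
img += 0.05 * np.random.rand(64, 64)                  # weak noise peaks
pers = peak_persistence(img)
print(sum(p > 0.5 for p in pers.values()))            # 1 prominent component (the blob)
```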
Collapse
Affiliation(s)
- Luca Panconi
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
| | - Amy Tansell
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| | | | - Maria Makarova
- School of Biosciences, College of Life and Environmental Science, University of Birmingham, Birmingham, UK
- Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Dylan M. Owen
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| |
Collapse
|
33
|
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. BIOPHYSICAL REPORTS 2023; 3:100133. [PMID: 38026685 PMCID: PMC10663640 DOI: 10.1016/j.bpr.2023.100133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
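The ensemble recipe is straightforward to sketch: several independently trained networks predict the same input, the mean is taken as the consensus output, and the pixel-wise standard deviation across members is reported as the uncertainty map (which the study finds tracks the true error). The toy stand-in models below are assumptions used only to make the example runnable.

```python
# Sketch: ensemble-based uncertainty for image-to-image prediction. The pixel-wise
# standard deviation across independently trained models serves as the uncertainty map.
import numpy as np

def ensemble_predict(models, image):
    preds = np.stack([m(image) for m in models])      # (n_models, H, W)
    return preds.mean(axis=0), preds.std(axis=0)       # consensus prediction, uncertainty

# Toy stand-ins for independently trained in-silico-labeling networks.
rng = np.random.default_rng(0)
models = [lambda x, g=rng.normal(0, 0.05, (64, 64)): 0.5 * x + g for _ in range(5)]

image = rng.random((64, 64))
prediction, uncertainty = ensemble_predict(models, image)
print(prediction.shape, float(uncertainty.mean()))      # per-pixel uncertainty summary
```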
Collapse
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Marie C. Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
| | - Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
| | - Neil Y.C. Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, California
- Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, California
- Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
| |
Collapse
|
34
|
Wang Z, Zhang Q, Wang Y, Zhu M, Li Q. A framework for immunofluorescence image augmentation and classification based on unsupervised attention mechanism. JOURNAL OF BIOPHOTONICS 2023; 16:e202300209. [PMID: 37559356 DOI: 10.1002/jbio.202300209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 07/16/2023] [Accepted: 08/07/2023] [Indexed: 08/11/2023]
Abstract
Autoimmune encephalitis (AE) is a common neurological disorder. As a standard method for neuroautoantibody detection, pathologists use tissue matrix assays (TBA) for initial disease screening. In this study, microscopic fluorescence imaging was combined with deep learning to improve AE diagnostic accuracy. Due to the inter-class imbalance of medical data, we propose an innovative generative adversarial network supplemented with attention mechanisms to highlight key regions in images to synthesize high-quality fluorescence images. However, securing annotated medical data is both time-consuming and costly. To circumvent this problem, we employ a self-supervised learning approach that utilizes unlabeled fluorescence data to support downstream classification tasks. To better understand the fluorescence properties in the data, we introduce a multichannel input convolutional neural network that adds additional channels of fluorescence intensity. This study builds an AE immunofluorescence dataset and obtains the classification accuracy of 88.5% using our method, thus confirming the effectiveness of the proposed method.
Collapse
Affiliation(s)
- Ziyi Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China
| | - Qing Zhang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China
| | - Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Engineering Center of SHMEC for Space Information and GNSS, Shanghai, China
| | - Min Zhu
- Department of Dermatology, Huashan Hospital, Fudan University, Shanghai, China
| | - Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China
- Engineering Center of SHMEC for Space Information and GNSS, Shanghai, China
| |
Collapse
|
35
|
Li X, Hu X, Chen X, Fan J, Zhao Z, Wu J, Wang H, Dai Q. Spatial redundancy transformer for self-supervised fluorescence image denoising. NATURE COMPUTATIONAL SCIENCE 2023; 3:1067-1080. [PMID: 38177722 PMCID: PMC10766531 DOI: 10.1038/s43588-023-00568-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 11/07/2023] [Indexed: 01/06/2024]
Abstract
Fluorescence imaging with high signal-to-noise ratios has become the foundation of accurate visualization and analysis of biological phenomena. However, the inevitable noise poses a formidable challenge to imaging sensitivity. Here we provide the spatial redundancy denoising transformer (SRDTrans) to remove noise from fluorescence images in a self-supervised manner. First, a sampling strategy based on spatial redundancy is proposed to extract adjacent orthogonal training pairs, which eliminates the dependence on high imaging speed. Second, we designed a lightweight spatiotemporal transformer architecture to capture long-range dependencies and high-resolution features at low computational cost. SRDTrans can restore high-frequency information without producing oversmoothed structures and distorted fluorescence traces. Finally, we demonstrate the state-of-the-art denoising performance of SRDTrans on single-molecule localization microscopy and two-photon volumetric calcium imaging. SRDTrans does not make any assumptions about the imaging process or the sample and thus can be easily extended to various imaging modalities and biological applications.
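The self-supervised premise, that spatial redundancy lets two sub-images of a single noisy frame supervise each other, can be illustrated with a very simple splitting scheme: adjacent pixel columns see nearly the same structure but independent noise. The even/odd column split below is a deliberately simplified stand-in and is not the SRDTrans sampling strategy.

```python
# Sketch: building a Noise2Noise-style training pair from a single noisy frame by
# exploiting spatial redundancy. Adjacent columns share signal but not noise, so the
# two sub-images can supervise each other.
import numpy as np

def redundancy_pair(noisy_frame):
    """Split a (H, W) frame into two half-width sub-images from adjacent columns."""
    input_img = noisy_frame[:, 0::2]     # even columns -> network input
    target_img = noisy_frame[:, 1::2]    # odd columns  -> training target
    return input_img, target_img

clean = np.tile(np.linspace(0, 1, 128), (128, 1))           # smooth underlying structure
noisy = np.random.poisson(clean * 50).astype(np.float32)    # shot-noise-limited frame
inp, tgt = redundancy_pair(noisy)
print(inp.shape, tgt.shape)                                  # (128, 64) (128, 64)
```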
Collapse
Affiliation(s)
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Xiaowan Hu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
| | - Xingye Chen
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Research Institute for Frontier Science, Beihang University, Beijing, China
| | - Jiaqi Fan
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Department of Electronic Engineering, Tsinghua University, Beijing, China
| | - Zhifeng Zhao
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
| | - Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| | - Haoqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
- The Shenzhen Institute of Future Media Technology, Shenzhen, China.
| | - Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- Beijing Key Laboratory of Multi-dimension and Multi-scale Computational Photography (MMCP), Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
| |
Collapse
|
36
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
Collapse
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
| | | | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland.
| |
Collapse
|
37
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. LASER & PHOTONICS REVIEWS 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects without a need for fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments, the state-of-the-art in this field, and to discuss the resolution boundaries and hurdles which need to be overcome to break the classical diffraction limit of the LFSR imaging. The scope of this Roadmap spans from the advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability which are based on understanding resolution as an information science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities in which such studies have often been developing separately. The ultimate intent of this paper is to create a vision for the current and future developments of LFSR imaging based on its physical mechanisms and to create a great opening for the series of articles in this field.
Collapse
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica 1, Roosevelt Rd. Sec. 4, Taipei 10617 Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler faculty of Exact Sciences, and the Center for Light matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
38
Guo X, Zhao F, Zhu J, Zhu D, Zhao Y, Fei P. Rapid 3D isotropic imaging of whole organ with double-ring light-sheet microscopy and self-learning side-lobe elimination. BIOMEDICAL OPTICS EXPRESS 2023; 14:6206-6221. [PMID: 38420327 PMCID: PMC10898557 DOI: 10.1364/boe.505217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 10/26/2023] [Accepted: 10/30/2023] [Indexed: 03/02/2024]
Abstract
Bessel-like plane illumination forms a new type of light-sheet microscopy with ultra-long optical sectioning distance that enables rapid 3D imaging of fine cellular structures across an entire large tissue. However, the side-lobe excitation of conventional Bessel light sheets severely impairs the quality of the reconstructed 3D image. Here, we propose a self-supervised deep learning (DL) approach that can completely eliminate the residual side lobes for a double-ring-modulated non-diffraction light-sheet microscope, thereby substantially improving the axial resolution of the 3D image. This lightweight DL model uses the microscope's own point spread function (PSF) as prior information, without the need for external high-resolution microscopy data. After a quick training process on a small number of datasets, the trained model can restore side-lobe-free 3D images with near-isotropic resolution for diverse samples. Using an advanced double-ring light-sheet microscope in conjunction with this efficient restoration approach, we demonstrate 5-minute rapid imaging of an entire mouse brain with a size of ∼12 mm × 8 mm × 6 mm and achieve a uniform isotropic resolution of ∼4 µm (1.6-µm voxel) capable of discerning single neurons and vessels across the whole brain.
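To make the self-learning idea above more concrete, here is an illustrative reading of the abstract rather than the authors' released code: a measured axial PSF is split into its main lobe and the full profile, and the two kernels are used to synthesize input/target volume pairs (full-PSF blur versus main-lobe-only blur) for self-supervised side-lobe removal. The function names, the peak-finding heuristic, and the purely axial convolution are simplifying assumptions made for this sketch.

```python
"""Minimal sketch, assuming a measured 1D axial PSF with side lobes is available."""
import numpy as np
from scipy.signal import fftconvolve, find_peaks

def split_main_lobe(axial_psf):
    """Keep only the central lobe of the axial PSF; zero out the side lobes."""
    center = np.argmax(axial_psf)
    minima, _ = find_peaks(-axial_psf)                    # local minima bound the main lobe
    left = max([m for m in minima if m < center], default=0)
    right = min([m for m in minima if m > center], default=len(axial_psf) - 1)
    main = np.zeros_like(axial_psf, dtype=float)
    main[left:right + 1] = axial_psf[left:right + 1]
    return main / main.sum()

def make_training_pair(volume, axial_psf):
    """Input: volume blurred by the full PSF (with side lobes).
    Target: the same volume blurred by the main lobe only."""
    full = np.asarray(axial_psf, dtype=float)
    full = full / full.sum()
    main = split_main_lobe(full)
    kz_full = full[:, None, None]                         # apply along the axial (z) axis
    kz_main = main[:, None, None]
    x = fftconvolve(volume, kz_full, mode="same")
    y = fftconvolve(volume, kz_main, mode="same")
    return x.astype(np.float32), y.astype(np.float32)
```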
Affiliation(s)
- Xinyi Guo
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Fang Zhao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jingtan Zhu
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, 430074, Wuhan, China
| | - Dan Zhu
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, 430074, Wuhan, China
- Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Yuxuan Zhao
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, 430074, Wuhan, China
| |
39
Zhou H, Li Y, Chen B, Yang H, Zou M, Wen W, Ma Y, Chen M. Registration-free 3D super-resolution generative deep-learning network for fluorescence microscopy imaging. OPTICS LETTERS 2023; 48:6300-6303. [PMID: 38039252 DOI: 10.1364/ol.503238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Accepted: 11/02/2023] [Indexed: 12/03/2023]
Abstract
Volumetric fluorescence microscopy has a great demand for high-resolution (HR) imaging, which typically comes at the cost of sophisticated imaging solutions. Image super-resolution (SR) methods offer an effective way to recover HR images from low-resolution (LR) images. Nevertheless, these methods require pixel-level registered LR and HR images, posing a challenge in accurate image registration. To address these issues, we propose a novel registration-free image SR method. Our method conducts SR training and prediction directly on unregistered LR and HR volumetric neuronal images. The network is built on the CycleGAN framework and an attention-based 3D U-Net. We evaluated our method on LR (5×/0.16-NA) and HR (20×/1.0-NA) fluorescence volumetric neuronal images collected by light-sheet microscopy. Compared to other super-resolution methods, our approach achieved the best reconstruction results. Our method shows promise for wide applications in the field of neuronal image super-resolution.
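The abstract names two architectural ingredients (a CycleGAN training scheme and an attention-based 3D U-Net) without detail. As a hedged sketch, the snippet below shows only the unpaired training objective that makes registration-free learning possible: adversarial terms plus cycle consistency between the LR and HR domains. The generator and discriminator modules, the least-squares GAN loss, and the weighting `lam` are placeholders, not the paper's actual networks or hyperparameters.

```python
import torch
import torch.nn.functional as F

def cyclegan_step_losses(G_lr2hr, G_hr2lr, D_hr, D_lr, lr_vol, hr_vol, lam=10.0):
    """One unpaired training step: adversarial + cycle-consistency losses.
    G_*: 3D generators, D_*: discriminators, *_vol: unregistered LR/HR volumes."""
    fake_hr = G_lr2hr(lr_vol)                  # LR -> HR
    fake_lr = G_hr2lr(hr_vol)                  # HR -> LR
    rec_lr = G_hr2lr(fake_hr)                  # LR -> HR -> LR (should match lr_vol)
    rec_hr = G_lr2hr(fake_lr)                  # HR -> LR -> HR (should match hr_vol)
    d_hr_out, d_lr_out = D_hr(fake_hr), D_lr(fake_lr)
    adv = F.mse_loss(d_hr_out, torch.ones_like(d_hr_out)) + \
          F.mse_loss(d_lr_out, torch.ones_like(d_lr_out))      # least-squares GAN terms
    cyc = F.l1_loss(rec_lr, lr_vol) + F.l1_loss(rec_hr, hr_vol)
    return adv + lam * cyc                     # generator objective; discriminator losses omitted
```

Cycle consistency is what removes the need for pixel-registered pairs: each volume only has to be recoverable after a round trip through both generators, so unregistered LR and HR stacks can train the network.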
40
Saguy A, Alalouf O, Opatovski N, Jang S, Heilemann M, Shechtman Y. DBlink: dynamic localization microscopy in super spatiotemporal resolution via deep learning. Nat Methods 2023; 20:1939-1948. [PMID: 37500760 DOI: 10.1038/s41592-023-01966-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 06/26/2023] [Indexed: 07/29/2023]
Abstract
Single-molecule localization microscopy (SMLM) has revolutionized biological imaging, improving the spatial resolution of traditional microscopes by an order of magnitude. However, SMLM techniques require long acquisition times, typically a few minutes, to yield a single super-resolved image, because they depend on accumulation of many localizations over thousands of recorded frames. Hence, the capability of SMLM to observe dynamics at high temporal resolution has always been limited. In this work, we present DBlink, a deep-learning-based method for super spatiotemporal resolution reconstruction from SMLM data. The input to DBlink is a recorded video of SMLM data and the output is a super spatiotemporal resolution video reconstruction. We use a convolutional neural network combined with a bidirectional long short-term memory network architecture, designed for capturing long-term dependencies between different input frames. We demonstrate DBlink performance on simulated filaments and mitochondria-like structures, on experimental SMLM data under controlled motion conditions and on live-cell dynamic SMLM. DBlink's spatiotemporal interpolation constitutes an important advance in super-resolution imaging of dynamic processes in live cells.
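As a rough illustration of the architecture described above (a convolutional encoder whose per-frame features are propagated through a bidirectional LSTM along the time axis, then decoded back to images), here is a deliberately tiny PyTorch module. It is not the published DBlink network; the layer sizes, the per-pixel recurrence, and the decoder are simplifying assumptions for this sketch.

```python
import torch
import torch.nn as nn

class TinyDBlinkLikeNet(nn.Module):
    """Per-frame conv encoder -> bidirectional LSTM along time -> conv decoder."""
    def __init__(self, feat=16, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * hidden, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, video):                                   # video: (B, T, 1, H, W)
        b, t, _, h, w = video.shape
        f = self.enc(video.reshape(b * t, 1, h, w))             # (B*T, feat, H, W)
        f = f.reshape(b, t, -1, h, w).permute(0, 3, 4, 1, 2)    # (B, H, W, T, feat)
        seq, _ = self.lstm(f.reshape(b * h * w, t, -1))         # temporal model per pixel
        seq = seq.reshape(b, h, w, t, -1).permute(0, 3, 4, 1, 2)  # (B, T, 2*hidden, H, W)
        out = self.dec(seq.reshape(b * t, -1, h, w))            # (B*T, 1, H, W)
        return out.reshape(b, t, 1, h, w)                       # one estimate per time point

# Example: TinyDBlinkLikeNet()(torch.randn(1, 8, 1, 32, 32)) -> tensor of shape (1, 8, 1, 32, 32)
```

The bidirectional recurrence is the key design point: each output frame can borrow localizations from both earlier and later frames, which is what enables the spatiotemporal interpolation described in the abstract.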
Affiliation(s)
- Alon Saguy
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
| | - Onit Alalouf
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
| | - Nadav Opatovski
- Russell Berrie Nanotechnology Institute, Technion-Israel Institute of Technology, Haifa, Israel
| | - Soohyen Jang
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Goethe-University Frankfurt, Frankfurt, Germany
| | - Mike Heilemann
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Institute of Physical and Theoretical Chemistry, IMPRS on Cellular Biophysics, Goethe-University Frankfurt, Frankfurt, Germany
| | - Yoav Shechtman
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel.
| |
41
Ibrahim KA, Grußmayer KS, Riguet N, Feletti L, Lashuel HA, Radenovic A. Label-free identification of protein aggregates using deep learning. Nat Commun 2023; 14:7816. [PMID: 38016971 PMCID: PMC10684545 DOI: 10.1038/s41467-023-43440-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Accepted: 11/09/2023] [Indexed: 11/30/2023] Open
Abstract
Protein misfolding and aggregation play central roles in the pathogenesis of various neurodegenerative diseases (NDDs), including Huntington's disease, which is caused by a genetic mutation in exon 1 of the Huntingtin protein (Httex1). The fluorescent labels commonly used to visualize and monitor the dynamics of protein expression have been shown to alter the biophysical properties of proteins and the final ultrastructure, composition, and toxic properties of the formed aggregates. To overcome this limitation, we present a method for label-free identification of NDD-associated aggregates (LINA). Our approach utilizes deep learning to detect unlabeled and unaltered Httex1 aggregates in living cells from transmitted-light images, without the need for fluorescent labeling. Our models are robust across imaging conditions and on aggregates formed by different constructs of Httex1. LINA enables the dynamic identification of label-free aggregates and measurement of their dry mass and area changes during their growth process, offering high speed, specificity, and simplicity to analyze protein aggregation dynamics and obtain high-fidelity information.
Affiliation(s)
- Khalid A Ibrahim
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Kristin S Grußmayer
- Department of Bionanoscience and Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, Netherlands.
| | - Nathan Riguet
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Lely Feletti
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Hilal A Lashuel
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Aleksandra Radenovic
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| |
42
Tatsugami F, Nakaura T, Yanagawa M, Fujita S, Kamagata K, Ito R, Kawamura M, Fushimi Y, Ueda D, Matsui Y, Yamada A, Fujima N, Fujioka T, Nozaki T, Tsuboyama T, Hirata K, Naganawa S. Recent advances in artificial intelligence for cardiac CT: Enhancing diagnosis and prognosis prediction. Diagn Interv Imaging 2023; 104:521-528. [PMID: 37407346 DOI: 10.1016/j.diii.2023.06.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 06/20/2023] [Indexed: 07/07/2023]
Abstract
Recent advances in artificial intelligence (AI) for cardiac computed tomography (CT) have shown great potential in enhancing diagnosis and prognosis prediction in patients with cardiovascular disease. Deep learning, a type of machine learning, has revolutionized radiology by enabling automatic feature extraction and learning from large datasets, particularly in image-based applications. Thus, AI-driven techniques have enabled a faster analysis of cardiac CT examinations than when they are analyzed by humans, while maintaining reproducibility. However, further research and validation are required to fully assess the diagnostic performance, radiation dose-reduction capabilities, and clinical correctness of these AI-driven techniques in cardiac CT. This review article presents recent advances of AI in the field of cardiac CT, including deep-learning-based image reconstruction, coronary artery motion correction, automatic calcium scoring, automatic epicardial fat measurement, coronary artery stenosis diagnosis, fractional flow reserve prediction, and prognosis prediction, analyzes current limitations of these techniques and discusses future challenges.
Affiliation(s)
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan.
| | - Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo Chuo-ku, Kumamoto, 860-8556, Japan
| | - Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
| | - Shohei Fujita
- Departmen of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyoku, Kyoto, 606-8507, Japan
| | - Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
| | - Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
| | - Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital N15, W5, Kita-Ku, Sapporo 060-8638, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
| | - Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-0016, Japan
| | - Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15 Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| |
43
Balasubramanian H, Hobson CM, Chew TL, Aaron JS. Imagining the future of optical microscopy: everything, everywhere, all at once. Commun Biol 2023; 6:1096. [PMID: 37898673 PMCID: PMC10613274 DOI: 10.1038/s42003-023-05468-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 10/16/2023] [Indexed: 10/30/2023] Open
Abstract
The optical microscope has revolutionized biology since at least the 17th Century. Since then, it has progressed from a largely observational tool to a powerful bioanalytical platform. However, realizing its full potential to study live specimens is hindered by a daunting array of technical challenges. Here, we delve into the current state of live imaging to explore the barriers that must be overcome and the possibilities that lie ahead. We venture to envision a future where we can visualize and study everything, everywhere, all at once - from the intricate inner workings of a single cell to the dynamic interplay across entire organisms, and a world where scientists could access the necessary microscopy technologies anywhere.
Affiliation(s)
| | - Chad M Hobson
- Advanced Imaging Center; Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, 20147, USA
| | - Teng-Leong Chew
- Advanced Imaging Center; Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, 20147, USA
| | - Jesse S Aaron
- Advanced Imaging Center; Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, 20147, USA.
| |
44
Shimizu K. Near-Infrared Transillumination for Macroscopic Functional Imaging of Animal Bodies. BIOLOGY 2023; 12:1362. [PMID: 37997961 PMCID: PMC10668962 DOI: 10.3390/biology12111362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 10/17/2023] [Accepted: 10/18/2023] [Indexed: 11/25/2023]
Abstract
The classical transillumination technique has been revitalized through recent advancements in optical technology, enhancing its applicability in the realm of biomedical research. With a new perspective on near-axis scattered light, we have harnessed near-infrared (NIR) light to visualize intricate internal light-absorbing structures within animal bodies. By leveraging the principle of differentiation, we have extended the applicability of the Beer-Lambert law even in cases of scattering-dominant media, such as animal body tissues. This approach facilitates the visualization of dynamic physiological changes occurring within animal bodies, thereby enabling noninvasive, real-time imaging of macroscopic functionality in vivo. An important challenge inherent to transillumination imaging lies in the image blur caused by pronounced light scattering within body tissues. By extracting near-axis scattered components from the predominant diffusely scattered light, we have achieved cross-sectional imaging of animal bodies. Furthermore, we have introduced software-based techniques encompassing deconvolution using the point spread function and the application of deep learning principles to counteract the scattering effect. Finally, transillumination imaging has been elevated from two-dimensional to three-dimensional imaging. The effectiveness and applicability of these proposed techniques have been validated through comprehensive simulations and experiments involving human and animal subjects. As demonstrated through these studies, transillumination imaging coupled with emerging technologies offers a promising avenue for future biomedical applications.
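One of the software countermeasures mentioned above is deconvolution with the system's point spread function. The snippet below is a generic frequency-domain Wiener deconvolution, included only to make that step concrete; the PSF handling, the wrap-around padding convention, and the noise-to-signal parameter `nsr` are illustrative assumptions rather than the author's processing pipeline.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution of a 2D transillumination image.
    `psf` is a small 2D kernel estimating the scattering blur; `nsr` is an
    assumed noise-to-signal power ratio used as a regularization constant."""
    kernel = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf / psf.sum()
    # shift the kernel so its centre sits at the array origin (circular convolution convention)
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```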
Affiliation(s)
- Koichi Shimizu
- School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China;
- IPS Research Center, Waseda University, Kitakyushu 808-0135, Japan
| |
45
Liu Z, Xu F. Interpretable neural networks: principles and applications. Front Artif Intell 2023; 6:974295. [PMID: 37899962 PMCID: PMC10606258 DOI: 10.3389/frai.2023.974295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 09/25/2023] [Indexed: 10/31/2023] Open
Abstract
In recent years, with the rapid development of deep learning technology, great progress has been made in computer vision, image recognition, pattern recognition, and speech signal processing. However, due to the black-box nature of deep neural networks (DNNs), one cannot explain the parameters in the deep network and why it can perfectly perform the assigned tasks. The interpretability of neural networks has now become a research hotspot in the field of deep learning. It covers a wide range of topics in speech and text signal processing, image processing, differential equation solving, and other fields. There are subtle differences in the definition of interpretability in different fields. This paper divides interpretable neural network (INN) methods into the following two directions: model decomposition neural networks, and semantic INNs. The former mainly constructs an INN by converting the analytical model of a conventional method into different layers of neural networks and combining the interpretability of the conventional model-based method with the powerful learning capability of the neural network. This type of INNs is further classified into different subtypes depending on which type of models they are derived from, i.e., mathematical models, physical models, and other models. The second type is the interpretable network with visual semantic information for user understanding. Its basic idea is to use the visualization of the whole or partial network structure to assign semantic information to the network structure, which further includes convolutional layer output visualization, decision tree extraction, semantic graph, etc. This type of method mainly uses human visual logic to explain the structure of a black-box neural network. So it is a post-network-design method that tries to assign interpretability to a black-box network structure afterward, as opposed to the pre-network-design method of model-based INNs, which designs interpretable network structure beforehand. This paper reviews recent progress in these areas as well as various application scenarios of INNs and discusses existing problems and future development directions.
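To ground the "convolutional layer output visualization" category mentioned in this review, here is a minimal, generic PyTorch forward-hook sketch for grabbing one layer's feature maps so they can be displayed channel by channel. It illustrates the general idea only and assumes whatever `model`, `layer`, and preprocessed `image` tensor the reader supplies; it is not drawn from the reviewed works.

```python
import torch

def conv_feature_maps(model, layer, image):
    """Capture one convolutional layer's output with a forward hook, so the
    feature maps can be inspected channel by channel."""
    captured = {}
    def hook(_module, _inputs, output):
        captured["maps"] = output.detach()
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(image)                     # image: preprocessed tensor, e.g. (1, 3, H, W)
    handle.remove()
    return captured["maps"]              # (1, C, H', W') feature maps
```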
Affiliation(s)
- Zhuoyang Liu
- Key Lab of Information Science of Electromagnetic Waves, Fudan University, Shanghai, China
- Faculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, Israel
| | - Feng Xu
- Key Lab of Information Science of Electromagnetic Waves, Fudan University, Shanghai, China
| |
46
Luo Z, Zhu G, Xu H, Lin D, Li J, Qu J. Combination of deep learning and 2D CARS figures for identification of amyloid-β plaques. OPTICS EXPRESS 2023; 31:34413-34427. [PMID: 37859198 DOI: 10.1364/oe.500136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 09/18/2023] [Indexed: 10/21/2023]
Abstract
In vivo imaging and accurate identification of amyloid-β (Aβ) plaques are crucial in Alzheimer's disease (AD) research. In this work, we propose to combine coherent anti-Stokes Raman scattering (CARS) microscopy, a powerful detection technology providing Raman spectra and label-free imaging, with deep learning to distinguish Aβ from non-Aβ regions in AD mouse brains in vivo. The 1D CARS spectra are first converted into 2D CARS figures using two different methods: the spectral recurrence plot (SRP) and the spectral Gramian angular field (SGAF). This provides more learnable information to the network, improving classification precision. We then devise a cross-stage attention network (CSAN) that automatically learns the features of Aβ plaques and non-Aβ regions by taking advantage of computational advances in deep learning. Our algorithm yields higher accuracy, precision, sensitivity and specificity than a conventional multivariate statistical analysis method and than 1D CARS spectra combined with deep learning, demonstrating its competence in identifying Aβ plaques. Last but not least, the CSAN framework requires no prior information on the imaging modality and may be applicable to other spectroscopic analytical fields.
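The two spectrum-to-image encodings named in the abstract (spectral recurrence plot and spectral Gramian angular field) follow standard textbook definitions; the sketch below implements those generic forms in NumPy as a point of reference. The normalization choices and the recurrence threshold `eps` are assumptions, and this is not the paper's preprocessing code.

```python
import numpy as np

def recurrence_plot(spectrum, eps=0.1):
    """Binary spectral recurrence plot: 1 where two spectral points lie within eps."""
    x = np.asarray(spectrum, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(np.uint8)

def gramian_angular_field(spectrum, summation=True):
    """Spectral Gramian angular field: rescale to [-1, 1], map to polar angles,
    then form the pairwise matrix cos(phi_i + phi_j) (or sin(phi_i - phi_j))."""
    x = np.asarray(spectrum, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    if summation:
        return np.cos(phi[:, None] + phi[None, :])   # summation field (GASF)
    return np.sin(phi[:, None] - phi[None, :])       # difference field (GADF)
```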
47
Abstract
Multiplex imaging has emerged as an invaluable tool for immune-oncologists and translational researchers, enabling them to examine intricate interactions among immune cells, stroma, matrix, and malignant cells within the tumor microenvironment (TME). It holds significant promise in the quest to discover improved biomarkers for treatment stratification and identify novel therapeutic targets. Nonetheless, several challenges exist in the realms of study design, experiment optimization, and data analysis. In this review, our aim is to present an overview of the utilization of multiplex imaging in immuno-oncology studies and inform novice researchers about the fundamental principles at each stage of the imaging and analysis process.
Affiliation(s)
- Chen Zhao
- Thoracic and GI Malignancies Branch, CCR, NCI, Bethesda, Maryland, USA
- Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
| | - Ronald N Germain
- Lymphocyte Biology Section, Laboratory of Immune System Biology, NIAID, Bethesda, Maryland, USA
| |
48
Zheng H, Huang S, Zhang J, Zhang R, Wang J, Yuan J, Li A, Yang X, Zhang Z. C1M2: a universal algorithm for 3D instance segmentation, annotation, and quantification of irregular cells. SCIENCE CHINA. LIFE SCIENCES 2023; 66:2415-2428. [PMID: 37243949 DOI: 10.1007/s11427-022-2327-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2023] [Accepted: 03/17/2023] [Indexed: 05/29/2023]
Abstract
Cell instance segmentation is a fundamental task for many biological applications, especially for packed cells in three-dimensional (3D) microscope images that can fully display cellular morphology. Image processing algorithms based on neural networks and feature engineering have enabled great progress in two-dimensional (2D) instance segmentation. However, current methods cannot achieve high segmentation accuracy for irregular cells in 3D images. In this study, we introduce a universal, morphology-based 3D instance segmentation algorithm called Crop Once Merge Twice (C1M2), which can segment cells from a wide range of image types and does not require nucleus images. C1M2 can be extended to quantify the fluorescence intensity of fluorescent proteins and antibodies and automatically annotate their expression levels in individual cells. Our results suggest that C1M2 can serve as a tissue cytometry tool for 3D histopathological assays by quantifying fluorescence intensity together with spatial localization and morphological information.
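The quantification step described above (per-cell fluorescence intensity read out from a 3D instance-segmentation label volume) can be illustrated with a few lines of SciPy. This is a generic sketch, not the C1M2 implementation; the function name and the returned summary fields are invented for the example.

```python
import numpy as np
from scipy import ndimage as ndi

def per_cell_quantification(label_volume, fluorescence_volume):
    """Per-cell readout from a 3D instance segmentation (0 = background, 1..N = cells):
    mean fluorescence intensity and voxel count (a simple proxy for cell volume)."""
    labels = np.unique(label_volume)
    labels = labels[labels != 0]
    means = ndi.mean(fluorescence_volume, labels=label_volume, index=labels)
    sizes = ndi.sum(np.ones_like(fluorescence_volume, dtype=float),
                    labels=label_volume, index=labels)
    return {int(lab): {"mean_intensity": float(m), "voxels": int(s)}
            for lab, m, s in zip(labels, means, sizes)}
```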
Affiliation(s)
- Hao Zheng
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Songlin Huang
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China
| | - Jing Zhang
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Ren Zhang
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jialu Wang
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jing Yuan
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Anan Li
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Xin Yang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Zhihong Zhang
- Britton Chance Center and MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China.
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, 570228, China.
| |
49
Ramos AP, Szalapak A, Ferme LC, Modes CD. From cells to form: A roadmap to study shape emergence in vivo. Biophys J 2023; 122:3587-3599. [PMID: 37243338 PMCID: PMC10541488 DOI: 10.1016/j.bpj.2023.05.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2023] [Revised: 04/25/2023] [Accepted: 05/18/2023] [Indexed: 05/28/2023] Open
Abstract
Organogenesis arises from the collective arrangement of cells into progressively 3D-shaped tissue. The acquisition of a correctly shaped organ is then the result of a complex interplay between molecular cues, responsible for differentiation and patterning, and the mechanical properties of the system, which generate the necessary forces that drive correct shape emergence. Nowadays, technological advances in the fields of microscopy, molecular biology, and computer science are making it possible to see and record such complex interactions in incredible, unforeseen detail within the global context of the developing embryo. A quantitative and interdisciplinary perspective of developmental biology becomes then necessary for a comprehensive understanding of morphogenesis. Here, we provide a roadmap to quantify the events that lead to morphogenesis from imaging to image analysis, quantification, and modeling, focusing on the discrete cellular and tissue shape changes, as well as their mechanical properties.
Affiliation(s)
| | - Alicja Szalapak
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany; Center for Systems Biology Dresden, Dresden, Germany
| | | | - Carl D Modes
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany; Center for Systems Biology Dresden, Dresden, Germany; Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany
| |
50
Yang D, Yu Z, Zheng M, Yang W, Liu Z, Zhou J, Huang L. Artificial intelligence-accelerated high-throughput screening of antibiotic combinations on a microfluidic combinatorial droplet system. LAB ON A CHIP 2023; 23:3961-3977. [PMID: 37605875 DOI: 10.1039/d3lc00647f] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/23/2023]
Abstract
Microfluidic platforms have been employed as an effective tool for drug screening and offer the advantages of lower reagent consumption, higher throughput and a higher degree of automation. Despite these advances, it remains challenging to screen complex antibiotic combinations in a simple, high-throughput and systematic manner. Meanwhile, the large datasets generated during the screening process generally outpace the capabilities of conventional manual or semi-automatic data analysis. To address these issues, we propose an artificial intelligence-accelerated high-throughput combinatorial drug evaluation system (AI-HTCDES), which not only allows high-throughput production of antibiotic combinations with varying concentrations, but also automatically analyzes the dynamic growth of bacteria under the action of different antibiotic combinations. Based on this system, several antibiotic combinations displaying an additive effect are discovered, and the dosage regimens of each component in the combinations are determined. This strategy not only provides useful guidance for the clinical use of antibiotic combination therapy and personalized medicine, but also offers a promising tool for combinatorial screening of other medicines.
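A core piece of the automated analysis described above is turning each droplet's bacterial growth readout into comparable numbers. As a hedged illustration (not the AI-HTCDES pipeline), the sketch below fits a logistic growth model per droplet and computes a simple inhibition score against an antibiotic-free control; the model form, parameter names, and the inhibition metric are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, r, t0):
    """Logistic growth: carrying capacity k, growth rate r, midpoint time t0."""
    return k / (1.0 + np.exp(-r * (t - t0)))

def growth_metrics(time_h, od):
    """Fit a logistic curve to one droplet's growth readout (1D arrays) and summarize it."""
    t, y = np.asarray(time_h, dtype=float), np.asarray(od, dtype=float)
    p0 = [y.max(), 1.0, np.median(t)]                      # crude initial guess
    (k, r, t0), _ = curve_fit(logistic, t, y, p0=p0, maxfev=10000)
    return {"capacity": k, "rate": r, "midpoint_h": t0}

def percent_inhibition(control_capacity, treated_capacity):
    """Simple score for comparing antibiotic combinations against a drug-free control."""
    return 100.0 * (1.0 - treated_capacity / control_capacity)
```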
Affiliation(s)
- Deyu Yang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
| | - Ziming Yu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
| | - Mengxin Zheng
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
| | - Wei Yang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
| | - Zhangcai Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
| | - Jianhua Zhou
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510275, China
| | - Lu Huang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510275, China
| |