1. Cao R, Divekar NS, Nuñez JK, Upadhyayula S, Waller L. Neural space-time model for dynamic multi-shot imaging. Nat Methods 2024. PMID: 39317729; DOI: 10.1038/s41592-024-02417-0.
Abstract
Computational imaging reconstructions from multiple measurements that are captured sequentially often suffer from motion artifacts if the scene is dynamic. We propose a neural space-time model (NSTM) that jointly estimates the scene and its motion dynamics, without data priors or pre-training. Hence, we can both remove motion artifacts and resolve sample dynamics from the same set of raw measurements used for the conventional reconstruction. We demonstrate NSTM in three computational imaging systems: differential phase-contrast microscopy, three-dimensional structured illumination microscopy and rolling-shutter DiffuserCam. We show that NSTM can recover subcellular motion dynamics and thus reduce the misinterpretation of living systems caused by motion artifacts.
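The abstract's core idea, jointly estimating a static scene and its motion dynamics from the same sequential measurements, can be illustrated with a toy 1-D sketch. This is a hypothetical grid-search analogy, not the authors' neural implementation: the scene, the constant-velocity motion model, and the candidate range below are all invented for illustration.

```python
# Toy joint scene+motion estimation: sequential frames of a scene
# translating at constant (unknown) velocity v are explained jointly by
# a static scene estimate and a motion estimate.

def shift(signal, s):
    """Circularly shift a list by integer s samples."""
    n = len(signal)
    return [signal[(i - s) % n] for i in range(n)]

def joint_estimate(measurements, v_candidates):
    """Grid-search the velocity; for each candidate, motion-correct the
    frames, average them into a scene estimate, and score the residual."""
    best = None
    for v in v_candidates:
        # undo the motion of frame t by shifting back v*t samples
        aligned = [shift(y, -v * t) for t, y in enumerate(measurements)]
        n = len(aligned[0])
        scene = [sum(frame[i] for frame in aligned) / len(aligned)
                 for i in range(n)]
        residual = sum((frame[i] - scene[i]) ** 2
                       for frame in aligned for i in range(n))
        if best is None or residual < best[0]:
            best = (residual, v, scene)
    return best[1], best[2]

# Ground truth: a bump moving 2 samples per frame.
truth = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
frames = [shift(truth, 2 * t) for t in range(4)]
v_hat, scene_hat = joint_estimate(frames, range(-3, 4))
```

With the correct motion estimate, the motion-corrected frames agree and the averaged scene is free of motion blur; with a wrong estimate the residual grows, which is the intuition behind fitting scene and motion jointly rather than reconstructing from misaligned raw frames.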
Affiliation(s)
- Ruiming Cao
- Department of Bioengineering, UC Berkeley, Berkeley, CA, USA.
- Nikita S Divekar
- Department of Molecular and Cell Biology, UC Berkeley, Berkeley, CA, USA
- James K Nuñez
- Department of Molecular and Cell Biology, UC Berkeley, Berkeley, CA, USA
- Laura Waller
- Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, USA.
2. Zhang T, Liu S, Durojaye O, Xiong F, Fang Z, Ullah T, Fu C, Sun B, Jiang H, Xia P, Wang Z, Yao X, Liu X. Dynamic phosphorylation of FOXA1 by Aurora B guides post-mitotic gene reactivation. Cell Rep 2024; 43:114739. PMID: 39276350; DOI: 10.1016/j.celrep.2024.114739.
Abstract
FOXA1 serves as a crucial pioneer transcription factor during developmental processes and plays a pivotal role as a mitotic bookmarking factor to perpetuate gene expression profiles and maintain cellular identity. During mitosis, the majority of FOXA1 dissociates from specific DNA binding sites and redistributes to non-specific binding sites; however, the regulatory mechanisms governing molecular dynamics and activity of FOXA1 remain elusive. Here, we show that mitotic kinase Aurora B specifies the different DNA binding modes of FOXA1 and guides FOXA1 biomolecular condensation in mitosis. Mechanistically, Aurora B kinase phosphorylates FOXA1 at Serine 221 (S221) to liberate the specific, but not the non-specific, DNA binding. Interestingly, the phosphorylation of S221 attenuates the FOXA1 condensation that requires specific DNA binding. Importantly, perturbation of the dynamic phosphorylation impairs accurate gene reactivation and cell proliferation, suggesting that reversible mitotic protein phosphorylation emerges as a fundamental mechanism for the spatiotemporal control of mitotic bookmarking.
Affiliation(s)
- Ting Zhang
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China
- Shuaiyu Liu
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Anhui Key Laboratory of Cellular Dynamics and Chemical Biology, University of Science and Technology of China, Hefei 230027, China
- Olanrewaju Durojaye
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China
- Fangyuan Xiong
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei 230027, China
- Zhiyou Fang
- Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei 230027, China
- Tahir Ullah
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China
- Chuanhai Fu
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Anhui Key Laboratory of Cellular Dynamics and Chemical Biology, University of Science and Technology of China, Hefei 230027, China
- Bo Sun
- School of Life Science and Technology, ShanghaiTech University, Shanghai 200031, China
- Hao Jiang
- West China Hospital, Sichuan University, Chengdu 610041, China
- Peng Xia
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Institute of Life Sciences, Zhejiang University, Hangzhou 310058, China
- Zhikai Wang
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Anhui Key Laboratory of Cellular Dynamics and Chemical Biology, University of Science and Technology of China, Hefei 230027, China.
- Xuebiao Yao
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Anhui Key Laboratory of Cellular Dynamics and Chemical Biology, University of Science and Technology of China, Hefei 230027, China.
- Xing Liu
- MOE Key Laboratory for Cellular Dynamics, Center for Advanced Interdisciplinary Science and Biomedicine of IHM, Hefei National Research Center for Interdisciplinary Sciences at the Microscale, University of Science and Technology of China, Hefei 230027, China; Anhui Key Laboratory of Cellular Dynamics and Chemical Biology, University of Science and Technology of China, Hefei 230027, China.
3. Shah ZH, Müller M, Hübner W, Ortkrass H, Hammer B, Huser T, Schenck W. Image restoration in frequency space using complex-valued CNNs. Front Artif Intell 2024; 7:1353873. PMID: 39376505; PMCID: PMC11456741; DOI: 10.3389/frai.2024.1353873.
Abstract
Real-valued convolutional neural networks (RV-CNNs) operating in the spatial domain have outperformed classical approaches in many image restoration tasks such as image denoising and super-resolution. Fourier analysis of the results produced by these spatial-domain models reveals their limitations in processing the full frequency spectrum: the missing spectral information can translate into missing textural and structural elements. To address this limitation, we explore the potential of complex-valued convolutional neural networks (CV-CNNs) for image restoration tasks. CV-CNNs have shown remarkable performance in tasks such as image classification and segmentation, but they have not been fully investigated for image restoration in the frequency domain. Here, we propose several novel CV-CNN-based models equipped with complex-valued attention gates for image denoising and super-resolution in the frequency domain. We show that our CV-CNN-based models outperform their real-valued counterparts in denoising super-resolution structured illumination microscopy (SR-SIM) and conventional image datasets. Furthermore, the experimental results show that our models preserve the frequency spectrum better than their real-valued counterparts in the denoising task. Based on these findings, we conclude that CV-CNN-based methods provide a plausible and beneficial deep learning approach for image restoration in the frequency domain.
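The motivation for complex-valued weights in frequency space follows from the convolution theorem: element-wise complex multiplication in the Fourier domain equals circular convolution in the spatial domain, so a per-frequency complex weight acts like a learned convolution filter. A minimal pure-Python illustration (toy DFT on a 4-sample signal, not the paper's CV-CNN code; the signal and kernel are invented):

```python
# Convolution theorem demo: multiplying spectra element-wise with
# complex values reproduces circular convolution in the spatial domain.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n)) for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n)
                for f in range(n)) / n for k in range(n)]

def circular_conv(x, h):
    n = len(x)
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
h = [0.5, 0.25, 0.0, 0.0]   # small blur kernel

spatial = circular_conv(x, h)
# The "complex-valued weights" here are simply dft(h).
freq = [a * b for a, b in zip(dft(x), dft(h))]
via_fourier = [c.real for c in idft(freq)]
```

Both routes give the same result up to floating-point error, which is why operating with complex-valued filters directly in frequency space is a natural parameterization for restoration networks.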
Affiliation(s)
- Zafran Hussain Shah
- Center for Applied Data Science, Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, Bielefeld, Germany
- Marcel Müller
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany
- Wolfgang Hübner
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany
- Henning Ortkrass
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany
- Barbara Hammer
- CITEC—Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- Thomas Huser
- Biomolecular Photonics Group, Faculty of Physics, Bielefeld University, Bielefeld, Germany
- Wolfram Schenck
- Center for Applied Data Science, Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, Bielefeld, Germany
4. Qu L, Zhao S, Huang Y, Ye X, Wang K, Liu Y, Liu X, Mao H, Hu G, Chen W, Guo C, He J, Tan J, Li H, Chen L, Zhao W. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 2024. PMID: 39261639; DOI: 10.1038/s41592-024-02400-9.
Abstract
Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods yet circumvents the need for large training sets and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate that SN2N will enable improved live-cell SR imaging and inspire further advances.
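The self-supervised data-generation step can be sketched as follows. This toy assumes a neighbor-subsampling scheme, splitting one noisy frame into two interleaved sub-images that share the underlying signal but carry independent noise realizations, giving a Noise2Noise-style (input, target) pair without clean ground truth; it is not the authors' SN2N implementation, and the constant scene and noise level are invented.

```python
# Self-supervised pair generation from a SINGLE noisy frame:
# interleaved pixels see (nearly) the same signal but independent noise.
import random

def split_pairs(frame):
    """Split each row's even/odd pixels into two interleaved sub-images."""
    a = [row[0::2] for row in frame]
    b = [row[1::2] for row in frame]
    return a, b

random.seed(0)
clean_value = 10.0
noisy = [[clean_value + random.gauss(0, 1) for _ in range(100)]
         for _ in range(100)]
inp, tgt = split_pairs(noisy)

# The pair shares the underlying signal: their means agree far better
# than any single noisy pixel does, so one half can supervise a denoiser
# predicting the other half.
mean = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))
```

In a full pipeline, a network trained to map `inp` to `tgt` converges (in expectation) toward the clean signal, because the independent noise in the target averages out over training.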
Affiliation(s)
- Liying Qu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shiqun Zhao
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuanyuan Huang
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianxin Ye
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Kunhao Wang
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Yuzhen Liu
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Xianming Liu
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Heng Mao
- School of Mathematical Sciences, Peking University, Beijing, China
- Guangwei Hu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Wei Chen
- School of Mechanical Science and Engineering, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, China
- Changliang Guo
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Jiaye He
- National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiubin Tan
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Haoyu Li
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
- Liangyi Chen
- State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Beijing, China
- Beijing Academy of Artificial Intelligence, Beijing, China
- Weisong Zhao
- Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China.
- Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China.
- Frontiers Science Center for Matter Behave in Space Environment, Harbin Institute of Technology, Harbin, China.
- Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China.
5. Nani JV, Muotri AR, Hayashi MAF. Peering into the mind: unraveling schizophrenia's secrets using models. Mol Psychiatry 2024. PMID: 39245692; DOI: 10.1038/s41380-024-02728-w.
Abstract
Schizophrenia (SCZ) is a complex mental disorder characterized by a range of symptoms, including positive and negative symptoms as well as cognitive impairments. Despite extensive research, the underlying neurobiology of SCZ remains elusive. To overcome this challenge, the use of diverse laboratory modeling techniques, encompassing cellular and animal models, and innovative approaches like induced pluripotent stem cell (iPSC)-derived neuronal cultures, brain organoids and genetically engineered animal models, has been crucial. Immortalized cellular models provide controlled environments for investigating the molecular and neurochemical pathways involved in neuronal function, while iPSCs and brain organoids, derived from patient-specific sources, offer a significant advantage in translational research by facilitating direct comparisons of cellular phenotypes between patient-derived neurons and healthy-control neurons. Animal models can recapitulate different psychopathological aspects of the disorder, offering valuable insights into the neurobiology of SCZ. Invertebrate models are genetically tractable and offer a powerful approach to dissect the core genetic underpinnings of SCZ, while vertebrate models, especially mammals, with their more complex nervous systems and behavioral repertoires, provide a closer approximation of the human condition for studying SCZ-related traits. This narrative review provides a comprehensive overview of these diverse modeling approaches, critically evaluating their strengths and limitations. By synthesizing knowledge from these models, the review offers a valuable resource for researchers, clinicians, and stakeholders alike. Integrating findings across these different models may allow us to build a more holistic picture of SCZ pathophysiology, facilitating the exploration of new research avenues and informed decision-making for interventions.
Affiliation(s)
- João V Nani
- Department of Pharmacology, Escola Paulista de Medicina (EPM), Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil.
- National Institute for Translational Medicine (INCT-TM, CNPq/FAPESP/CAPES), Ribeirão Preto, Brazil.
- Alysson R Muotri
- Department of Pediatrics and Department of Molecular and Cellular Medicine, University of California, San Diego, La Jolla, CA, USA
- Mirian A F Hayashi
- Department of Pharmacology, Escola Paulista de Medicina (EPM), Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil.
- National Institute for Translational Medicine (INCT-TM, CNPq/FAPESP/CAPES), Ribeirão Preto, Brazil.
6. Jin L, Liu J, Zhang H, Zhu Y, Yang H, Wang J, Zhang L, Kuang C, Ji B, Zhang J, Liu X, Xu Y. Deep learning permits imaging of multiple structures with the same fluorophores. Biophys J 2024. PMID: 39233442; DOI: 10.1016/j.bpj.2024.09.001.
Abstract
Fluorescence microscopy, which employs fluorescent tags to label and observe cellular structures and their dynamics, is a powerful tool for the life sciences. However, due to the spectral overlap between different dyes, only a limited number of structures can be separately labeled and imaged in live-cell applications. In addition, the conventional sequential channel-imaging procedure is time-consuming, as it must switch between different lasers or filters. Here, we propose a novel double-structure network (DBSN), consisting of multiple connected models, that can extract six distinct subcellular structures from three raw images with only two separate fluorescent labels. DBSN combines an intensity-balance model, which compensates for uneven fluorescent labeling across different structures, with a structure-separation model, which extracts multiple structures sharing the same fluorescent label. DBSN thus breaks a bottleneck of existing technologies and holds immense potential for applications in cell biology.
Affiliation(s)
- Luhong Jin
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China; Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Jingfang Liu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Heng Zhang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Yunqi Zhu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Haixu Yang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China; Binjiang Institute of Zhejiang University, Hangzhou, China
- Jianhang Wang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Luhao Zhang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China; Binjiang Institute of Zhejiang University, Hangzhou, China
- Cuifang Kuang
- State Key Laboratory of Extreme Photonics and Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
- Baohua Ji
- Institute of Biomechanics and Applications, Department of Engineering Mechanics, Zhejiang University, Hangzhou, China
- Ju Zhang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Xu Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
- Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China; Binjiang Institute of Zhejiang University, Hangzhou, China; Department of Endocrinology, Children's Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children's Health, Hangzhou, China.
7. Zhao T, Lei M. Fast, faster, and the fastest structured illumination microscopy. Light Sci Appl 2024; 13:186. PMID: 39134519; PMCID: PMC11319336; DOI: 10.1038/s41377-024-01505-2.
Abstract
Parallel acquisition-readout structured illumination microscopy (PAR-SIM) was designed for high-speed raw-data acquisition. An xy-scan galvo mirror set projects the raw data onto different areas of the camera, enabling a fundamentally higher spatial-temporal information flux.
Affiliation(s)
- Tianyu Zhao
- MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, School of Physics, Xi'an Jiaotong University, Xi'an, 710049, China
- Ming Lei
- MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, School of Physics, Xi'an Jiaotong University, Xi'an, 710049, China.
8. Ma C, Tan W, He R, Yan B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat Methods 2024; 21:1558-1567. PMID: 38609490; DOI: 10.1038/s41592-024-02244-3.
Abstract
Fluorescence microscopy-based image restoration has received widespread attention in the life sciences and, benefiting from deep learning technology, has led to significant progress. However, most current task-specific methods generalize poorly across different fluorescence microscopy-based image restoration problems. Here, we seek to improve generalizability and explore the potential of applying a pretrained foundation model to fluorescence microscopy-based image restoration. We provide a universal fluorescence microscopy-based image restoration (UniFMIR) model that addresses different restoration problems, and show that UniFMIR offers higher restoration precision, better generalization and increased versatility. Evaluations on five tasks and 14 datasets covering a wide range of microscopy imaging modalities and biological samples demonstrate that the pretrained UniFMIR can effectively transfer knowledge to a specific situation via fine-tuning, uncover clear nanoscale biomolecular structures and facilitate high-quality imaging. This work has the potential to inspire new research directions in fluorescence microscopy-based image restoration.
Affiliation(s)
- Chenxi Ma
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Weimin Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Ruian He
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
9. Liu ML, Liu YP, Guo XX, Wu ZY, Zhang XT, Roe AW, Hu JM. Orientation selectivity mapping in the visual cortex. Prog Neurobiol 2024; 240:102656. PMID: 39009108; DOI: 10.1016/j.pneurobio.2024.102656.
Abstract
The orientation map is one of the most well-studied functional maps of the visual cortex. However, results in the literature vary in quality: some studies show clear boundaries between different orientation domains, whereas others show blurred, uncertain distinctions. Unclear imaging results lead to inaccurate depictions of cortical structures, and a lack of consideration in experimental design can further bias the depiction of cortical features. How orientation domains are accurately defined therefore affects the entire field. In this study, we test how spatial frequency (SF), stimulus size, location, chromaticity, and data-processing methods affect orientation functional maps (covering a large area of dorsal V4 and parts of dorsal V1) acquired by intrinsic-signal optical imaging. Our results indicate that, for large imaging fields, large grating stimuli with mixed SF components should be used to acquire the orientation map. Diffusion-model image enhancement based on the difference map can further improve map quality. In addition, the similar outcomes for achromatic and chromatic gratings suggest two alternative types of afferents from the LGN, pooled in V1 to generate cue-invariant orientation selectivity.
Affiliation(s)
- Mei-Lan Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yi-Peng Liu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China
- Xin-Xia Guo
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China
- Zhi-Yi Wu
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310010, China
- Xiao-Tong Zhang
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; College of Electrical Engineering, Zhejiang University, Hangzhou 310000, China
- Anna Wang Roe
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China; The State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou 310058, China.
- Jia-Ming Hu
- Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou 310029, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310012, China.
10. Saurabh A, Brown PT, Bryan JS, Fox ZR, Kruithoff R, Thompson C, Kural C, Shepherd DP, Pressé S. Approaching maximum resolution in structured illumination microscopy via accurate noise modeling. bioRxiv [preprint] 2024. PMID: 38106139; PMCID: PMC10723446; DOI: 10.1101/2023.12.07.570701.
Abstract
Biological images captured by microscopes are characterized by heterogeneous signal-to-noise ratios (SNRs) due to spatially varying photon emission across the field of view compounded by camera noise. State-of-the-art unsupervised structured illumination microscopy (SIM) reconstruction algorithms, commonly implemented in the Fourier domain, do not accurately model this noise and suffer from high-frequency artifacts, user-dependent smoothness constraints that make assumptions about biological features, and unphysical negative values in the recovered fluorescence intensity map. Supervised methods, on the other hand, rely on large datasets for training and often require retraining for new sample structures. Consequently, achieving high contrast near the maximum theoretical resolution in an unsupervised, physically principled manner remains an open problem. Here, we propose Bayesian-SIM (B-SIM), an unsupervised Bayesian framework that quantitatively reconstructs SIM data and rectifies these shortcomings by accurately incorporating known noise sources in the spatial domain. To accelerate the reconstruction, we use the finite extent of the point-spread function to devise a parallelized Monte Carlo strategy involving chunking and restitching of the inferred fluorescence intensity. We benchmark our framework on both simulated and experimental images and demonstrate improved contrast, permitting feature recovery at up to 25% shorter length scales than state-of-the-art methods at both high and low SNR. B-SIM enables unsupervised, quantitative, physically accurate reconstruction without labeled training data, democratizing high-quality SIM reconstruction and expanding the capabilities of live-cell SIM to lower SNRs, potentially revealing biological features in previously inaccessible regimes.
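The "accurate noise modeling" the abstract emphasizes can be sketched with a common camera model: shot noise that is Poisson in the latent photon count, read out with a gain, an offset, and additive Gaussian read noise. This is an assumed illustrative model with invented gain/offset/read-noise values, not the paper's exact likelihood; it shows why a pixel-wise likelihood in the spatial domain prefers the true photon rate.

```python
# Per-pixel log-likelihood under a Poisson-Gaussian camera model,
# marginalizing over the latent photon count n:
#   observed_adu ~ Normal(gain*n + offset, read_sigma),  n ~ Poisson(rate)
import math

def log_likelihood(observed_adu, expected_photons, gain, offset, read_sigma,
                   max_photons=200):
    total = 0.0
    for n in range(max_photons):
        log_poisson = (n * math.log(expected_photons) - expected_photons
                       - math.lgamma(n + 1))
        mu = gain * n + offset
        log_gauss = (-0.5 * ((observed_adu - mu) / read_sigma) ** 2
                     - math.log(read_sigma * math.sqrt(2 * math.pi)))
        total += math.exp(log_poisson + log_gauss)
    return math.log(total)

# Score an observation generated at 20 photons against candidate rates:
# the likelihood favors rates near the true one.
obs = 2.0 * 20 + 100.0   # gain=2 ADU/photon, offset=100 ADU
scores = {lam: log_likelihood(obs, lam, gain=2.0, offset=100.0,
                              read_sigma=3.0)
          for lam in (5.0, 20.0, 80.0)}
best = max(scores, key=scores.get)
```

A reconstruction that maximizes (or samples from) this kind of pixel-wise likelihood automatically weights high- and low-SNR regions correctly and cannot produce negative fluorescence intensities, which is the spirit of modeling noise in the spatial domain rather than assuming uniform Gaussian noise in Fourier space.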
Collapse
Affiliation(s)
- Ayush Saurabh
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
| | - Peter T. Brown
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
| | - J. Shepard Bryan
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
| | - Zachary R. Fox
- Computational Science and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
| | - Rory Kruithoff
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
| | | | - Comert Kural
- Department of Physics, The Ohio State University, Columbus, OH, USA
- Interdisciplinary Biophysics Graduate Program, The Ohio State University, Columbus, OH, USA
| | - Douglas P. Shepherd
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
| | - Steve Pressé
- Center for Biological Physics, Arizona State University, Tempe, AZ, USA
- Department of Physics, Arizona State University, Tempe, AZ, USA
- School of Molecular Sciences, Arizona State University, Tempe, AZ, USA
| |
Collapse
|
11
|
Yu S, Wu H, Kang S, Ma J, Xie M, Dai L. Model-free robust motion control for biological optical microscopy using time-delay estimation with an adaptive RBFNN compensator. ISA TRANSACTIONS 2024; 149:365-372. [PMID: 38724294 DOI: 10.1016/j.isatra.2024.04.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2023] [Revised: 04/06/2024] [Accepted: 04/19/2024] [Indexed: 06/05/2024]
Abstract
The field of large-numerical-aperture microscopy has witnessed significant advancements in spatial and temporal resolution, as well as improvements in optical microscope imaging quality. However, these advancements have concurrently raised the demand for enhanced precision, extended range, and increased load-bearing capacity in the objective motion carrier (OMC). To address this challenge, this study introduces an innovative OMC that employs a ball screw mechanism as its primary driving component. Furthermore, a robust nonlinear motion control strategy is developed that integrates fast nonsingular terminal sliding mode control, time-delay estimation, and an adaptive radial basis function neural network (RBFNN) compensator to mitigate the impact of nonlinear friction within the ball screw mechanism on motion precision. The stability of the closed-loop control system is rigorously demonstrated through Lyapunov theory. Compared with other enhanced sliding mode control strategies, the maximum error and root mean square error of this controller are improved by 33% and 34%, respectively. The implementation of the novel OMC has enabled the establishment of a high-resolution bio-optical microscope, which has proven effective in the microscopic imaging of retinal organoids.
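An RBFNN compensator of the kind named in the title typically approximates the unknown friction term as a weighted sum of Gaussian basis functions. A minimal sketch of that forward pass follows; the centers, width, and weights are arbitrary illustrative values, not the paper's tuned parameters.

```python
import math

def rbfnn_output(x, centers, width, weights):
    """Gaussian RBFNN: y = sum_i w_i * exp(-(x - c_i)^2 / (2 * width^2))."""
    return sum(w * math.exp(-((x - c) ** 2) / (2.0 * width ** 2))
               for c, w in zip(centers, weights))

# Toy compensator: three basis centers spanning the operating range.
centers = [-1.0, 0.0, 1.0]
weights = [0.5, 1.0, 0.5]
y = rbfnn_output(0.0, centers, width=1.0, weights=weights)
# y = 1.0 + 2 * 0.5 * exp(-0.5) ≈ 1.607
```

In the adaptive variant, the weights would be updated online by a Lyapunov-derived adaptation law rather than held fixed as here.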
Collapse
Affiliation(s)
- Shengdong Yu
- Wenzhou Key Laboratory of Biomaterials and Engineering, Wenzhou Key Laboratory of Biomedical Imaging, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou 325000, China
| | - Hongyuan Wu
- College of Mechanical and Electrical Engineering, Wenzhou University, Wenzhou 325000, China
| | - Shengzheng Kang
- School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
| | - Jinyu Ma
- School of Intelligent Manufacturing, Wenzhou Polytechnic, Wenzhou 325000, China.
| | - Mingyang Xie
- Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
| | - Luru Dai
- Wenzhou Key Laboratory of Biomaterials and Engineering, Wenzhou Key Laboratory of Biomedical Imaging, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou 325000, China.
| |
Collapse
|
12
|
Aghigh A, Jargot G, Zaouter C, Preston SEJ, Mohammadi MS, Ibrahim H, Del Rincón SV, Patten K, Légaré F. A comparative study of CARE 2D and N2V 2D for tissue-specific denoising in second harmonic generation imaging. JOURNAL OF BIOPHOTONICS 2024; 17:e202300565. [PMID: 38566461 DOI: 10.1002/jbio.202300565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 03/11/2024] [Accepted: 03/17/2024] [Indexed: 04/04/2024]
Abstract
This study explores the application of deep learning in second harmonic generation (SHG) microscopy, a rapidly growing area. It focuses on the impact of glycerol concentration on image noise in SHG microscopy and compares two image restoration techniques: Noise2Void 2D (N2V 2D, no-reference image restoration) and content-aware image restoration (CARE 2D, full-reference image restoration). We demonstrate that N2V 2D effectively restores images affected by high glycerol concentrations. To reduce sample exposure and damage, the study further addresses low-power SHG imaging, using deep learning techniques to compensate for a 70% reduction in laser power. CARE 2D excels at preserving detailed structures, whereas N2V 2D maintains natural muscle structure. This study highlights the strengths and limitations of these models in specific SHG microscopy applications, offering valuable insights and potential advancements in the field.
Collapse
Affiliation(s)
- Arash Aghigh
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
| | - Gaëtan Jargot
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
| | - Charlotte Zaouter
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
| | - Samuel E J Preston
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
| | - Melika Saadat Mohammadi
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
| | - Heide Ibrahim
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
| | - Sonia V Del Rincón
- Department of Experimental Medicine, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Gerald Bronfman Department of Oncology, Segal Cancer Centre, Lady Davis Institute and Jewish General Hospital, McGill University, Montréal, Québec, Canada
| | - Kessen Patten
- Armand-Frappier Santé Biotechnologie Research Centre, Laval, Québec, Canada
| | - François Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, Québec, Canada
| |
Collapse
|
13
|
Sun T, Zhao H, Hu L, Shao X, Lu Z, Wang Y, Ling P, Li Y, Zeng K, Chen Q. Enhanced optical imaging and fluorescent labeling for visualizing drug molecules within living organisms. Acta Pharm Sin B 2024; 14:2428-2446. [PMID: 38828150 PMCID: PMC11143489 DOI: 10.1016/j.apsb.2024.01.018] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 01/07/2024] [Accepted: 01/25/2024] [Indexed: 06/05/2024] Open
Abstract
The visualization of drugs in living systems has become a key technique in modern therapeutics. Recent advancements in optical imaging technologies and molecular design strategies have revolutionized drug visualization. At the subcellular level, super-resolution microscopy has allowed exploration of the molecular landscape within individual cells and of the cellular response to drugs. Moving beyond subcellular imaging, researchers have integrated multiple modalities, such as near-infrared II optical imaging, to study the complex spatiotemporal interactions between drugs and their surroundings. By combining these visualization approaches, researchers gain supplementary information on physiological parameters, metabolic activity, and tissue composition, leading to a comprehensive understanding of drug behavior. This review focuses on cutting-edge technologies in drug visualization, particularly fluorescence imaging, and the main types of fluorescent molecules used. Additionally, we discuss current challenges and prospects in targeted drug research, emphasizing the importance of multidisciplinary cooperation in advancing drug visualization. With the integration of advanced imaging technology and molecular design, drug visualization has the potential to redefine our understanding of pharmacology, enabling the analysis of drug micro-dynamics in subcellular environments from new perspectives and deepening pharmacological research to the level of cells and organelles.
Collapse
Affiliation(s)
- Ting Sun
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
- Institute of Biochemical and Biotechnological Drugs, School of Pharmaceutical Sciences, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
| | - Huanxin Zhao
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
| | - Luyao Hu
- School of Chinese Materia Medica, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
| | - Xintian Shao
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
- School of Life Sciences, Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
| | - Zhiyuan Lu
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
| | - Yuli Wang
- Tianjin Pharmaceutical DA REN TANG Group Corporation Limited Traditional Chinese Pharmacy Research Institute, Tianjin 300457, China
- Key Laboratory of Systems Bioengineering (Ministry of Education), School of Chemistry Engineering and Technology, Tianjin University, Tianjin 300072, China
| | - Peixue Ling
- Institute of Biochemical and Biotechnological Drugs, School of Pharmaceutical Sciences, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Key Laboratory of Biopharmaceuticals, Postdoctoral Scientific Research Workstation, Shandong Academy of Pharmaceutical Science, Jinan 250098, China
| | - Yubo Li
- School of Chinese Materia Medica, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
| | - Kewu Zeng
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
- State Key Laboratory of Natural and Biomimetic Drugs, School of Pharmaceutical Sciences, Peking University, Beijing 100191, China
| | - Qixin Chen
- School of Pharmaceutical Sciences, National Key Laboratory of Advanced Drug Delivery System, Medical Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan 250062, China
- Departments of Diagnostic Radiology, Surgery, Chemical and Biomolecular Engineering, and Biomedical Engineering, Yong Loo Lin School of Medicine and College of Design and Engineering, National University of Singapore, Singapore 119074, Singapore
| |
Collapse
|
14
|
Zou Z, Zou B, Kui X, Chen Z, Li Y. DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 250:108125. [PMID: 38631130 DOI: 10.1016/j.cmpb.2024.108125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 02/24/2024] [Accepted: 03/07/2024] [Indexed: 04/19/2024]
Abstract
BACKGROUND AND OBJECTIVES Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are extensively employed for their complementary medical information. However, existing methods ignore bilateral cross-modal interaction of global features during feature extraction, and they underutilize multi-stage tumor boundary features. METHODS To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from the PET/CT modalities to mitigate the impact of misleading features and decrease the loss of discriminative features during downsampling; and 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation. RESULTS Extensive experiments were conducted on the STS and HECKTOR 2022 datasets to evaluate the proposed method. The average Dice scores of DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IoU scores of 67.64% and 70.18%. DGCBG-Net outperformed the current state-of-the-art methods with a 1.77% higher Dice score and a 2.12% higher IoU score. CONCLUSIONS Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with the state of the art.
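For reference, the Dice and IoU scores reported in such segmentation papers are simple set-overlap metrics over the predicted and ground-truth voxels. A minimal sketch (the toy masks below are illustrative, not from the paper's data):

```python
def dice_iou(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| for binary masks as voxel sets."""
    a, b = set(pred), set(truth)
    inter = len(a & b)
    dice = 2 * inter / (len(a) + len(b))
    iou = inter / len(a | b)
    return dice, iou

pred  = {(0, 0), (0, 1), (1, 0)}   # predicted tumor voxels
truth = {(0, 0), (0, 1), (1, 1)}   # ground-truth tumor voxels
dice, iou = dice_iou(pred, truth)
# inter = 2, so dice = 4/6 ≈ 0.667 and iou = 2/4 = 0.5
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU / (1 + IoU)), which is why the paper's Dice figures exceed its IoU figures.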
Collapse
Affiliation(s)
- Ziwei Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, ChangSha, 410083, China
| | - Beiji Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, ChangSha, 410083, China
| | - Xiaoyan Kui
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, ChangSha, 410083, China.
| | - Zhi Chen
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, ChangSha, 410083, China
| | - Yang Li
- School of Informatics, Hunan University of Chinese Medicine, No. 300, Xueshi Road, ChangSha, 410208, China
| |
Collapse
|
15
|
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. [PMID: 38378991 DOI: 10.1038/s41580-024-00702-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/10/2024] [Indexed: 02/22/2024]
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intracellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what it is possible to capture in living systems. In this Review, we discuss these computational methods, focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies, as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Collapse
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
| | - Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland.
| |
Collapse
|
16
|
Lu C, Chen K, Qiu H, Chen X, Chen G, Qi X, Jiang H. Diffusion-based deep learning method for augmenting ultrastructural imaging and volume electron microscopy. Nat Commun 2024; 15:4677. [PMID: 38824146 PMCID: PMC11144272 DOI: 10.1038/s41467-024-49125-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 05/20/2024] [Indexed: 06/03/2024] Open
Abstract
Electron microscopy (EM) has revolutionized the visualization of cellular ultrastructure. Volume EM (vEM) has further broadened this capacity to three-dimensional nanoscale imaging. However, intrinsic trade-offs between imaging speed and quality restrict the attainable imaging area and volume, and isotropic imaging with vEM for large biological volumes remains unachievable. Here, we developed EMDiffuse, a suite of algorithms designed to enhance EM and vEM capabilities by leveraging cutting-edge image-generation diffusion models. EMDiffuse generates realistic predictions with high-resolution ultrastructural details and exhibits robust transferability, requiring only a single pair of 3-megapixel images for fine-tuning in denoising and super-resolution tasks. EMDiffuse also demonstrates proficiency in the isotropic vEM reconstruction task, generating isotropic volumes even in the absence of isotropic training data. We demonstrate the robustness of EMDiffuse by generating isotropic volumes from seven public datasets obtained with different vEM techniques and instruments. The generated isotropic volumes enable accurate three-dimensional nanoscale ultrastructure analysis. EMDiffuse also features self-assessment functionality for prediction reliability. We envision EMDiffuse paving the way for investigations of the intricate subcellular nanoscale ultrastructure within large volumes of biological systems.
Collapse
Affiliation(s)
- Chixiang Lu
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
| | - Kai Chen
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
- School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
| | - Heng Qiu
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
| | - Xiaojun Chen
- School of Molecular Sciences, The University of Western Australia, Perth, WA, Australia
| | - Gu Chen
- Department of Chemistry, The University of Hong Kong, Hong Kong, China
| | - Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
| | - Haibo Jiang
- Department of Chemistry, The University of Hong Kong, Hong Kong, China.
| |
Collapse
|
17
|
Ma J, Li Z, Cheng J, An P, Liang D, Huang L. Light field image super-resolution based on dual learning and deep Fourier channel attention. OPTICS LETTERS 2024; 49:2886-2889. [PMID: 38824284 DOI: 10.1364/ol.522701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/29/2024] [Accepted: 04/24/2024] [Indexed: 06/03/2024]
Abstract
Light field (LF) imaging has gained significant attention in the field of computational imaging due to its unique capability to capture both the spatial and angular information of a scene. In recent years, super-resolution (SR) techniques based on deep learning have shown considerable advantages in enhancing LF image resolution. However, the inherent challenges of obtaining rich structural information and reconstructing complex texture details persist, particularly in scenarios where spatial and angular information are intricately interwoven. This Letter introduces a novel, to the best of our knowledge, Disentangling LF Image SR Network (DLISN) that leverages the synergy of dual learning and Fourier channel attention (FCA) mechanisms. Dual learning strategies are employed to enhance reconstruction results, addressing the limitations in model generalization caused by the difficulty of acquiring paired datasets in real-world LF scenarios. The integration of FCA facilitates the extraction of high-frequency information associated with different structures, contributing to improved spatial resolution. Experimental results consistently demonstrate superior performance in enhancing the resolution of LF images.
Collapse
|
18
|
Liu S, Weng X, Gao X, Xu X, Zhou L. A Residual Dense Attention Generative Adversarial Network for Microscopic Image Super-Resolution. SENSORS (BASEL, SWITZERLAND) 2024; 24:3560. [PMID: 38894350 PMCID: PMC11175225 DOI: 10.3390/s24113560] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Revised: 05/27/2024] [Accepted: 05/29/2024] [Indexed: 06/21/2024]
Abstract
With the development of deep learning, the super-resolution (SR) reconstruction of microscopic images has improved significantly. However, the scarcity of microscopic images for training, the underutilization of hierarchical features in the original low-resolution (LR) images, and the high-frequency noise unrelated to image structure generated during reconstruction are still challenges in the single-image super-resolution (SISR) field. Faced with these issues, we first collected sufficient microscopic images through Motic, a company engaged in the design and production of optical and digital microscopes, to establish a dataset. Secondly, we proposed a Residual Dense Attention Generative Adversarial Network (RDAGAN). The network comprises a generator, an image discriminator, and a feature discriminator. The generator includes a Residual Dense Block (RDB) and a Convolutional Block Attention Module (CBAM), focusing on extracting the hierarchical features of the original LR image. Simultaneously, the added feature discriminator enables the network to generate high-frequency features pertinent to the image's structure. Finally, we conducted an experimental analysis and compared our model with six classic models. Compared with the best of these, our model improved PSNR and SSIM by about 1.5 dB and 0.2, respectively.
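For context, PSNR — one of the two metrics quoted — is a simple function of the mean squared error against a reference image, so a 1.5 dB gain corresponds to cutting the MSE by a factor of about 10^0.15 ≈ 1.4. A minimal sketch (the toy 8-bit pixel values are illustrative):

```python
import math

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

ref = [0.0, 128.0, 255.0, 64.0]   # reference (ground-truth) pixels
img = [1.0, 127.0, 254.0, 66.0]   # reconstructed pixels
value = psnr(ref, img)            # MSE = 1.75, so ≈ 45.7 dB
```

In practice one would compute this over whole image arrays (e.g. with `skimage.metrics.peak_signal_noise_ratio`); the flat lists here keep the arithmetic transparent.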
Collapse
Affiliation(s)
- Sanya Liu
- Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China; (S.L.); (X.W.)
| | - Xiao Weng
- Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China; (S.L.); (X.W.)
| | - Xingen Gao
- School of Opto-Electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China;
| | - Xiaoxin Xu
- Institute of Microelectronics Chinese Academy of Sciences, Beijing 100029, China;
| | - Lin Zhou
- Xiamen Key Laboratory of Mobile Multimedia Communications, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China; (S.L.); (X.W.)
| |
Collapse
|
19
|
Xu X, Wang W, Qiao L, Fu Y, Ge X, Zhao K, Zhanghao K, Guan M, Chen X, Li M, Jin D, Xi P. Ultra-high spatio-temporal resolution imaging with parallel acquisition-readout structured illumination microscopy (PAR-SIM). LIGHT, SCIENCE & APPLICATIONS 2024; 13:125. [PMID: 38806501 PMCID: PMC11133488 DOI: 10.1038/s41377-024-01464-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 04/08/2024] [Accepted: 04/24/2024] [Indexed: 05/30/2024]
Abstract
Structured illumination microscopy (SIM) has emerged as a promising super-resolution fluorescence imaging technique, offering diverse configurations and computational strategies to mitigate phototoxicity during real-time imaging of biological specimens. Traditional efforts to enhance system frame rates have concentrated on processing algorithms, such as rolling reconstruction or reduced-frame reconstruction, or on investment in costly sCMOS cameras with accelerated row readout rates. In this article, we introduce an approach that elevates SIM frame rates and region-of-interest (ROI) coverage at the hardware level, without necessitating an increase in camera expense or intricate algorithms. Parallel acquisition-readout SIM (PAR-SIM) achieves the highest imaging speed for fluorescence imaging at currently available detector sensitivity. By using the full frame width of the detector and synchronizing pattern generation with the image exposure-readout process, we achieve an information spatio-temporal flux of 132.9 MPixels·s⁻¹, 9.6-fold that of the latest techniques, with an SNR as low as -2.11 dB and 100 nm resolution. PAR-SIM successfully reconstructs diverse cellular organelles in dual excitations, even under the low-signal conditions imposed by ultra-short exposure times. Notably, mitochondrial dynamic tubulation and ongoing membrane fusion processes have been captured in live COS-7 cells, recorded with PAR-SIM at an impressive 408 Hz. We posit that this parallel exposure-readout mode not only augments SIM pattern modulation for superior frame rates but also holds the potential to benefit other complex imaging systems with a strategic controlling approach.
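As a back-of-envelope reading of the flux figure: spatio-temporal information flux is simply pixels per frame times frames per second. The sketch below relates the reported numbers under the assumption that the flux and the 408 Hz live-cell demonstration describe the same acquisition mode — the paper may quote them for different ROI geometries, so treat this purely as illustrative arithmetic.

```python
def flux_mpix_per_s(width_px, height_px, fps):
    """Information flux in MPixels/s for a width x height ROI at a given frame rate."""
    return width_px * height_px * fps / 1e6

# To sustain 132.9 MPixels/s at 408 Hz, each frame would need about
# 132.9e6 / 408 ≈ 3.26e5 pixels — roughly a 571 x 571 ROI.
pixels_per_frame = 132.9e6 / 408
```

The same function shows why full-frame-width readout matters: widening the ROI raises the flux linearly at a fixed frame rate.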
Collapse
Affiliation(s)
- Xinzhu Xu
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- Wallace H. Coulter Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30332, GA, USA
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
| | - Wenyi Wang
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
- Airy Technologies Co., Ltd., Beijing, 100086, China
| | - Liang Qiao
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Airy Technologies Co., Ltd., Beijing, 100086, China
| | - Yunzhe Fu
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
| | - Xichuan Ge
- Airy Technologies Co., Ltd., Beijing, 100086, China
| | - Kun Zhao
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- Wallace H. Coulter Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30332, GA, USA
| | - Karl Zhanghao
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China
- Eastern Institute for Advanced Study, Eastern Institute of Technology, Ningbo, Zhejiang, 315200, China
| | - Meiling Guan
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
| | - Xin Chen
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
| | - Meiqi Li
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China
- School of Life Science, Peking University, Beijing, 100871, China
| | - Dayong Jin
- Department of Biomedical Engineering, College of Engineering, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China.
- Eastern Institute for Advanced Study, Eastern Institute of Technology, Ningbo, Zhejiang, 315200, China.
- Institute for Biomedical Materials and Devices (IBMD), Faculty of Science, University of Technology Sydney, Sydney, NSW, 2007, Australia.
| | - Peng Xi
- Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, 100871, China.
- National Biomedical Imaging Center, Peking University, Beijing, 100871, China.
- Airy Technologies Co., Ltd., Beijing, 100086, China.
| |
Collapse
|
20
|
Ren W, Ge X, Li M, Sun J, Li S, Gao S, Shan C, Gao B, Xi P. Visualization of cristae and mtDNA interactions via STED nanoscopy using a low saturation power probe. LIGHT, SCIENCE & APPLICATIONS 2024; 13:116. [PMID: 38782912 PMCID: PMC11116397 DOI: 10.1038/s41377-024-01463-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Revised: 04/12/2024] [Accepted: 04/20/2024] [Indexed: 05/25/2024]
Abstract
Mitochondria are crucial organelles closely associated with cellular metabolism and function. Mitochondrial DNA (mtDNA) encodes a variety of transcripts and proteins essential for cellular function. However, the interaction between the inner membrane (IM) and mtDNA remains elusive due to the limited spatiotemporal resolution of conventional microscopy and the absence of suitable in vivo probes specifically targeting the IM. Here, we have developed a novel fluorescent probe, HBmito Crimson, characterized by exceptional photostability, fluorogenicity within lipid membranes, and low saturation power. We achieved over 500 frames of low-power stimulated emission depletion (STED) microscopy to visualize IM dynamics, with a spatial resolution of 40 nm. Dual-color imaging of the IM and mtDNA uncovered that mtDNA tends to reside at mitochondrial tips or branch points, exhibiting an overall spatially uniform distribution. Notably, mitochondrial dynamics are intricately associated with the positioning of mtDNA, and fusion consistently occurs in close proximity to mtDNA to minimize pressure during cristae remodeling. In healthy cells, >66% of mitochondria are Class III (i.e., mitochondria >5 μm or with >12 cristae), whereas this fraction drops to <18% in ferroptosis. Mitochondrial dynamics, orchestrated by cristae remodeling, foster the even distribution of mtDNA. Conversely, under apoptosis and ferroptosis, where the cristae structure is compromised, mtDNA distribution becomes irregular. These findings, achieved with unprecedented spatiotemporal resolution, reveal the intricate interplay between cristae and mtDNA and provide insights into the driving forces behind mtDNA distribution.
Collapse
Affiliation(s)
- Wei Ren
- Department of Biomedical Engineering, National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing, 100871, China
| | - Xichuan Ge
- Key Laboratory of Analytical Science and Technology of Hebei Province, College of Chemistry and Material Science, Hebei University, Baoding, 071002, China
| | - Meiqi Li
- School of Life Sciences, Peking University, Beijing, 100871, China
| | - Jing Sun
- Key Laboratory of Analytical Science and Technology of Hebei Province, College of Chemistry and Material Science, Hebei University, Baoding, 071002, China
| | - Shiyi Li
- Key Laboratory of Analytical Science and Technology of Hebei Province, College of Chemistry and Material Science, Hebei University, Baoding, 071002, China
| | - Shu Gao
- Department of Biomedical Engineering, National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing, 100871, China
| | - Chunyan Shan
- School of Life Sciences, Peking University, Beijing, 100871, China.
- National Center for Protein Sciences, Peking University, Beijing, 100871, China.
| | - Baoxiang Gao
- Key Laboratory of Analytical Science and Technology of Hebei Province, College of Chemistry and Material Science, Hebei University, Baoding, 071002, China.
| | - Peng Xi
- Department of Biomedical Engineering, National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing, 100871, China.
| |
|
21
|
Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. [PMID: 38755148] [PMCID: PMC11099110] [DOI: 10.1038/s41467-024-48575-9] [Received: 10/07/2023] [Accepted: 05/07/2024] [Indexed: 05/18/2024]
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which is laborious and often impractical to acquire given the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit, at 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner and without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, enabling multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
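ZS-DeconvNet's exact training scheme is described in the paper's methods; a common ingredient of such zero-shot approaches — manufacturing an input/target pair from a single noisy frame, so no ground truth is needed — can be sketched as follows (the function name and the 2x2 diagonal split are illustrative, not ZS-DeconvNet's actual sampler):

```python
def paired_downsample(img):
    """Split an even-sized 2D image (list of lists) into two half-resolution
    sub-images by taking opposite diagonal pixels of each 2x2 block.
    Pixel noise is independent between the two sub-images, so one can serve
    as the training target for a network denoising the other."""
    h, w = len(img), len(img[0])
    a = [[img[2 * i][2 * j] for j in range(w // 2)] for i in range(h // 2)]
    b = [[img[2 * i + 1][2 * j + 1] for j in range(w // 2)] for i in range(h // 2)]
    return a, b
```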
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
| | - Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
| | - Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
| | - Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
| | - Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
| | - Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
| | - Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
| | - Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China.
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China.
| | - Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China.
| |
|
22
|
Li G, Cui Z, Li M, Han Y, Li T. Multi-attention fusion transformer for single-image super-resolution. Sci Rep 2024; 14:10222. [PMID: 38702417] [PMCID: PMC11068767] [DOI: 10.1038/s41598-024-60579-5] [Received: 12/24/2023] [Accepted: 04/24/2024] [Indexed: 05/06/2024]
Abstract
Recently, Transformer-based methods have gained prominence in image super-resolution (SR) tasks, addressing the challenge of long-range dependence through cross-layer connectivity and local attention mechanisms. However, analysis of these networks with local attribution maps reveals significant limitations in how much of the input's spatial extent they exploit. To unlock the inherent potential of Transformers in image SR, we propose the Multi-Attention Fusion Transformer (MAFT), a novel model that integrates multiple attention mechanisms to expand the number and range of pixels activated during image reconstruction, enhancing the effective utilization of the input information space. At the core of our model lie the Multi-attention Adaptive Integration Groups, which facilitate the transition from dense local attention to sparse global attention by introducing Local Attention Aggregation and Global Attention Aggregation blocks with alternating connections, effectively broadening the network's receptive field. The effectiveness of the proposed algorithm is validated through comprehensive quantitative and qualitative evaluations on benchmark datasets. Compared with state-of-the-art methods (e.g., HAT), MAFT achieves a 0.09 dB gain on the Urban100 dataset for the ×4 SR task while using 32.55% fewer parameters and 38.01% fewer FLOPs.
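The alternation between dense local attention and sparse global attention can be illustrated by the index sets a single position attends to. This 1D sketch (helper names are ours) only shows how the two patterns trade density for reach; MAFT itself operates on 2D feature windows:

```python
def local_window(i, n, radius):
    """Dense local attention: position i attends to every index within
    `radius` of itself, clipped to the sequence bounds [0, n)."""
    return list(range(max(0, i - radius), min(n, i + radius + 1)))

def sparse_global(i, n, stride):
    """Sparse global attention: position i attends to every `stride`-th
    index sharing its phase, reaching across the whole sequence."""
    return list(range(i % stride, n, stride))
```

Stacking the two with alternating connections, as the Local/Global Attention Aggregation blocks do, lets information hop from a dense neighbourhood onto a sparse long-range grid, which is how the effective receptive field broadens without dense global attention's cost.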
Affiliation(s)
- Guanxing Li
- School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
| | - Zhaotong Cui
- School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
| | - Meng Li
- School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
| | - Yu Han
- School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
| | - Tianping Li
- School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China.
| |
|
23
|
Du Y, Li D, Hu Z, Liu S, Xia Q, Zhu J, Xu J, Yu T, Zhu D. Dual-Channel in Spatial-Frequency Domain CycleGAN for perceptual enhancement of transcranial cortical vascular structure and function. Comput Biol Med 2024; 173:108377. [PMID: 38569233] [DOI: 10.1016/j.compbiomed.2024.108377] [Received: 12/08/2023] [Revised: 02/20/2024] [Accepted: 03/24/2024] [Indexed: 04/05/2024]
Abstract
Observing cortical vascular structure and function with laser speckle contrast imaging (LSCI) at high resolution plays a crucial role in understanding cerebral pathologies. Open-skull window techniques are usually applied to reduce scattering by the skull and enhance image quality, but craniotomy inevitably induces inflammation, which may obstruct observation in certain scenarios. Image enhancement algorithms are a popular alternative for improving the signal-to-noise ratio (SNR) of LSCI, yet existing methods perform poorly through the intact skull because transcranial cortical images are of low quality, and they do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mappings without craniotomy, visualizing biodynamics in an undisturbed biological environment. Our experimental results show that SF-CycleGAN achieves an SNR at least 4.13 dB higher than that of other unsupervised methods, images the complete vascular morphology, and enables functional observation of small cortical vessels. The method is also remarkably robust and generalizes to various imaging configurations and image modalities, including fluorescence images, without retraining.
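SF-CycleGAN's dual spatial/frequency channels and network architecture are specific to the paper, but the unpaired-training constraint at the heart of any CycleGAN — mapping into the other domain and back must reproduce the input — can be sketched with toy per-pixel generators (real generators are convolutional networks; the names here are illustrative):

```python
def l1(a, b):
    """Mean absolute difference between two equal-length lists of pixels."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_loss(x, y, g, f):
    """Cycle-consistency term of a CycleGAN objective: G maps domain X -> Y
    (e.g. transcranial -> craniotomy-quality images), F maps Y -> X.
    F(G(x)) should recover x, and G(F(y)) should recover y."""
    return l1([f(g(v)) for v in x], x) + l1([g(f(v)) for v in y], y)
```

Minimizing this term alongside the adversarial losses is what lets such models train on unpaired low-/high-quality image sets, with no registered ground truth.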
Affiliation(s)
- Yuwei Du
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Dongyu Li
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China; School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Zhengwu Hu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Shaojun Liu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Qing Xia
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jingtan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jianyi Xu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Tingting Yu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Dan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China.
| |
|
24
|
Wang W, Yang L, Sun H, Peng X, Yuan J, Zhong W, Chen J, He X, Ye L, Zeng Y, Gao Z, Li Y, Qu X. Cellular nucleus image-based smarter microscope system for single cell analysis. Biosens Bioelectron 2024; 250:116052. [PMID: 38266616] [DOI: 10.1016/j.bios.2024.116052] [Received: 10/15/2023] [Revised: 12/31/2023] [Accepted: 01/18/2024] [Indexed: 01/26/2024]
Abstract
Cell imaging technology is undoubtedly a powerful tool for studying single-cell heterogeneity owing to its non-invasive and visual advantages. It spans microscope hardware, software, and image analysis techniques, which are hindered by low throughput because of the extensive hands-on time and expertise they demand. Herein, a cellular nucleus image-based smarter microscope system for single-cell analysis is reported that achieves high-throughput analysis and high-content detection of cells. By combining automatic fluorescence microscope hardware with multi-object recognition/acquisition software, we achieve advanced process automation with the assistance of Robotic Process Automation (RPA), realizing high-throughput collection of single-cell images. Automated acquisition of single-cell images has benefits beyond ease and throughput: it yields more uniform, higher-quality images. We further constructed a convolutional neural network (Efficient Convolutional Neural Network, E-CNN) on a database of more than 20,618 single-cell nucleus images. Computational analysis of such large, complex data sets enhances the content and efficiency of single-cell analysis with the assistance of Artificial Intelligence (AI), sidestepping the hardware limitations of super-resolution microscopes, such as specialized light sources with specific wavelengths, advanced optical components, and high-performance graphics cards. Our system identifies single-cell nucleus images that cannot be distinguished by eye with an accuracy of 95.3%. Overall, we turn an ordinary microscope into a high-throughput, high-content smarter microscope system, making it a candidate tool for imaging cytology.
Affiliation(s)
- Wentao Wang
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Lin Yang
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Hang Sun
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Xiaohong Peng
- YueYang Central Hospital, YueYang, Hunan Province, 414000, China
| | - Junjie Yuan
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Wenhao Zhong
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Jinqi Chen
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Xin He
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Lingzhi Ye
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China
| | - Yi Zeng
- College of Chemistry and Chemical Engineering, Huanggang Normal University, Huanggang, 438000, China
| | - Zhifan Gao
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China.
| | - Yunhui Li
- Department of Laboratory Medical Center, General Hospital of Northern Theater Command, No.83, Wenhua Road, Shenhe District, Shenyang, Liaoning Province, 110016, China.
| | - Xiangmeng Qu
- Key Laboratory of Sensing Technology and Biomedical Instruments of Guangdong Province, School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, Guangdong Province, 518017, China.
| |
|
25
|
Wang Y, Yue Z, Wang F, Song P, Liu J. Deep learning empowers photothermal microscopy with super-resolution capabilities. Opt Lett 2024; 49:1957-1960. [PMID: 38621050] [DOI: 10.1364/ol.517164] [Received: 12/28/2023] [Accepted: 03/08/2024] [Indexed: 04/17/2024]
Abstract
In the past two decades, photothermal microscopy (PTM) has achieved sensitivity at the level of a single particle or molecule and has found applications in materials science and biology. PTM is a far-field imaging method, so its resolution is restricted by the diffraction limit. In our previous work, modulated difference PTM (MDPTM) was proposed to improve the lateral resolution, but its resolution gain was seriously constrained by information loss and artifacts. In this Letter, a deep learning approach based on the cycle generative adversarial network (CycleGAN) is employed to further improve the resolution of PTM, called DMDPTM. The point spread functions (PSFs) of both PTM and MDPTM are optimized and act as the second generator of the CycleGAN. Moreover, the relationship between the sample's volume and the photothermal signal is utilized during dataset construction, and the images of both PTM and MDPTM serve as inputs of the CycleGAN to incorporate more information. In simulation, DMDPTM quantitatively distinguishes a distance of 60 nm between two nanoparticles (each 60 nm in diameter), demonstrating a 4.4-fold resolution enhancement over conventional PTM. Experimentally, the super-resolution capability of DMDPTM is verified by restored images of Au nanoparticles, achieving a resolution of 114 nm. Finally, DMDPTM is successfully employed for imaging carbon nanotubes. DMDPTM will therefore serve as a powerful tool for improving the lateral resolution of PTM.
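The two-nanoparticle separation test used above to quantify resolution can be mimicked with a crude one-dimensional criterion: two Gaussian spots count as resolved once a dip appears between their peaks (a Sparrow-style check; the function is our illustration, not the paper's metric):

```python
import math

def midpoint_dip(d, sigma):
    """Ratio of the summed intensity of two equal Gaussian spots (std sigma,
    centres separated by d) at their midpoint to the intensity at a spot
    centre. Ratios >= 1.0 mean the profile has no dip (unresolved);
    values below 1 indicate a dip, i.e. the pair is resolved."""
    mid = 2.0 * math.exp(-(d / 2.0) ** 2 / (2.0 * sigma ** 2))
    peak = 1.0 + math.exp(-d ** 2 / (2.0 * sigma ** 2))
    return mid / peak
```

Sweeping `d` downward until the dip vanishes gives a single separation number, which is the same logic as reporting "60 nm between two 60 nm particles" as a resolution figure.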
|
26
|
Chen R, Xu J, Wang B, Ding Y, Abdulla A, Li Y, Jiang L, Ding X. SpiDe-Sr: blind super-resolution network for precise cell segmentation and clustering in spatial proteomics imaging. Nat Commun 2024; 15:2708. [PMID: 38548720] [PMCID: PMC10978886] [DOI: 10.1038/s41467-024-46989-z] [Received: 09/07/2023] [Accepted: 03/15/2024] [Indexed: 04/01/2024]
Abstract
Spatial proteomics elucidates cellular biochemical changes at an unprecedented topological level. Imaging mass cytometry (IMC) is a high-dimensional, single-cell-resolution platform for targeted spatial proteomics, but the precision of subsequent clinical analysis is constrained by imaging noise and resolution. Here, we propose SpiDe-Sr, a super-resolution network embedded with a denoising module for IMC spatial resolution enhancement. SpiDe-Sr effectively resists noise and improves resolution by a factor of 4. We demonstrate SpiDe-Sr on cells, mouse tissues and human tissues, obtaining 18.95%/27.27%/21.16% increases in peak signal-to-noise ratio and 15.95%/31.63%/15.52% increases in cell extraction accuracy, respectively. We further apply SpiDe-Sr to study the tumor microenvironment in a 20-patient clinical breast cancer cohort with 269,556 single cells and discover that invasion of Gram-negative bacteria is positively correlated with carcinogenesis markers and negatively correlated with immunological markers. SpiDe-Sr is also compatible with fluorescence microscopy imaging, suggesting it as an alternative tool for microscopy image super-resolution.
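Peak signal-to-noise ratio, the image-quality metric behind the 18.95%/27.27%/21.16% gains quoted above, is straightforward to compute; this stdlib-only sketch operates on flat pixel lists:

```python
import math

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio, in dB, of a test image against a
    reference (both flat lists of pixel values on a 0..peak scale).
    Higher is better; identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    if mse == 0.0:
        return math.inf
    return 10.0 * math.log10(peak ** 2 / mse)
```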
Grants
- This work was supported by the National Key R&D Program of China (2022YFC2601700, 2022YFF0710202), NSFC Projects (T2122002, 22077079, 81871448), Shanghai Municipal Science and Technology Project (22Z510202478), Shanghai Municipal Education Commission Project (21SG10), Shanghai Jiao Tong University Projects (YG2021ZD19, Agri-X20200101, 2020 SJTU-HUJI) and Shanghai Municipal Health Commission Project (2019CXJQ03). We thank AEMD SJTU and the Shanghai Jiao Tong University Laboratory Animal Center for their support.
Affiliation(s)
- Rui Chen
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Jiasu Xu
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Boqian Wang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yi Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Aynur Abdulla
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yiyang Li
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Lai Jiang
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Xianting Ding
- Department of Anesthesiology and Surgical Intensive Care Unit, Xinhua Hospital, School of Medicine and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- State Key Laboratory of Systems Medicine for Cancer, Institute for Personalized Medicine, Shanghai Jiao Tong University, Shanghai, China.
| |
|
27
|
Zhou Y, Klintström E, Klintström B, Ferguson SJ, Helgason B, Persson C. A convolutional neural network-based method for the generation of super-resolution 3D models from clinical CT images. Comput Methods Programs Biomed 2024; 245:108009. [PMID: 38219339] [DOI: 10.1016/j.cmpb.2024.108009] [Received: 09/14/2023] [Revised: 11/01/2023] [Accepted: 01/05/2024] [Indexed: 01/16/2024]
Abstract
BACKGROUND AND OBJECTIVE: Accurate evaluation of bone mechanical properties is essential for predicting fracture risk from clinical computed tomography (CT) images. However, blurring and noise in clinical CT images can compromise the accuracy of these predictions, leading to incorrect diagnoses. Although previous studies have explored enhancing trabecular bone CT images to super-resolution (SR), none has examined using clinical CT images from different instruments, typically of lower resolution, as a basis for analysis. Moreover, previous studies rely on 2D SR images, which may not suffice for accurate mechanical property evaluation given the complex 3D structure of trabecular bone. The objective of this study was to address these limitations. METHODS: A workflow was developed that utilizes convolutional neural networks to generate SR 3D models across different clinical CT instruments. The morphological and finite-element-derived mechanical properties of these SR models were compared with ground-truth models obtained from micro-CT scans. RESULTS: The SR models improved analysis accuracy by up to 700% relative to the low-resolution clinical CT images. Additionally, mixing different CT image datasets may improve SR model performance. CONCLUSIONS: SR images generated by convolutional neural networks outperformed clinical CT images in determining morphological and mechanical properties. The developed workflow could be implemented for fracture risk prediction, potentially leading to improved diagnoses and clinical decision making.
Affiliation(s)
- Yijun Zhou
- Division of Biomedical Engineering, Department of Materials Science and Engineering, Ångströmlaboratoriet, Uppsala University, Lägerhyddsvägen 1, Uppsala 75237, Sweden
| | - Eva Klintström
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Sweden; Department of Radiology and Department of Health, Medicine and Caring Sciences, Linköping University, Sweden
| | - Benjamin Klintström
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Huddinge, Sweden
| | | | | | - Cecilia Persson
- Division of Biomedical Engineering, Department of Materials Science and Engineering, Ångströmlaboratoriet, Uppsala University, Lägerhyddsvägen 1, Uppsala 75237, Sweden.
| |
|
28
|
Bender SWB, Dreisler MW, Zhang M, Kæstel-Hansen J, Hatzakis NS. SEMORE: SEgmentation and MORphological fingErprinting by machine learning automates super-resolution data analysis. Nat Commun 2024; 15:1763. [PMID: 38409214] [PMCID: PMC10897458] [DOI: 10.1038/s41467-024-46106-0] [Received: 05/10/2023] [Accepted: 02/13/2024] [Indexed: 02/28/2024]
Abstract
The morphology of protein assemblies impacts their behaviour and contributes to beneficial and aberrant cellular responses. While single-molecule localization microscopy provides the spatial resolution required to investigate these assemblies, the lack of universal, robust analytical tools to extract and quantify underlying structures limits this powerful technique. Here we present SEMORE, a semi-automatic machine learning framework for universal, system- and input-dependent analysis of super-resolution data. SEMORE implements a multi-layered density-based clustering module to dissect biological assemblies and a morphology fingerprinting module that quantifies them with multiple geometric and kinetics-based descriptors. We demonstrate SEMORE on simulations and diverse raw super-resolution data: time-resolved insulin aggregates and published dSTORM imaging of nuclear pore complexes, fibroblast growth factor receptor 1, sptPALM of Syntaxin 1a, and dynamic live-cell PALM of ryanodine receptors. SEMORE extracts and quantifies all protein assemblies and their temporal morphology evolution, providing quantitative insights in minutes, e.g. classification of heterogeneous insulin aggregation pathways and NPC geometry. SEMORE is a general analysis platform for super-resolution data and, being a time-aware framework, can also support the rise of 4D super-resolution data.
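A morphology fingerprint is, in essence, a vector of per-cluster descriptors. One such geometric descriptor, the radius of gyration of a localization cluster, can be computed with the stdlib alone (the descriptor choice is ours for illustration; SEMORE's actual feature set is described in the paper):

```python
import math

def radius_of_gyration(points):
    """Root-mean-square distance of 2D localizations from their centroid --
    a compact size descriptor for a clustered protein assembly."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / len(points))
```

Computing such descriptors per cluster and per time point is what turns raw localizations into the temporal morphology trajectories the abstract describes.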
Collapse
Affiliation(s)
- Steen W B Bender
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Marcus W Dreisler
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Min Zhang
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark
| | - Jacob Kæstel-Hansen
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark.
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark.
| | - Nikos S Hatzakis
- Department of Chemistry, University of Copenhagen, Copenhagen, Denmark.
- Center for 4D cellular dynamics, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Optimised Oligo Escape and Control of Disease, University of Copenhagen, Copenhagen, Denmark.
- Novo Nordisk Center for Protein Research, University of Copenhagen, Copenhagen, Denmark.
| |
Collapse
|
29
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results, particularly in techniques such as super-resolution microscopy, which demand extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on keeping light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed: AI should be used to extract rich insights from gentle imaging rather than to recover compromised data from harsh illumination. Although AI can enhance imaging, the ultimate goal should be to uncover biological truths, not merely retrieve data, and to prioritize minimizing photodamage over pushing technical limits. Our approach aims at gentle acquisition and observation of undisturbed living systems, in line with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
30
|
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited to accurately predicting images in between image pairs, thereby improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 datasets obtained from four different microscopy modalities and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity, enabling improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
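The "standard interpolation methods" that CAFI is compared against can be as simple as linear blending of adjacent frames, which ignores motion context entirely. A minimal sketch of such a content-unaware baseline (frame shapes and values invented for illustration):

```python
import numpy as np

def linear_interpolate(frame_a, frame_b, n_inter):
    """Content-unaware baseline: generate n_inter evenly spaced
    in-between frames by linear blending of two acquired frames."""
    out = []
    for k in range(1, n_inter + 1):
        t = k / (n_inter + 1)
        out.append((1.0 - t) * frame_a + t * frame_b)
    return out

frame_a = np.zeros((4, 4))
frame_b = np.ones((4, 4))
mid = linear_interpolate(frame_a, frame_b, n_inter=1)[0]  # every pixel 0.5
```

A moving particle blended this way produces a ghosted double image at both positions; a motion-aware network instead predicts it at the intermediate position, which is the gap CAFI addresses.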
Collapse
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
| |
Collapse
|
31
|
Wang Q, Li Z, Zhang S, Chi N, Dai Q. A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration. Neural Netw 2024; 170:227-241. [PMID: 37992510 DOI: 10.1016/j.neunet.2023.11.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 11/06/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023]
Abstract
Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, limitations of the optical components, coupled with the maximum photon budget the specimen can tolerate, inevitably lead to a decline in imaging quality and a loss of useful signal. Image restoration therefore becomes essential for high-quality, accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically to reduce noise in microscopy images and attain super-resolution. Unlike traditional approaches, WECT integrates the wavelet transform and its inverse for multi-resolution image decomposition and reconstruction, expanding the network's receptive field without compromising information integrity. Multiple consecutive parallel CNN-Transformer modules are then utilized to collaboratively model local and global dependencies, facilitating the extraction of more comprehensive and diverse deep features. In addition, incorporating generative adversarial networks (GANs) into WECT enhances its capacity to generate microscopic images of high perceptual quality. Extensive experiments demonstrate that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of both quantitative and qualitative analysis.
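The lossless multi-resolution decomposition that WECT builds on can be illustrated with a single-level 2D Haar transform: the image splits into a low-pass band and three detail bands, and the inverse transform rebuilds it exactly, which is what "without compromising information integrity" refers to. This numpy sketch is illustrative only, not the paper's implementation:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2  # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2  # row-pair details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse transform: reconstructs the image exactly (no info loss)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
recon = ihaar2d(*haar2d(img))  # identical to img up to float precision
```

In a WECT-style network the learned CNN-Transformer modules would operate on the sub-bands before the inverse transform reassembles the restored image.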
Collapse
Affiliation(s)
- Qinghua Wang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Ziwei Li
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Pujiang Laboratory, Shanghai, China.
| | - Shuqi Zhang
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China.
| | - Nan Chi
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Shanghai ERC of LEO Satellite Communication and Applications, Shanghai CIC of LEO Satellite Communication Technology, Fudan University, Shanghai, 200433, China; Shanghai Collaborative Innovation Center of Low-Earth-Orbit Satellite Communication Technology, Shanghai, 200433, China.
| | - Qionghai Dai
- School of Information Science and Technology, Fudan University, Shanghai, 200433, China; Department of Automation, Tsinghua University, Beijing, 100084, China.
| |
Collapse
|
32
|
Liu GY, Yu D, Fan MM, Zhang X, Jin ZY, Tang C, Liu XF. Antimicrobial resistance crisis: could artificial intelligence be the solution? Mil Med Res 2024; 11:7. [PMID: 38254241 PMCID: PMC10804841 DOI: 10.1186/s40779-024-00510-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Antimicrobial resistance is a global public health threat, and the World Health Organization (WHO) has announced a priority list of the most threatening pathogens against which novel antibiotics need to be developed. The discovery and introduction of novel antibiotics are time-consuming and expensive: according to the WHO report on antibacterial agents in clinical development, only 18 novel antibiotics have been approved since 2014. Novel antibiotics are therefore critically needed. Artificial intelligence (AI) has been rapidly applied to drug development since its recent technical breakthroughs and has dramatically improved the efficiency of novel-antibiotic discovery. Here, we first summarize recently marketed novel antibiotics and antibiotic candidates in clinical development. We then systematically review the involvement of AI in antibacterial drug development and utilization, including small molecules, antimicrobial peptides, phage therapy, and essential oils, as well as resistance-mechanism prediction and antibiotic stewardship.
Collapse
Affiliation(s)
- Guang-Yu Liu
- Department of Immunology and Pathogen Biology, School of Basic Medical Sciences, Hangzhou Normal University, Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Key Laboratory of Inflammation and Immunoregulation of Hangzhou, Hangzhou Normal University, Hangzhou, 311121, China
| | - Dan Yu
- National Key Discipline of Pediatrics Key Laboratory of Major Diseases in Children Ministry of Education, Laboratory of Dermatology, Beijing Pediatric Research Institute, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
| | - Mei-Mei Fan
- Department of Immunology and Pathogen Biology, School of Basic Medical Sciences, Hangzhou Normal University, Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Key Laboratory of Inflammation and Immunoregulation of Hangzhou, Hangzhou Normal University, Hangzhou, 311121, China
| | - Xu Zhang
- Robert and Arlene Kogod Center on Aging, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Biochemistry and Molecular Biology, Mayo Clinic, Rochester, MN, 55905, USA
| | - Ze-Yu Jin
- Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, 77030, USA
| | - Christoph Tang
- Sir William Dunn School of Pathology, University of Oxford, Oxford, OX1 3RE, UK.
| | - Xiao-Fen Liu
- Institute of Antibiotics, Huashan Hospital, Fudan University, Key Laboratory of Clinical Pharmacology of Antibiotics, National Health Commission of the People's Republic of China, National Clinical Research Centre for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, 200040, China.
| |
Collapse
|
33
|
Shah ZH, Müller M, Hübner W, Wang TC, Telman D, Huser T, Schenck W. Evaluation of Swin Transformer and knowledge transfer for denoising of super-resolution structured illumination microscopy data. Gigascience 2024; 13:giad109. [PMID: 38217407 PMCID: PMC10787368 DOI: 10.1093/gigascience/giad109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 07/13/2023] [Accepted: 12/05/2023] [Indexed: 01/15/2024] Open
Abstract
BACKGROUND Convolutional neural network (CNN)-based methods have shown excellent performance in denoising and reconstruction of super-resolved structured illumination microscopy (SR-SIM) data, and have therefore been the focus of existing studies. However, the Swin Transformer, an alternative and recently proposed deep learning-based image restoration architecture, has not been fully investigated for denoising SR-SIM images. Nor has it been fully explored how well transfer learning strategies work for denoising SR-SIM images with different noise characteristics and recorded cell structures across these different types of deep learning-based methods. Currently, the scarcity of publicly available SR-SIM datasets limits exploration of the performance and generalization capabilities of deep learning methods. RESULTS In this work, we present SwinT-fairSIM, a novel method based on the Swin Transformer for restoring SR-SIM images with a low signal-to-noise ratio. The experimental results show that SwinT-fairSIM outperforms previous CNN-based denoising methods. Furthermore, as a second contribution, two transfer learning strategies, direct transfer and fine-tuning, were benchmarked in combination with SwinT-fairSIM and CNN-based methods for denoising SR-SIM data. Direct transfer did not prove to be a viable strategy, but fine-tuning produced results comparable to conventional training from scratch while saving computational time and potentially reducing the amount of training data required. As a third contribution, we publish four datasets of raw SIM images and already reconstructed SR-SIM images. These datasets cover two different types of cell structures, tubulin filaments and vesicle structures, with different noise levels available for the tubulin filaments. CONCLUSION The SwinT-fairSIM method is well suited for denoising SR-SIM images. By fine-tuning, already trained models can easily be adapted to different noise characteristics and cell structures. Furthermore, the provided datasets are structured so that the research community can readily use them for research on denoising, super-resolution, and transfer learning strategies.
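The fine-tuning strategy benchmarked here, keeping a pretrained feature extractor frozen and re-fitting only the task-specific head on new data, can be sketched with a toy linear model. Everything below (network shapes, synthetic data, the closed-form least-squares head fit) is invented for illustration; the paper fine-tunes deep denoising networks by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" two-layer model: frozen feature extractor W1, task head W2.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def features(x):
    return np.maximum(x @ W1, 0.0)  # frozen ReLU feature extractor

# Target-domain data for the new task
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=(16, 1))
y = features(X) @ w_true

# Fine-tuning: keep W1 fixed, refit only the head (here in closed form).
W2_ft, *_ = np.linalg.lstsq(features(X), y, rcond=None)

mse_before = float(np.mean((features(X) @ W2 - y) ** 2))
mse_after = float(np.mean((features(X) @ W2_ft - y) ** 2))
```

Because only the small head is re-estimated, adaptation is cheap and needs little target data, which mirrors why fine-tuning matched training from scratch at lower cost in the study.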
Collapse
Affiliation(s)
- Zafran Hussain Shah
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| | - Marcel Müller
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Wolfgang Hübner
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Tung-Cheng Wang
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
- Leica Microsystems CMS GmbH, 68165 Mannheim, Germany
| | - Daniel Telman
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| | - Thomas Huser
- Faculty of Physics, Bielefeld University, 33615 Bielefeld, Germany
| | - Wolfram Schenck
- Faculty of Engineering and Mathematics, Bielefeld University of Applied Sciences and Arts, 33619 Bielefeld, Germany
| |
Collapse
|
34
|
Zhao L, Chi H, Zhong T, Jia Y. Perception-oriented generative adversarial network for retinal fundus image super-resolution. Comput Biol Med 2024; 168:107708. [PMID: 37995535 DOI: 10.1016/j.compbiomed.2023.107708] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 10/07/2023] [Accepted: 11/15/2023] [Indexed: 11/25/2023]
Abstract
Retinal fundus imaging is a crucial diagnostic tool in ophthalmology, enabling the early detection and monitoring of various ocular diseases. However, capturing high-resolution fundus images often presents challenges due to factors such as defocusing and diffraction in the digital imaging process, limited shutter speed, sensor unit density, and random noise in the image sensor or during image transmission. Super-resolution techniques offer a promising solution to overcome these limitations and enhance the visual details in retinal fundus images. Since the retina has rich texture details, super-resolution methods often introduce artifacts into those details and lose fine retinal vessel structures. To improve the perceptual quality of retinal fundus images, a generative adversarial network consisting of a generator and a discriminator is proposed. The generator mainly comprises 23 multi-scale feature extraction blocks, an image segmentation network, and 23 residual-in-residual dense blocks, which are employed to extract features at different scales, acquire the retinal vessel grayscale image, and extract retinal vascular features, respectively. The generator has two branches, responsible for extracting global features and vascular features, respectively; the features from the two branches are fused to better restore the super-resolution image, recovering more details and more accurate fine vessel structures. An improved discriminator incorporating our designed attention modules helps the generator yield clearer super-resolution images. Additionally, an artifact loss function is introduced, enabling more accurate measurement of the disparity between the high-resolution image and the restored image. Experimental results show that images generated by the proposed method have better perceptual quality than those of state-of-the-art image super-resolution methods.
Collapse
Affiliation(s)
- Liquan Zhao
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin, China
| | - Haotian Chi
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin, China
| | - Tie Zhong
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin, China.
| | - Yanfei Jia
- College of Electric Power Engineering, Beihua University, Jilin, China
| |
Collapse
|
35
|
Xypakis E, de Turris V, Gala F, Ruocco G, Leonetti M. Physics-informed deep neural network for image denoising. OPTICS EXPRESS 2023; 31:43838-43849. [PMID: 38178470 DOI: 10.1364/oe.504606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/14/2023] [Indexed: 01/06/2024]
Abstract
Image-enhancement deep neural networks (DNNs) can improve the signal-to-noise ratio or resolution of optically collected visual information. The literature reports a variety of approaches with varying effectiveness. All of these algorithms rely on arbitrary normalization of the data (the pixels' count rate), making their performance strongly affected by dataset- or user-specific data pre-manipulation. We developed a DNN algorithm capable of enhancing image signal-to-noise ratio beyond previous algorithms. Our model stems from the nature of the photon-detection process, which is characterized by inherently Poissonian statistics. Our algorithm is thus driven by the distance between probability functions rather than relying on the count rate alone, producing high-performance results especially on high-dynamic-range images. Moreover, it does not require any arbitrary image renormalization other than the transformation of the camera's count rate into photon numbers.
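A loss driven by photon-count statistics rather than normalized count-rates can be sketched as a Poisson negative log-likelihood, which weights each pixel by its actual shot-noise level. This is a generic illustration of the physics-informed idea, not the authors' exact distance between probability functions:

```python
import numpy as np

def poisson_nll(pred_rate, counts, eps=1e-8):
    """Poisson negative log-likelihood (up to a constant in the counts)
    between a predicted photon rate and observed photon counts. Unlike
    MSE on normalized count-rates, it follows the shot-noise statistics
    of photon detection, so bright and dim pixels are weighted fairly."""
    rate = np.clip(pred_rate, eps, None)
    return float(np.mean(rate - counts * np.log(rate)))

rng = np.random.default_rng(0)
true_rate = np.full(10_000, 5.0)
counts = rng.poisson(true_rate)          # simulated camera photon counts
loss_true = poisson_nll(true_rate, counts)
loss_wrong = poisson_nll(2.0 * true_rate, counts)  # biased prediction
```

On average the loss is minimized by the true underlying rate, so a network trained with it is pushed toward the physically correct photon flux rather than toward whatever scale the normalization imposed.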
Collapse
|
36
|
Hu K, Yang C, Wang Z, Wang J. Compound weighted fusion evaluation and optimization of intelligent tracking algorithm in radar seeker. iScience 2023; 26:108550. [PMID: 38162028 PMCID: PMC10757038 DOI: 10.1016/j.isci.2023.108550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 10/10/2023] [Accepted: 11/20/2023] [Indexed: 01/03/2024] Open
Abstract
This paper designs a hierarchical weighted-fusion evaluation and optimization scheme for the radar seeker neural network (NN) tracking algorithm. A first weighted fusion of closed-loop performance indices is carried out to exclude hardware influence on algorithm evaluation. Then, according to the tracking scenario, each tracking index is divided into different periods, and a single-period score is given by a linear-nonlinear hybrid scoring mechanism. Within a single index, the scores of the different periods are weighted and fused a second time to obtain the overall index score. Finally, a third weighted fusion of the multi-index scores yields the algorithm's comprehensive score. We design parameter-evaluation case sets and repeat the above compound weighting to obtain the case with the highest comprehensive score, which is then used to optimize the algorithm. An experiment using a fuzzy-NN radar seeker verifies the effectiveness of the method.
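The three nested weighting stages can be sketched as repeated normalized weighted averages: period scores fuse into an index score, and index scores fuse into the comprehensive score. The period names, scores, and weights below are invented for illustration:

```python
import numpy as np

def weighted_score(values, weights):
    """One fusion stage: weighted average with normalized weights."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(values, w / w.sum()))

# Stages 1-2: per-period scores fused into one score per tracking index.
period_scores = {"acquisition": [80.0, 90.0], "steady_track": [95.0, 85.0]}
period_weights = {"acquisition": [1.0, 2.0], "steady_track": [3.0, 1.0]}
index_scores = [weighted_score(period_scores[k], period_weights[k])
                for k in period_scores]

# Stage 3: multi-index scores fused into the comprehensive score.
overall = weighted_score(index_scores, [0.4, 0.6])
```

Evaluating each candidate parameter case through the same pipeline and keeping the highest `overall` is the selection step the paper uses for optimization.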
Collapse
Affiliation(s)
- Kaiyu Hu
- 304 Institute, China Aerospace Science and Industry Corporation, Beijing 100074, China
- Beijing Jinghang Institute of Computing and Communication, China Aerospace Science and Industry Corporation, Beijing 100074, China
| | - Chunxia Yang
- 304 Institute, China Aerospace Science and Industry Corporation, Beijing 100074, China
| | - Zhaoyang Wang
- 304 Institute, China Aerospace Science and Industry Corporation, Beijing 100074, China
| | - Jiaming Wang
- 304 Institute, China Aerospace Science and Industry Corporation, Beijing 100074, China
| |
Collapse
|
37
|
Yi C, Zhu L, Sun J, Wang Z, Zhang M, Zhong F, Yan L, Tang J, Huang L, Zhang YH, Li D, Fei P. Video-rate 3D imaging of living cells using Fourier view-channel-depth light field microscopy. Commun Biol 2023; 6:1259. [PMID: 38086994 PMCID: PMC10716377 DOI: 10.1038/s42003-023-05636-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 11/27/2023] [Indexed: 12/18/2023] Open
Abstract
Interrogation of subcellular biological dynamics in a living cell often requires noninvasive imaging of the fragile cell with high spatiotemporal resolution across all three dimensions. This poses significant challenges for modern fluorescence microscopy implementations, because the limited photon budget of a live-cell imaging task forces conventional approaches to compromise among spatial resolution, volumetric imaging speed, and phototoxicity. Here, we incorporate a two-stage view-channel-depth (VCD) deep-learning reconstruction strategy into a Fourier light-field microscope based on a diffractive optical element to realize fast 3D super-resolution reconstructions of intracellular dynamics from single diffraction-limited 2D light-field measurements. This VCD-enabled Fourier light-field imaging approach (F-VCD) achieves video-rate (50 volumes per second) 3D imaging of intracellular dynamics at a high spatiotemporal resolution of ~180 nm × 180 nm × 400 nm, with strong noise resistance: light-field images with a signal-to-noise ratio (SNR) as low as -1.62 dB can still be well reconstructed. With this approach, we demonstrate 4D imaging of intracellular organelle dynamics, e.g., mitochondrial fission and fusion, across ~5,000 observation time points.
Collapse
Affiliation(s)
- Chengqiang Yi
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Lanxin Zhu
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Jiahao Sun
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Zhaofei Wang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Meng Zhang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Fenghe Zhong
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Luxin Yan
- State Education Commission Key Laboratory for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Jiang Tang
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| | - Liang Huang
- Department of Hematology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430030, China
| | - Yu-Hui Zhang
- Britton Chance Center for Biomedical Photonics-MoE Key Laboratory for Biomedical Photonics, Advanced Biomedical Imaging Facility-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Dongyu Li
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Peng Fei
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics-Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
| |
Collapse
|
38
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, has been changing how we perform live imaging. Here we briefly cover important computational methods that aid live imaging by carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time-series analysis. We also cover recent advances in self-driving microscopy.
Collapse
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi, University, 20520 Turku, Finland
| | | | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi, University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520, Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI- 20520 Turku, Finland.
| |
Collapse
|
39
|
Astratov VN, Sahel YB, Eldar YC, Huang L, Ozcan A, Zheludev N, Zhao J, Burns Z, Liu Z, Narimanov E, Goswami N, Popescu G, Pfitzner E, Kukura P, Hsiao YT, Hsieh CL, Abbey B, Diaspro A, LeGratiet A, Bianchini P, Shaked NT, Simon B, Verrier N, Debailleul M, Haeberlé O, Wang S, Liu M, Bai Y, Cheng JX, Kariman BS, Fujita K, Sinvani M, Zalevsky Z, Li X, Huang GJ, Chu SW, Tzang O, Hershkovitz D, Cheshnovsky O, Huttunen MJ, Stanciu SG, Smolyaninova VN, Smolyaninov II, Leonhardt U, Sahebdivan S, Wang Z, Luk’yanchuk B, Wu L, Maslov AV, Jin B, Simovski CR, Perrin S, Montgomery P, Lecler S. Roadmap on Label-Free Super-Resolution Imaging. LASER & PHOTONICS REVIEWS 2023; 17:2200029. [PMID: 38883699 PMCID: PMC11178318 DOI: 10.1002/lpor.202200029] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Indexed: 06/18/2024]
Abstract
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects, without the need for the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and state of the art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of LFSR imaging. The scope of this Roadmap spans from advanced interference detection techniques, where diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability, which are based on understanding resolution as an information-science problem, on novel structured illumination, near-field scanning, and nonlinear optics approaches, and on superlenses designed around nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms and to open the way for a series of articles in this field.
Collapse
Affiliation(s)
- Vasily N. Astratov
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Yair Ben Sahel
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California 90095, USA
- Bioengineering Department, University of California, Los Angeles, California 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, California 90095, USA
- David Geffen School of Medicine, University of California, Los Angeles, California 90095, USA
| | - Nikolay Zheludev
- Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK
- Centre for Disruptive Photonic Technologies, The Photonics Institute, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
| | - Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Material Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
| | - Evgenii Narimanov
- School of Electrical Engineering, and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
| | - Neha Goswami
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Gabriel Popescu
- Quantitative Light Imaging Laboratory, Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, Illinois 61801, USA
| | - Emanuel Pfitzner
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Philipp Kukura
- Department of Chemistry, University of Oxford, Oxford OX1 3QZ, United Kingdom
| | - Yi-Teng Hsiao
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Chia-Lung Hsieh
- Institute of Atomic and Molecular Sciences (IAMS), Academia Sinica, 1 Roosevelt Rd. Sec. 4, Taipei 10617, Taiwan
| | - Brian Abbey
- Australian Research Council Centre of Excellence for Advanced Molecular Imaging, La Trobe University, Melbourne, Victoria, Australia
- Department of Chemistry and Physics, La Trobe Institute for Molecular Science (LIMS), La Trobe University, Melbourne, Victoria, Australia
| | - Alberto Diaspro
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Aymeric LeGratiet
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- Université de Rennes, CNRS, Institut FOTON - UMR 6082, F-22305 Lannion, France
| | - Paolo Bianchini
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Natan T. Shaked
- Tel Aviv University, Faculty of Engineering, Department of Biomedical Engineering, Tel Aviv 6997801, Israel
| | - Bertrand Simon
- LP2N, Institut d’Optique Graduate School, CNRS UMR 5298, Université de Bordeaux, Talence, France
| | - Nicolas Verrier
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | | | - Olivier Haeberlé
- IRIMAS UR UHA 7499, Université de Haute-Alsace, Mulhouse, France
| | - Sheng Wang
- School of Physics and Technology, Wuhan University, China
- Wuhan Institute of Quantum Technology, China
| | - Mengkun Liu
- Department of Physics and Astronomy, Stony Brook University, USA
- National Synchrotron Light Source II, Brookhaven National Laboratory, USA
| | - Yeran Bai
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Ji-Xin Cheng
- Boston University Photonics Center, Boston, MA 02215, USA
| | - Behjat S. Kariman
- Optical Nanoscopy and NIC@IIT, CHT, Istituto Italiano di Tecnologia, Via Enrico Melen 83B, 16152 Genoa, Italy
- DIFILAB, Department of Physics, University of Genoa, Via Dodecaneso 33, 16146 Genoa, Italy
| | - Katsumasa Fujita
- Department of Applied Physics and the Advanced Photonics and Biosensing Open Innovation Laboratory (AIST); and the Transdimensional Life Imaging Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
| | - Moshe Sinvani
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Zeev Zalevsky
- Faculty of Engineering and the Nano-Technology Center, Bar-Ilan University, Ramat Gan, 52900 Israel
| | - Xiangping Li
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, Guangzhou 510632, China
| | - Guan-Jie Huang
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Shi-Wei Chu
- Department of Physics and Molecular Imaging Center, National Taiwan University, Taipei 10617, Taiwan
- Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan
| | - Omer Tzang
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Dror Hershkovitz
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Ori Cheshnovsky
- School of Chemistry, The Sackler Faculty of Exact Sciences, the Center for Light-Matter Interactions, and the Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv 69978, Israel
| | - Mikko J. Huttunen
- Laboratory of Photonics, Physics Unit, Tampere University, FI-33014, Tampere, Finland
| | - Stefan G. Stanciu
- Center for Microscopy – Microanalysis and Information Processing, Politehnica University of Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
| | - Vera N. Smolyaninova
- Department of Physics Astronomy and Geosciences, Towson University, 8000 York Rd., Towson, MD 21252, USA
| | - Igor I. Smolyaninov
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
| | - Ulf Leonhardt
- Weizmann Institute of Science, Rehovot 7610001, Israel
| | - Sahar Sahebdivan
- EMTensor GmbH, TechGate, Donau-City-Strasse 1, 1220 Wien, Austria
| | - Zengbo Wang
- School of Computer Science and Electronic Engineering, Bangor University, Bangor, LL57 1UT, United Kingdom
| | - Boris Luk’yanchuk
- Faculty of Physics, Lomonosov Moscow State University, Moscow 119991, Russia
| | - Limin Wu
- Department of Materials Science and State Key Laboratory of Molecular Engineering of Polymers, Fudan University, Shanghai 200433, China
| | - Alexey V. Maslov
- Department of Radiophysics, University of Nizhny Novgorod, Nizhny Novgorod, 603022, Russia
| | - Boya Jin
- Department of Physics and Optical Science, University of North Carolina at Charlotte, Charlotte, North Carolina 28223-0001, USA
| | - Constantin R. Simovski
- Department of Electronics and Nano-Engineering, Aalto University, FI-00076, Espoo, Finland
- Faculty of Physics and Engineering, ITMO University, 199034, St-Petersburg, Russia
| | - Stephane Perrin
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Paul Montgomery
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| | - Sylvain Lecler
- ICube Research Institute, University of Strasbourg - CNRS - INSA de Strasbourg, 300 Bd. Sébastien Brant, 67412 Illkirch, France
| |
Collapse
|
40
|
Xu L, Kan S, Yu X, Liu Y, Fu Y, Peng Y, Liang Y, Cen Y, Zhu C, Jiang W. Deep learning enables stochastic optical reconstruction microscopy-like superresolution image reconstruction from conventional microscopy. iScience 2023; 26:108145. [PMID: 37867953 PMCID: PMC10587619 DOI: 10.1016/j.isci.2023.108145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 08/05/2023] [Accepted: 10/02/2023] [Indexed: 10/24/2023] Open
Abstract
Despite its remarkable potential for transforming low-resolution images, deep learning faces significant challenges in achieving high-quality superresolution microscopy imaging from wide-field (conventional) microscopy. Here, we present X-Microscopy, a computational tool comprising two deep learning subnets, UR-Net-8 and X-Net, which enables STORM-like superresolution microscopy image reconstruction from wide-field images with input-size flexibility. X-Microscopy was trained using samples of various subcellular structures, including cytoskeletal filaments, dot-like, beehive-like, and nanocluster-like structures, to generate prediction models capable of producing images of comparable quality to STORM-like images. In addition to enabling multicolour superresolution image reconstructions, X-Microscopy also facilitates superresolution image reconstruction from different conventional microscopic systems. The capabilities of X-Microscopy offer promising prospects for making superresolution microscopy accessible to a broader range of users, going beyond the confines of well-equipped laboratories.
Collapse
Affiliation(s)
- Lei Xu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
| | - Shichao Kan
- School of Computer Science and Engineering, Central South University, Changsha, Hunan 410083, China
| | - Xiying Yu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ye Liu
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yuxia Fu
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Yiqiang Peng
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yanhui Liang
- HAMD (Ningbo) Intelligent Medical Technology Co., Ltd, Ningbo 315194, China
| | - Yigang Cen
- Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
| | - Changjun Zhu
- Key Laboratory of Molecular and Cellular Systems Biology, College of Life Sciences, Tianjin Normal University, Tianjin 300387, China
| | - Wei Jiang
- Department of Etiology and Carcinogenesis and State Key Laboratory of Molecular Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| |
Collapse
|
41
|
Luo Z, Zhu G, Xu H, Lin D, Li J, Qu J. Combination of deep learning and 2D CARS figures for identification of amyloid-β plaques. OPTICS EXPRESS 2023; 31:34413-34427. [PMID: 37859198 DOI: 10.1364/oe.500136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 09/18/2023] [Indexed: 10/21/2023]
Abstract
In vivo imaging and accurate identification of amyloid-β (Aβ) plaques are crucial in Alzheimer's disease (AD) research. In this work, we propose to combine coherent anti-Stokes Raman scattering (CARS) microscopy, a powerful detection technology providing Raman spectra and label-free imaging, with deep learning to distinguish Aβ from non-Aβ regions in AD mouse brains in vivo. The 1D CARS spectra are first converted into 2D CARS figures using two different methods: spectral recurrence plot (SRP) and spectral Gramian angular field (SGAF). This provides more learnable information to the network, improving classification precision. We then devise a cross-stage attention network (CSAN) that automatically learns the features of Aβ plaques and non-Aβ regions by taking advantage of computational advances in deep learning. Our algorithm yields higher accuracy, precision, sensitivity, and specificity than both a conventional multivariate statistical analysis method and 1D CARS spectra combined with deep learning, demonstrating its competence in identifying Aβ plaques. Last but not least, the CSAN framework requires no prior information on the imaging modality and may be applicable to other spectroscopic analysis fields.
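The spectral Gramian angular field encoding mentioned above maps a 1D spectrum to a 2D image through pairwise angular sums. A minimal numpy sketch of the standard Gramian angular summation field, on which the SGAF idea is based (the paper's exact variant may differ):

```python
import numpy as np

def gramian_angular_field(spectrum):
    """Encode a 1D spectrum as a 2D Gramian angular (summation) field.
    This is the generic GASF construction, not necessarily the exact
    SGAF variant used in the paper."""
    x = np.asarray(spectrum, dtype=float)
    # Rescale intensities into [-1, 1] so the arccos encoding is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))   # polar-angle encoding
    # G[i, j] = cos(phi_i + phi_j): every pair of spectral points
    # becomes one pixel of the 2D figure fed to the CNN.
    return np.cos(phi[:, None] + phi[None, :])

gaf = gramian_angular_field(np.sin(np.linspace(0, 3, 128)))
print(gaf.shape)  # (128, 128)
```

The resulting symmetric image preserves temporal/spectral ordering along both axes, which is what gives a 2D CNN more structure to learn from than the raw 1D trace.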
Collapse
|
42
|
Yang B, Liu W, Chen X, Chen G, Zhu X. A novel multi-frame wavelet generative adversarial network for scattering reconstruction of structured illumination microscopy. Phys Med Biol 2023; 68:185016. [PMID: 37619594 DOI: 10.1088/1361-6560/acf3cb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 08/24/2023] [Indexed: 08/26/2023]
Abstract
Objective. Structured illumination microscopy (SIM) is widely used in various fields of life science research. In clinical practice, it offers low phototoxicity and fast imaging speed and requires no special fluorescent markers. However, SIM is still affected by the scattering media of biological tissues, resulting in insufficient resolution of the obtained images, which limits the development of the life sciences. A novel multi-frame wavelet generative adversarial network (MWGAN) is proposed to improve the scattering reconstruction capability of SIM. Approach. MWGAN is based on two components derived from the original image. A generative adversarial network constructed with the wavelet transform is trained to reconstruct complex details in the cell structure. A multi-frame adversarial network is used to obtain inter-frame information and exploit the complementary information of the preceding and following frames to improve the quality of the reconstruction. Results. To demonstrate the robustness of MWGAN, multiple low-quality SIM image datasets are tested. Compared with state-of-the-art methods, the proposed method achieves superior performance in both subjective and objective evaluations. Conclusion. MWGAN is effective for improving the clarity of SIM images. Meanwhile, the SIM images reconstructed from multiple frames improve the reconstruction quality of complex regions and allow clearer, dynamic observation of cellular functions.
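As an illustration of the wavelet component, a single-level 2D Haar decomposition splits an image into a low-pass approximation and three detail sub-bands; the paper's exact filter bank is not specified here, so this numpy sketch is only a stand-in:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet decomposition: returns the
    low-pass approximation (LL) and three detail sub-bands (LH, HL, HH),
    each half the size of the input in both dimensions."""
    a = img[0::2, :] + img[1::2, :]   # vertical sums (low-pass rows)
    d = img[0::2, :] - img[1::2, :]   # vertical differences (high-pass rows)
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0   # mean of each 2x2 block
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # toy ramp image
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4)
```

A wavelet-domain generator can then operate on these sub-bands separately, which is one common way to make a network attend to fine detail bands that plain pixel-space losses tend to smooth over.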
Collapse
Affiliation(s)
- Bin Yang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, People's Republic of China
| | - Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, People's Republic of China
| |
Collapse
|
43
|
Li X, Wu Y, Su Y, Rey-Suarez I, Matthaeus C, Updegrove TB, Wei Z, Zhang L, Sasaki H, Li Y, Guo M, Giannini JP, Vishwasrao HD, Chen J, Lee SJJ, Shao L, Liu H, Ramamurthi KS, Taraska JW, Upadhyaya A, La Riviere P, Shroff H. Three-dimensional structured illumination microscopy with enhanced axial resolution. Nat Biotechnol 2023; 41:1307-1319. [PMID: 36702897 PMCID: PMC10497409 DOI: 10.1038/s41587-022-01651-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 12/16/2022] [Indexed: 01/27/2023]
Abstract
The axial resolution of three-dimensional structured illumination microscopy (3D SIM) is limited to ∼300 nm. Here we present two distinct, complementary methods to improve axial resolution in 3D SIM with minimal or no modification to the optical system. We show that placing a mirror directly opposite the sample enables four-beam interference with higher spatial frequency content than 3D SIM illumination, offering near-isotropic imaging with ∼120-nm lateral and 160-nm axial resolution. We also developed a deep learning method achieving ∼120-nm isotropic resolution. This method can be combined with denoising to facilitate volumetric imaging spanning dozens of timepoints. We demonstrate the potential of these advances by imaging a variety of cellular samples, delineating the nanoscale distribution of vimentin and microtubule filaments, observing the relative positions of caveolar coat proteins and lysosomal markers and visualizing cytoskeletal dynamics within T cells in the early stages of immune synapse formation.
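The axial gain from the mirror can be illustrated numerically: adding a counter-propagating reflection turns a single beam's flat axial intensity into a standing wave with intensity period λ/2, i.e., axial structure at twice the optical frequency. A sketch with hypothetical parameters (488-nm excitation in a medium of index n ≈ 1.33; these numbers are ours, not the paper's):

```python
import numpy as np

# Hypothetical parameters: 488 nm excitation in a watery medium.
wavelength_um = 0.488 / 1.33
k = 2 * np.pi / wavelength_um
z = np.linspace(0.0, 4.0, 1 << 12)        # axial coordinate, µm

# One beam alone: constant axial intensity, no axial structure.
I_single = np.abs(np.exp(1j * k * z)) ** 2

# The mirror's counter-propagating reflection creates a standing wave
# whose *intensity* period is λ/2 — finer axial modulation than any
# three-beam 3D SIM pattern, which is what boosts axial resolution.
I_mirror = np.abs(np.exp(1j * k * z) + np.exp(-1j * k * z)) ** 2

period_um = np.pi / k                      # λ/2
print(round(period_um * 1000))             # ≈ 183 (nm)
```

The ~183-nm illumination period is on the scale of the ~160-nm axial resolution reported above; the full four-beam geometry and reconstruction are of course richer than this two-beam cartoon.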
Collapse
Affiliation(s)
- Xuesong Li
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA.
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA.
| | - Yicong Wu
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA.
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA.
| | - Yijun Su
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Ivan Rey-Suarez
- Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA
| | - Claudia Matthaeus
- Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
| | - Taylor B Updegrove
- Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Zhuang Wei
- Section on Biophotonics, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
| | - Lixia Zhang
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Hideki Sasaki
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
| | - Yue Li
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Min Guo
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - John P Giannini
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
| | - Harshad D Vishwasrao
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Jiji Chen
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
| | - Shih-Jong J Lee
- Leica Microsystems, Inc., Deerfield, IL, USA
- SVision, LLC, Bellevue, WA, USA
| | - Lin Shao
- Department of Neuroscience and Department of Cell Biology, Yale University School of Medicine, New Haven, CT, USA
| | - Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Kumaran S Ramamurthi
- Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Justin W Taraska
- Biochemistry and Biophysics Center, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
| | - Arpita Upadhyaya
- Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA
- Department of Physics, University of Maryland, College Park, MD, USA
| | - Patrick La Riviere
- Department of Radiology, University of Chicago, Chicago, IL, USA
- MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA
| | - Hari Shroff
- Laboratory of High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD, USA
- Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD, USA
- MBL Fellows, Marine Biological Laboratory, Woods Hole, MA, USA
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| |
Collapse
|
44
|
Song L, Liu Y, Fan J, Zhou DX. Approximation of smooth functionals using deep ReLU networks. Neural Netw 2023; 166:424-436. [PMID: 37549610 DOI: 10.1016/j.neunet.2023.07.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 04/11/2023] [Accepted: 07/11/2023] [Indexed: 08/09/2023]
Abstract
In recent years, deep neural networks have been employed to approximate nonlinear continuous functionals F defined on L^p([-1,1]^s) for 1 ≤ p ≤ ∞. However, the existing theoretical analysis in the literature is either unsatisfactory due to poor approximation results, or does not apply to the rectified linear unit (ReLU) activation function. This paper investigates the approximation power of functional deep ReLU networks in two settings: F is continuous with restrictions on its modulus of continuity, and F has higher-order Fréchet derivatives. A novel functional network structure is proposed to extract features of the higher-order smoothness harbored by the target functional F. Quantitative rates of approximation in terms of the depth, width, and total number of weights of the neural networks are derived for both settings. We give logarithmic rates when measuring the approximation error on the unit ball of a Hölder space. In addition, we establish nearly polynomial rates (i.e., rates of the form exp(−a(log M)^b) with a > 0 and 0 < b < 1).
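The notion of a functional ReLU network can be made concrete: discretize the input function on a grid and feed the sample vector to an ordinary ReLU MLP. The hand-built two-layer example below (our construction for illustration, not the paper's architecture) represents F(f) = ∫|f(t)| dt exactly on the grid, using |x| = relu(x) + relu(−x) and trapezoidal quadrature weights:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Grid discretization of the input function on [-1, 1] (s grid points).
s = 101
t = np.linspace(-1.0, 1.0, s)
dt = t[1] - t[0]

# Hidden layer computes [relu(f); relu(-f)], so pairs sum to |f(t_j)|.
W1 = np.vstack([np.eye(s), -np.eye(s)])          # shape (2s, s)
q = np.full(s, dt)
q[0] = q[-1] = dt / 2.0                          # trapezoid quadrature
w2 = np.concatenate([q, q])                      # output layer weights

def F_net(f_samples):
    """Two-layer ReLU network acting on the sampled function."""
    return w2 @ relu(W1 @ f_samples)

f = np.sin(np.pi * t)            # ∫_{-1}^{1} |sin(pi t)| dt = 4/pi
print(abs(F_net(f) - 4 / np.pi) < 1e-2)  # True
```

Learning a general smooth functional replaces these hand-set weights with trained ones; the paper's rates quantify how depth, width, and weight count must grow with the target accuracy.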
Collapse
Affiliation(s)
- Linhao Song
- School of Mathematical Science, Beihang University, Beijing, China; School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
| | - Ying Liu
- Laboratory for AI-Powered Financial Technologies, Hong Kong Science Park, Shatin, New Territories, Hong Kong.
| | - Jun Fan
- Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong.
| | - Ding-Xuan Zhou
- School of Mathematics and Statistics, University of Sydney, Sydney, NSW 2006, Australia.
| |
Collapse
|
45
|
Chen R, Peng S, Zhu L, Meng J, Fan X, Feng Z, Zhang H, Qian J. Enhancing Total Optical Throughput of Microscopy with Deep Learning for Intravital Observation. SMALL METHODS 2023; 7:e2300172. [PMID: 37183924 DOI: 10.1002/smtd.202300172] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/17/2023] [Indexed: 05/16/2023]
Abstract
The significance of performing large-depth dynamic microscopic imaging in vivo for life science research cannot be overstated. However, the optical throughput of the microscope limits the available information per unit of time; that is, it is difficult to obtain high spatial and high temporal resolution at once. Here, a method is proposed to construct intravital microscopy with high optical throughput, by making near-infrared-II (NIR-II, 900-1880 nm) wide-field fluorescence microscopy learn from two-photon fluorescence microscopy through a scale-recurrent network. Using this upgraded NIR-II fluorescence microscope, vessels in the opaque brain of a rodent are reconstructed three-dimensionally. Five-fold axial and thirteen-fold lateral resolution improvements are achieved without sacrificing temporal resolution or light utilization. Tiny cerebral vessel dilatations in mice with early acute respiratory failure are also observed with this high-optical-throughput NIR-II microscope at an imaging speed of 30 fps.
Collapse
Affiliation(s)
- Runze Chen
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
| | - Shiyi Peng
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
| | - Liang Zhu
- College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology (ZIINT), Zhejiang University, 310027, Hangzhou, China
| | - Jia Meng
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
| | - Xiaoxiao Fan
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
| | - Zhe Feng
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
- Dr. Li Dak Sum & Yip Yio Chin Center for Stem Cell and Regenerative Medicine, Zhejiang University, 310058, Hangzhou, China
| | - Hequn Zhang
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
| | - Jun Qian
- College of Optical Science and Engineering, State Key Laboratory of Modern Optical Instrumentations, International Research Center for Advanced Photonics, Centre for Optical and Electromagnetic Research, Zhejiang University, 310058, Hangzhou, China
- Dr. Li Dak Sum & Yip Yio Chin Center for Stem Cell and Regenerative Medicine, Zhejiang University, 310058, Hangzhou, China
| |
Collapse
|
46
|
Liao J, Zhang C, Xu X, Zhou L, Yu B, Lin D, Li J, Qu J. Deep-MSIM: Fast Image Reconstruction with Deep Learning in Multifocal Structured Illumination Microscopy. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2300947. [PMID: 37424045 PMCID: PMC10520669 DOI: 10.1002/advs.202300947] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 06/02/2023] [Indexed: 07/11/2023]
Abstract
A fast and precise reconstruction algorithm is desired for multifocal structured illumination microscopy (MSIM) to obtain the super-resolution image. This work proposes a deep convolutional neural network (CNN) that learns a direct mapping from raw MSIM images to the super-resolution image, taking advantage of computational advances in deep learning to accelerate reconstruction. The method is validated on diverse biological structures and on in vivo imaging of zebrafish at a depth of 100 µm. The results show that high-quality super-resolution images can be reconstructed in one-third of the runtime consumed by the conventional MSIM method, without compromising spatial resolution. Last but not least, a fourfold reduction in the number of raw images required for reconstruction is achieved by using the same network architecture, yet with different training data.
Collapse
Affiliation(s)
- Jianhui Liao
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Chenshuang Zhang
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Xiangcong Xu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Liangliang Zhou
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Bin Yu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Danying Lin
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Jia Li
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| | - Junle Qu
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
| |
Collapse
|
47
|
Wei Z, Bai Y, Cheng R, Hu H, Wang P, Zhang W, Zhang G. Improved sparse domain super-resolution reconstruction algorithm based on CMUT. PLoS One 2023; 18:e0290989. [PMID: 37651438 PMCID: PMC10470967 DOI: 10.1371/journal.pone.0290989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Accepted: 08/20/2023] [Indexed: 09/02/2023] Open
Abstract
A novel breast ultrasound tomography system based on a circular array of capacitive micromachined ultrasonic transducers (CMUTs) has broad application prospects. However, the images produced by this system are not suitable as input for the training phase of a super-resolution (SR) reconstruction algorithm. To solve this problem, this paper proposes an improved medical image super-resolution (MeSR) method based on the sparse domain. First, we use the simultaneous algebraic reconstruction technique (SART), which offers high imaging accuracy, to reconstruct the image into a training image in a sparse-domain model. Second, we denoise and enhance the contrast of the SART images to obtain improved detail images before training the dictionary. Then, we use the original detail image as a guide image to further process the improved detail image. A high-precision dictionary is thus obtained and applied to filtered back-projection SR reconstruction during the testing phase. We compared the proposed algorithm with previously reported algorithms on the Shepp-Logan model and on a model based on the CMUT background. The results showed significant improvements in peak signal-to-noise ratio, entropy and average gradient over previously reported algorithms. The experimental results demonstrate that the proposed MeSR method can use noisy reconstructed images as input for the training phase of the SR algorithm and produce excellent visual effects.
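The pipeline builds on the standard SART update, which corrects the current estimate by the back-projected, row- and column-normalised residual. A minimal pure-Python sketch of one SART-style iteration for a small system Ax ≈ b (the paper's actual weighting and relaxation schedule may differ; `relax` is an illustrative parameter):

```python
def sart_step(x, A, b, relax=1.0):
    """One SART-style update for Ax ≈ b.

    The residual of each ray (row) is normalised by its row sum, then
    back-projected; each pixel correction is normalised by its column
    sum, as in the standard SART formulation."""
    m, n = len(A), len(A[0])
    corr = [0.0] * n
    col_sum = [sum(A[i][j] for i in range(m)) or 1.0 for j in range(n)]
    for i in range(m):
        row_sum = sum(A[i]) or 1.0
        # normalised residual of ray i
        r = (b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum
        for j in range(n):
            corr[j] += A[i][j] * r  # back-projection
    return [x[j] + relax * corr[j] / col_sum[j] for j in range(n)]
```

Iterating `sart_step` from a zero image drives the estimate toward a solution of the system; the MeSR method then denoises and dictionary-processes such reconstructions before the SR stage.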
Collapse
Affiliation(s)
- Zhiqing Wei
- School of Mathematics, North University of China, Taiyuan, China
| | - Yanping Bai
- School of Mathematics, North University of China, Taiyuan, China
| | - Rong Cheng
- School of Mathematics, North University of China, Taiyuan, China
| | - Hongping Hu
- School of Mathematics, North University of China, Taiyuan, China
| | - Peng Wang
- School of Mathematics, North University of China, Taiyuan, China
| | - Wendong Zhang
- Key Laboratory of Dynamic Testing Technology, School of Instrument and Electronics, North University of China, Taiyuan, China
| | - Guojun Zhang
- Key Laboratory of Dynamic Testing Technology, School of Instrument and Electronics, North University of China, Taiyuan, China
| |
Collapse
|
48
|
Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z, Chandak P, Liu S, Van Katwyk P, Deac A, Anandkumar A, Bergen K, Gomes CP, Ho S, Kohli P, Lasenby J, Leskovec J, Liu TY, Manrai A, Marks D, Ramsundar B, Song L, Sun J, Tang J, Veličković P, Welling M, Zhang L, Coley CW, Bengio Y, Zitnik M. Scientific discovery in the age of artificial intelligence. Nature 2023; 620:47-60. [PMID: 37532811 DOI: 10.1038/s41586-023-06221-2] [Citation(s) in RCA: 113] [Impact Index Per Article: 113.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 05/16/2023] [Indexed: 08/04/2023]
Abstract
Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches need improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require developing foundational algorithmic approaches that can contribute to scientific understanding or acquire it autonomously, making them critical areas of focus for AI innovation.
Collapse
Affiliation(s)
- Hanchen Wang
- Department of Engineering, University of Cambridge, Cambridge, UK
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Department of Research and Early Development, Genentech Inc, South San Francisco, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Tianfan Fu
- Department of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
| | - Yuanqi Du
- Department of Computer Science, Cornell University, Ithaca, NY, USA
| | - Wenhao Gao
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Kexin Huang
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Ziming Liu
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Payal Chandak
- Harvard-MIT Program in Health Sciences and Technology, Cambridge, MA, USA
| | - Shengchao Liu
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
| | - Peter Van Katwyk
- Department of Earth, Environmental and Planetary Sciences, Brown University, Providence, RI, USA
- Data Science Institute, Brown University, Providence, RI, USA
| | - Andreea Deac
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
| | - Anima Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- NVIDIA, Santa Clara, CA, USA
| | - Karianne Bergen
- Department of Earth, Environmental and Planetary Sciences, Brown University, Providence, RI, USA
- Data Science Institute, Brown University, Providence, RI, USA
| | - Carla P Gomes
- Department of Computer Science, Cornell University, Ithaca, NY, USA
| | - Shirley Ho
- Center for Computational Astrophysics, Flatiron Institute, New York, NY, USA
- Department of Astrophysical Sciences, Princeton University, Princeton, NJ, USA
- Department of Physics, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Physics and Center for Data Science, New York University, New York, NY, USA
| | | | - Joan Lasenby
- Department of Engineering, University of Cambridge, Cambridge, UK
| | - Jure Leskovec
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | | | - Arjun Manrai
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Debora Marks
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
| | | | - Le Song
- BioMap, Beijing, China
- Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Jimeng Sun
- University of Illinois at Urbana-Champaign, Champaign, IL, USA
| | - Jian Tang
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- HEC Montréal, Montreal, Quebec, Canada
- CIFAR AI Chair, Toronto, Ontario, Canada
| | - Petar Veličković
- Google DeepMind, London, UK
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
| | - Max Welling
- University of Amsterdam, Amsterdam, Netherlands
- Microsoft Research Amsterdam, Amsterdam, Netherlands
| | - Linfeng Zhang
- DP Technology, Beijing, China
- AI for Science Institute, Beijing, China
| | - Connor W Coley
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Yoshua Bengio
- Mila - Quebec AI Institute, Montreal, Quebec, Canada
- Université de Montréal, Montreal, Quebec, Canada
| | - Marinka Zitnik
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.
- Broad Institute of MIT and Harvard, Cambridge, MA, USA.
- Harvard Data Science Initiative, Cambridge, MA, USA.
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
49
|
Zhang C, Tian Z, Chen R, Rowan F, Qiu K, Sun Y, Guan JL, Diao J. Advanced imaging techniques for tracking drug dynamics at the subcellular level. Adv Drug Deliv Rev 2023; 199:114978. [PMID: 37385544 PMCID: PMC10527994 DOI: 10.1016/j.addr.2023.114978] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 06/17/2023] [Accepted: 06/26/2023] [Indexed: 07/01/2023]
Abstract
Optical microscopy is an important imaging tool that has effectively advanced the development of modern biomedicine. In recent years, super-resolution microscopy (SRM) has become one of the most popular techniques in the life sciences, especially in the field of live-cell imaging. SRM has been used to solve many problems in basic biological research and has great potential in clinical applications. In particular, using SRM to study drug delivery and kinetics at the subcellular level enables researchers to better study drugs' mechanisms of action and to assess the efficacy of their targets in vivo. The purpose of this paper is to review recent advances in SRM and to highlight some of its applications in assessing subcellular drug dynamics.
Collapse
Affiliation(s)
- Chengying Zhang
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
| | - Zhiqi Tian
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
| | - Rui Chen
- Department of Chemistry, University of Cincinnati, Cincinnati, OH 45221, USA
| | - Fiona Rowan
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
| | - Kangqiang Qiu
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
| | - Yujie Sun
- Department of Chemistry, University of Cincinnati, Cincinnati, OH 45221, USA
| | - Jun-Lin Guan
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
| | - Jiajie Diao
- Department of Cancer Biology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA.
| |
Collapse
|
50
|
Bouchard C, Wiesner T, Deschênes A, Bilodeau A, Turcotte B, Gagné C, Lavoie-Cardinal F. Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition. NAT MACH INTELL 2023; 5:830-844. [PMID: 37615032 PMCID: PMC10442226 DOI: 10.1038/s42256-023-00689-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 06/12/2023] [Indexed: 08/25/2023]
Abstract
Super-resolution fluorescence microscopy methods enable the characterization of nanostructures in living and fixed biological tissues. However, they require the adjustment of multiple imaging parameters while attempting to satisfy conflicting objectives, such as maximizing spatial and temporal resolution while minimizing light exposure. To overcome the limitations imposed by these trade-offs, post-acquisition algorithmic approaches have been proposed for resolution enhancement and image-quality improvement. Here we introduce the task-assisted generative adversarial network (TA-GAN), which incorporates an auxiliary task (for example, segmentation, localization) closely related to the observed biological nanostructure characterization. We evaluate how the TA-GAN improves generative accuracy over unassisted methods, using images acquired with different modalities such as confocal, bright-field, stimulated emission depletion and structured illumination microscopy. The TA-GAN is incorporated directly into the acquisition pipeline of the microscope to predict the nanometric content of the field of view without requiring the acquisition of a super-resolved image. This information is used to automatically select the imaging modality and regions of interest, optimizing the acquisition sequence by reducing light exposure. Data-driven microscopy methods like the TA-GAN will enable the observation of dynamic molecular processes with spatial and temporal resolutions that surpass the limits currently imposed by the trade-offs constraining super-resolution microscopy.
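The distinguishing ingredient of the TA-GAN is the auxiliary-task term added to the generator objective. A minimal sketch, assuming a binary cross-entropy adversarial term and a soft Dice segmentation loss as the auxiliary task (the paper's actual losses, networks and weighting may differ; `task_weight` is a hypothetical hyperparameter):

```python
import math

def bce(pred, target):
    """Binary cross-entropy over flat lists of probabilities."""
    eps = 1e-7
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred_mask, true_mask):
    """Soft Dice loss for the auxiliary segmentation task."""
    inter = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 1.0 - 2.0 * inter / (total + 1e-7)

def generator_loss(disc_on_fake, seg_pred, seg_true, task_weight=1.0):
    """Task-assisted generator objective: the adversarial term (the
    generator wants the discriminator to output 1 on generated images)
    plus a weighted auxiliary-task term computed on the generated image."""
    adv = bce(disc_on_fake, [1.0] * len(disc_on_fake))
    return adv + task_weight * dice_loss(seg_pred, seg_true)
```

The auxiliary term ties the generator's output to a biologically meaningful prediction (e.g. nanostructure segmentation), which is what lets the network guide modality selection and region-of-interest choice during acquisition.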
Collapse
Affiliation(s)
- Catherine Bouchard
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- CERVO Brain Research Center, Quebec City, Quebec, Canada
| | - Theresa Wiesner
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- CERVO Brain Research Center, Quebec City, Quebec, Canada
| | | | - Anthony Bilodeau
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- CERVO Brain Research Center, Quebec City, Quebec, Canada
| | - Benoît Turcotte
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- CERVO Brain Research Center, Quebec City, Quebec, Canada
| | - Christian Gagné
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- Département de génie électrique et de génie informatique, Université Laval, Quebec City, Quebec, Canada
| | - Flavie Lavoie-Cardinal
- Institute Intelligence and Data (IID), Université Laval, Quebec City, Quebec, Canada
- CERVO Brain Research Center, Quebec City, Quebec, Canada
- Département de psychiatrie et de neurosciences, Université Laval, Quebec City, Quebec, Canada
| |
Collapse
|